Google Associate Cloud Engineer Exam Preparation: Questions and Answers Dumps
Google Cloud Platform (GCP) is Google's suite of cloud-computing services. The Google Associate Cloud Engineer (ACE) exam tests a candidate's ability to design, implement, and manage GCP solutions, and its questions cover a wide range of topics, from basic GCP concepts to advanced features. To become a Google Certified Associate Cloud Engineer, you must pass the ACE exam. The quizzes below are designed to help you prepare by testing your knowledge of GCP concepts; after completing them, you should be able to pass the practice exam with ease.
Google Cloud Platform has been a game changer in the tech industry: it lets organizations build and run applications on Google's infrastructure, and it is trusted by many companies because it is reliable, secure, and scalable.
Average Google Cloud Associate Engineer salary: $145,769/yr
As of September 2020, Google Cloud Platform had 24 regions, 73 zones, and over 100 points of presence in 35 countries, and held roughly 4.6% of the cloud market. Google Cloud was a strong No. 3 among cloud providers in 2020, with an $11 billion annual revenue run rate, while still building out its sales scale and industry approach.
An Associate Cloud Engineer deploys applications, monitors operations, and manages enterprise solutions.
The Associate Cloud Engineer exam assesses your ability to: Set up a cloud solution environment, Plan and configure a cloud solution, Deploy and implement a cloud solution, Ensure successful operation of a cloud solution, Configure access and security.
This blog includes the top Google Associate Cloud Engineer exam preparation questions and answers, Google Cloud questions and answers from around the web, the latest Google Cloud news, a Google Cloud Developer cheat sheet, and all Google Cloud services described in 4 words or less.
Below are the top Google Associate Cloud Engineer Exam Questions and Answers Dumps:
Question 1: You are a project owner and need your co-worker to deploy a new version of your application to App Engine. You want to follow Google’s recommended practices. Which IAM roles should you grant your co-worker?
A. Project Editor
B. App Engine Service Admin
C. App Engine Deployer
D. App Engine Code Viewer
Question 2: Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should you do?
A. Link a credit card with a monthly limit equal to your budget.
B. Create a budget alert for 50%, 90%, and 100% of your total monthly budget.
C. In App Engine Settings, set a daily budget at the rate of 1/30 of your monthly budget.
D. In the GCP Console, configure billing export to BigQuery. Create a saved view that queries your total spend.
Question 3: You have a project using BigQuery. You want to list all BigQuery jobs for that project. You want to set this project as the default for the bq command-line tool. What should you do?
A. Use “gcloud config set project” to set the default project.
B. Use “bq config set project” to set the default project.
C. Use “gcloud generate config-url” to generate a URL to the Google Cloud Platform Console to set the default project.
D. Use “bq generate config-url” to generate a URL to the Google Cloud Platform Console to set the default project.
Question 4: Your project has all its Compute Engine resources in the europe-west1 region. You want to set europe-west1 as the default region for gcloud commands. What should you do?
A. Use Cloud Shell instead of the command line interface of your device. Launch Cloud Shell after you navigate to a resource in the europe-west1 region. The europe-west1 region will automatically become the default region.
B. Use “gcloud config set compute/region europe-west1” to set the default region for future gcloud commands.
C. Use “gcloud config set compute/zone europe-west1” to set the default region for future gcloud commands.
D. Create a VPN from on-premises to a subnet in europe-west1, and use that connection when executing gcloud commands.
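For reference, the gcloud config syntax that appears in these options works as sketched below; the region and zone values are only examples, and note that the compute/zone property expects a zone name (such as europe-west1-b), not a region name.

```shell
# Set the default region for Compute Engine commands
gcloud config set compute/region europe-west1

# Zones are set separately and are more specific than regions
gcloud config set compute/zone europe-west1-b

# Verify the active configuration
gcloud config list
```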
Question 5: You developed a new application for App Engine and are ready to deploy it to production. You need to estimate the costs of running your application on Google Cloud Platform as accurately as possible. What should you do?
A. Create a YAML file with the expected usage. Pass this file to the “gcloud app estimate” command to get an accurate estimation.
B. Multiply the costs of your application when it was in development by the number of expected users to get an accurate estimation.
C. Use the pricing calculator for App Engine to get an accurate estimation of the expected charges.
D. Create a ticket with Google Cloud Billing Support to get an accurate estimation.
Question 6: Your company processes high volumes of IoT data that are time-stamped. The total data volume can be several petabytes. The data needs to be written and changed at a high speed. You want to use the most performant storage option for your data. Which product should you use?
A. Cloud Datastore
B. Cloud Storage
C. Cloud Bigtable
Question 7: Your application has a large international audience and runs stateless virtual machines within a managed instance group across multiple locations. One feature of the application lets users upload files and share them with other users. Files must be available for 30 days; after that, they are removed from the system entirely. Which storage solution should you choose?
A. A Cloud Datastore database.
B. A multi-regional Cloud Storage bucket.
C. Persistent SSD on virtual machine instances.
D. A managed instance group of Filestore servers.
Question 8: You have a definition for an instance template that contains a web application. You are asked to deploy the application so that it can scale based on the HTTP traffic it receives. What should you do?
A. Create a VM from the instance template. Create a custom image from the VM’s disk. Export the image to Cloud Storage. Create an HTTP load balancer and add the Cloud Storage bucket as its backend service.
B. Create a VM from the instance template. Create an App Engine application in Automatic Scaling mode that forwards all traffic to the VM.
C. Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer.
D. Create the necessary amount of instances required for peak user traffic based on the instance template. Create an unmanaged instance group and add the instances to that instance group. Configure the instance group as the Backend Service of an HTTP load balancer.
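The managed-instance-group approach with HTTP-based autoscaling described in option C can be sketched roughly as follows; the group, template, and zone names are placeholders, and the thresholds are examples.

```shell
# Create a managed instance group from an existing instance template
gcloud compute instance-groups managed create web-mig \
    --template=web-template --size=2 --zone=europe-west1-b

# Autoscale on HTTP load-balancer serving utilization
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=europe-west1-b \
    --min-num-replicas=2 --max-num-replicas=10 \
    --target-load-balancing-utilization=0.8
```

The group would then be attached as a backend of the HTTP load balancer's backend service.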
Question 9: You are creating a Kubernetes Engine cluster to deploy multiple pods inside the cluster. All container logs must be stored in BigQuery for later analysis. You want to follow Google-recommended practices. Which two approaches can you take?
A. Turn on Stackdriver Logging during the Kubernetes Engine cluster creation.
B. Turn on Stackdriver Monitoring during the Kubernetes Engine cluster creation.
C. Develop a custom add-on that uses Cloud Logging API and BigQuery API. Deploy the add-on to your Kubernetes Engine cluster.
D. Use the Stackdriver Logging export feature to create a sink to Cloud Storage. Create a Cloud Dataflow job that imports log files from Cloud Storage to BigQuery.
E. Use the Stackdriver Logging export feature to create a sink to BigQuery. Specify a filter expression to export log records related to your Kubernetes Engine cluster only.
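A log sink to BigQuery, as in option E, can be created from the command line. A rough sketch, in which the sink name, project, dataset, and filter are placeholders (GKE container logs use the k8s_container resource type in current Cloud Logging):

```shell
# Export only GKE container logs to a BigQuery dataset
gcloud logging sinks create k8s-logs-to-bq \
    bigquery.googleapis.com/projects/my-project/datasets/k8s_logs \
    --log-filter='resource.type="k8s_container"'
```

The command prints a writer identity (a service account), which must then be granted BigQuery Data Editor on the target dataset for the export to flow.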
Question 10: You need to create a new Kubernetes Cluster on Google Cloud Platform that can autoscale the number of worker nodes. What should you do?
A. Create a cluster on Kubernetes Engine and enable autoscaling on Kubernetes Engine.
B. Create a cluster on Kubernetes Engine and enable autoscaling on the instance group of the cluster.
C. Configure a Compute Engine instance as a worker and add it to an unmanaged instance group. Add a load balancer to the instance group and rely on the load balancer to create additional Compute Engine instances when needed.
D. Create Compute Engine instances for the workers and the master, and install Kubernetes. Rely on Kubernetes to create additional Compute Engine instances when needed.
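Node autoscaling on GKE is enabled on the cluster itself rather than on the underlying instance group. A minimal sketch, with placeholder names and example limits:

```shell
# Create a cluster whose node pool scales between 1 and 5 nodes
gcloud container clusters create my-cluster \
    --zone=europe-west1-b \
    --num-nodes=3 \
    --enable-autoscaling --min-nodes=1 --max-nodes=5
```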
Question 11: You have an application server running on Compute Engine in the europe-west1-d zone. You need to ensure high availability and replicate the server to the europe-west2-c zone using the fewest steps possible. What should you do?
A. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west2-c zone. Create a new VM with that disk.
B. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west1-d zone and then move the disk to europe-west2-c. Create a new VM with that disk.
C. Use “gcloud” to copy the disk to the europe-west2-c zone. Create a new VM with that disk.
D. Use “gcloud compute instances move” with parameter “–destination-zone europe-west2-c” to move the instance to the new zone.
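The snapshot-based approach in option A maps to three commands; disk, snapshot, and instance names below are placeholders. Snapshots are a global resource, which is what lets them cross zones.

```shell
# 1. Snapshot the existing disk in the source zone
gcloud compute disks snapshot app-disk \
    --zone=europe-west1-d --snapshot-names=app-snap

# 2. Create a new disk from the snapshot in the target zone
gcloud compute disks create app-disk-2 \
    --source-snapshot=app-snap --zone=europe-west2-c

# 3. Boot a new VM from that disk
gcloud compute instances create app-server-2 \
    --zone=europe-west2-c \
    --disk=name=app-disk-2,boot=yes
```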
Question 12: Your company has a mission-critical application that serves users globally. You need to select a transactional, relational data storage system for this application. Which two products should you consider?
B. Cloud SQL
C. Cloud Spanner
D. Cloud Bigtable
E. Cloud Datastore
Question 13: You have a Kubernetes cluster with 1 node-pool. The cluster receives a lot of traffic and needs to grow. You decide to add a node. What should you do?
A. Use “gcloud container clusters resize” with the desired number of nodes.
B. Use “kubectl container clusters resize” with the desired number of nodes.
C. Edit the managed instance group of the cluster and increase the number of VMs by 1.
D. Edit the managed instance group of the cluster and enable autoscaling.
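For reference, resizing a GKE node pool is a cluster-level gcloud operation (kubectl has no "container clusters" subcommand). Cluster name, node count, and zone below are placeholders:

```shell
gcloud container clusters resize my-cluster \
    --num-nodes=4 --zone=europe-west1-b
```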
Question 14: You created an update for your application on App Engine. You want to deploy the update without impacting your users. You want to be able to roll back as quickly as possible if it fails. What should you do?
A. Delete the current version of your application. Deploy the update using the same version identifier as the deleted version.
B. Notify your users of an upcoming maintenance window. Deploy the update in that maintenance window.
C. Deploy the update as the same version that is currently running.
D. Deploy the update as a new version. Migrate traffic from the current version to the new version.
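Versioned deployment with traffic migration on App Engine, as described in option D, looks roughly like this; the version IDs and service name are examples:

```shell
# Deploy the update as a new version without sending traffic to it
gcloud app deploy --version=v2 --no-promote

# Gradually migrate traffic from the running version to v2
gcloud app services set-traffic default --splits=v2=1 --migrate

# Roll back quickly if needed: point traffic back at the old version
gcloud app services set-traffic default --splits=v1=1
```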
Question 15: You have created a Kubernetes deployment, called Deployment-A, with 3 replicas on your cluster. Another deployment, called Deployment-B, needs access to Deployment-A. You cannot expose Deployment-A outside of the cluster. What should you do?
A. Create a Service of type NodePort for Deployment A and an Ingress Resource for that Service. Have Deployment B use the Ingress IP address.
B. Create a Service of type LoadBalancer for Deployment A. Have Deployment B use the Service IP address.
C. Create a Service of type LoadBalancer for Deployment A and an Ingress Resource for that Service. Have Deployment B use the Ingress IP address.
D. Create a Service of type ClusterIP for Deployment A. Have Deployment B use the Service IP address.
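A Service of type ClusterIP, as in option D, exposes a Deployment only inside the cluster. A quick sketch with kubectl, where the names and ports are placeholders:

```shell
kubectl expose deployment deployment-a \
    --name=deployment-a-svc --type=ClusterIP \
    --port=80 --target-port=8080
```

Deployment B can then reach it via the Service's cluster IP, or through cluster DNS at deployment-a-svc.<namespace>.svc.cluster.local.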
Question 16: You need to estimate the annual cost of running a Bigquery query that is scheduled to run nightly. What should you do?
A. Use “gcloud query –dry_run” to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
B. Use “bq query –dry_run” to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
C. Use “gcloud estimate” to determine the amount billed for a single query. Multiply this amount by 365.
D. Use “bq estimate” to determine the amount billed for a single query. Multiply this amount by 365.
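A dry run with the bq tool reports how many bytes a query would scan without actually running or billing it. A sketch, in which the table and query are placeholders:

```shell
bq query --use_legacy_sql=false --dry_run \
    'SELECT field FROM `my-project.my_dataset.my_table`'
# Reports the number of bytes the query would process
```

Multiplying that byte count by 365 nightly runs and entering the total into the Pricing Calculator yields the annual estimate.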
Question 17: You want to find out who in your organization has Owner access to a project called “my-project”. What should you do?
A. In the Google Cloud Platform Console, go to the IAM page for your organization and apply the filter “Role:Owner”.
B. In the Google Cloud Platform Console, go to the IAM page for your project and apply the filter “Role:Owner”.
C. Use “gcloud iam list-grantable-role –project my-project” from your Terminal.
D. Use “gcloud iam list-grantable-role” from Cloud Shell on the project page.
Question 18: You want to create a new role for your colleagues that will apply to all current and future projects created in your organization. The role should have the permissions of the BigQuery Job User and Cloud Bigtable User roles. You want to follow Google’s recommended practices. How should you create the new role?
A. Use “gcloud iam combine-roles –global” to combine the 2 roles into a new custom role.
B. For one of your projects, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role. Use “gcloud iam promote-role” to promote the role from a project role to an organization role.
C. For all projects, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role.
D. For your organization, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role.
Question 19: You work in a small company where everyone should be able to view all resources of a specific project. You want to grant them access following Google’s recommended practices. What should you do?
A. Create a script that uses “gcloud projects add-iam-policy-binding” for all users’ email addresses and the Project Viewer role.
B. Create a script that uses “gcloud iam roles create” for all users’ email addresses and the Project Viewer role.
C. Create a new Google Group and add all users to the group. Use “gcloud projects add-iam-policy-binding” with the Project Viewer role and Group email address.
D. Create a new Google Group and add all members to the group. Use “gcloud iam roles create” with the Project Viewer role and Group email address.
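Binding a role to a Google Group, as in option C, is a single command; the project ID and group address below are placeholders, and Project Viewer corresponds to roles/viewer:

```shell
gcloud projects add-iam-policy-binding my-project \
    --member="group:all-staff@example.com" \
    --role="roles/viewer"
```

Membership changes are then handled in the group itself rather than by editing IAM policy for each user.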
Question 20: You need to verify the assigned permissions in a custom IAM role. What should you do?
A. Use the GCP Console, IAM section to view the information.
B. Use the “gcloud init” command to view the information.
C. Use the GCP Console, Security section to view the information.
D. Use the GCP Console, API section to view the information.
Question 21: Your coworker created a deployment for your application container. You can see the deployment under Workloads in the console. They’re out for the rest of the week, and your boss needs you to complete the setup by exposing the workload. What’s the easiest way to do that?
A. Create a new Service that points to the existing deployment.
B. Create a new DaemonSet.
C. Create a Global Load Balancer that points to the pod in the deployment.
D. Create a Static IP Address Resource for the Deployment.
Question 22: Your team is working on designing an IoT solution. There are thousands of devices that need to send periodic time series data for processing. Which services should be used to ingest and store the data?
A. Pub/Sub, Datastore
B. Pub/Sub, Dataproc
C. Dataproc, Bigtable
D. Pub/Sub, Bigtable
Question 23: You have an App Engine application running in us-east1. You’ve noticed 90% of your traffic comes from the West Coast. You’d like to change the region. What’s the best way to change the App Engine region?
A. Use the gcloud app region set command and supply the name of the new region.
B. Contact Google Cloud Support and request the change.
C. From the console, under the App Engine page, click edit, and change the region drop-down.
D. Create a new project and create an App Engine instance in us-west2.
Question 24: You’ve uploaded some static web assets to a public storage bucket for the developers. However, they’re not able to see them in the browser due to what they called “CORS errors”. What’s the easiest way to resolve the errors for the developers?
A. Advise the developers to adjust the CORS configuration inside their code.
B. Use the gsutil cors set command to set the CORS configuration on the bucket.
C. Use the gsutil set cors command to set the CORS configuration on the bucket.
D. Use the gsutil set cors command to set the CORS configuration on the object.
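A bucket's CORS configuration is a small JSON document applied with gsutil. A minimal sketch, where the allowed origin and bucket name are placeholders:

```shell
# Write a minimal CORS policy allowing GET requests from one origin
cat > cors.json <<'EOF'
[
  {
    "origin": ["https://example.com"],
    "method": ["GET"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
EOF

# Apply the policy to the bucket
gsutil cors set cors.json gs://my-bucket
```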
Question 25: You’ve uploaded some PDFs to a public bucket. When users browse to the documents, they’re downloaded rather than viewed in the browser. How can we ensure that the PDFs are viewed in the browser?
A. This is a browser setting and not something that can be changed.
B. Use the gsutil set file-type pdf command.
C. Set the Content metadata for the object to “application/pdf”.
D. Set the Content-Type metadata for the object to “application/pdf”.
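Object metadata can be edited in place with gsutil setmeta; the bucket and object names below are placeholders:

```shell
gsutil setmeta -h "Content-Type:application/pdf" gs://my-bucket/report.pdf
```

With the correct Content-Type header, browsers that can render PDFs will display the file inline instead of downloading it.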
Question 26: You’ve been tasked with getting all of your team’s public SSH keys onto all of the instances of a particular project. You’ve collected them all. With the fewest steps possible, what is the simplest way to get the keys deployed?
A. Use the gcloud compute ssh command to upload all the keys.
B. Format all of the keys as needed and then, using the user interface, upload each key one at a time.
C. Add all of the keys into a file that’s formatted according to the requirements. Use the gcloud compute project-info add-metadata command to upload the keys.
D. Add all of the keys into a file that’s formatted according to the requirements. Use the gcloud compute instances add-metadata command to upload the keys to each instance.
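The project-metadata approach in option C expects a file with one entry per line in the form USERNAME:PUBLIC_KEY. A sketch, in which the usernames and key material are placeholders:

```shell
# keys.txt — one entry per line, e.g.:
# alice:ssh-rsa AAAAB3NzaC1yc2E... alice@laptop
# bob:ssh-rsa AAAAB3NzaC1yc2E... bob@laptop

gcloud compute project-info add-metadata \
    --metadata-from-file=ssh-keys=keys.txt
```

Note that this replaces the existing project-wide ssh-keys value, so the file should also include any keys you want to keep.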
Question 27: What must you do before you create an instance with a GPU? (Pick at least 2)
A. You must only select the GPU driver type. The correct base image is selected automatically.
B. You must select which boot disk image you want to use for the instance.
C. Nothing. GPU drivers are automatically included with the boot disk images.
D. You must make sure the selected image has the appropriate GPU driver installed.
Question 28: Which of the following is a valid use case for Flow Logs?
A. Blocking instances from communicating over certain ports.
B. Network forensics.
C. Proxying SSL traffic.
D. Serving as a UDP relay.
Question 29: Which of the following is a valid use case for using a primitive role?
A. When granting permission to a development project or to the development team.
B. When there are more than 10 users.
C. When creating a custom role requires more than 10 permissions.
D. When granting permission to a production project, or to a third-party company.
Question 30: Your security team has been reluctant to move to the cloud because they don’t have the level of network visibility they’re used to. Which feature might help them to gain insights into your Google Cloud network?
C. Flow Logs
D. Firewall rules
Question 31: You’re in charge of setting up a Stackdriver account to monitor 3 separate projects. Which of the following is a Google best practice?
A. Use the existing project with the least resources as the host project for the Stackdriver account.
B. Use the existing project with the most resources as the host project for the Stackdriver account.
C. Create a new, empty project to use as the host project for the Stackdriver account.
D. Use one of the existing projects as the host project for the Stackdriver account.
Question 32: You’re attempting to set up a file-based billing export. Which of the following components are required?
A. A Cloud Storage bucket.
B. A BigQuery dataset.
C. A report prefix.
D. A Budget and at least one alert.
Question 33: You’ve installed the Google Cloud SDK natively on your Mac. You’d like to install the kubectl component via the Google Cloud SDK. Which command would accomplish this?
A. sudo apt-get install kubectl
B. gcloud components install kubectl
C. pip install kubectl
D. brew install kubectl
Question 34: You’re attempting to set the default Compute Engine zone with the Cloud SDK. Which of the following commands would work?
A. gcloud config set compute/zone us-east1-c
B. gcloud set compute\zone us-east1
C. gcloud set compute/zone us-east1
D. gcloud config set compute\zone us-east1
Question 35: You’ve been hired as a Cloud Engineer for a 2-year-old startup company. Recently they’ve had a bit of turnover, and several engineers have left the company to pursue different projects. Shortly after one of them leaves, it is found that a core project seems to have been deleted. What is the most likely cause of the project’s deletion?
A. You’ve been the victim of the latest malware that deletes one project per hour until you pay them to stop.
B. One of the engineers intentionally deleted the project out of spite.
C. The project was created by one of the engineers and not attached to the organization.
D. A failed attempt to pay the bill resulted in Google deleting the project.
Question 36: You’re using Stackdriver to set up some alerts. You want to reuse your existing REST-based notification tools that your ops team has created. You want the setup to be as simple as possible to configure and maintain. Which notification option would be the best option?
A. Use a Slack bot to listen for messages posted by Google.
B. Send it to an email account that is being polled by a custom process that can handle the notification.
C. Send notifications via SMS and use a custom app to forward them to the REST API.
Question 37: A member of the finance team informed you that one of the projects is using the old billing account. What steps should you take to resolve the problem?
A. Submit a support ticket requesting the change.
B. Go to the Billing page, locate the list of projects, find the project in question and select Change billing account. Then select the correct billing account and save.
C. Go to the Project page; expand the Billing tile; select the Billing Account option; select the correct billing account and save.
D. Delete the project and recreate it with the correct billing account.
Question 38: You’re using a self-serve Billing Account to pay for your 2 projects. Your billing threshold is set to $1000.00 and between the two projects you’re spending roughly 50 dollars per day. It has been 18 days since you were last charged. Given the above data, when will you likely be charged next?
A. On the first day of the next month.
B. In 2 days when you’ll hit your billing threshold.
C. On the thirtieth day of the month.
D. In 12 days, making it 30 days since the previous payment.
Question 39: You have 3 Cloud Storage buckets that all store sensitive data. Which grantees should you audit to ensure that these buckets are not public?
Question 40: You’ve been asked to help onboard a new member of the big-data team. They need full access to BigQuery. Which type of role would be the most efficient to set up while following the principle of least privilege?
A. Primitive Role
B. Custom Role
C. Managed Role
D. Predefined Role
Question 41: Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?
A. Create an export to the sink that saves logs from Cloud Audit to BigQuery.
B. Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
C. Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
D. Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.
Question 42: You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?
A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.
B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.
Question 43: You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the application with access to Cloud Storage. What should you do?
A. 1. Use nslookup to get the IP address for storage.googleapis.com. 2. Negotiate with the security team to be able to give a public IP address to the servers. 3. Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud. 2. In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance. 3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine. 2. Create an internal load balancer (ILB) that uses storage.googleapis.com as backend. 3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in Google Cloud. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel. 3. In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
Question 44: You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What should you do?
A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic. 2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run. 2. Create a Cloud Pub/Sub subscription for that topic. 3. Make your application pull messages from that subscription.
C. 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal. 2. Create a Cloud Pub/Sub subscription for that topic. 3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.
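The push-subscription setup in option C maps to three commands; the service, topic, region, project, and service-account names below are all placeholders, as is the Cloud Run URL:

```shell
# 1. Create a service account for Pub/Sub to authenticate as
gcloud iam service-accounts create pubsub-invoker

# 2. Allow that account to invoke the Cloud Run service
gcloud run services add-iam-policy-binding my-app \
    --region=us-central1 --platform=managed \
    --member=serviceAccount:pubsub-invoker@my-project.iam.gserviceaccount.com \
    --role=roles/run.invoker

# 3. Create a push subscription that delivers to the service's URL
gcloud pubsub subscriptions create my-sub --topic=my-topic \
    --push-endpoint=https://my-app-abc123-uc.a.run.app/ \
    --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com
```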
Question 45: You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs. What should you do?
A. Deploy the container on Cloud Run.
B. Deploy the container on Cloud Run on GKE.
C. Deploy the container on App Engine Flexible.
D. Deploy the container on GKE with cluster autoscaling and horizontal pod autoscaling enabled.
Question 46: Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do?
A. Link the acquired company’s projects to your company’s billing account.
B. Configure the acquired company’s billing account and your company’s billing account to export the billing data into the same BigQuery dataset.
C. Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company’s billing account.
D. Create a new GCP organization and a new billing account. Migrate the acquired company’s projects and your company’s projects into the new GCP organization and link the projects to the new billing account.
Question 47: You built an application on Google Cloud that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data.
You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?
A. Add the support team group to the roles/monitoring.viewer role
B. Add the support team group to the roles/spanner.databaseUser role.
C. Add the support team group to the roles/spanner.databaseReader role.
D. Add the support team group to the roles/stackdriver.accounts.viewer role.
Question 48: For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Cloud Logging agent on all the instances. You want to minimize cost. What should you do?
A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs.
B. 1. In Cloud Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Cloud Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery Job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.
Question 49: You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a
DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do?
A. Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
B. Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
C. With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
D. In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.
Question 50: You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?
A. Use service account credentials in your on-premises application.
B. Use gcloud to create a key file for the service account that has appropriate permissions.
C. Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
D. Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.
II- Google Cloud Questions and Answers
Yes, Google App Engine (GAE), a fully managed PaaS, is 100% worth it if:
you want a ready, quick platform to build web applications and mobile backends at cloud scale with a very low startup cost
you want to get rid of the burden of managing and provisioning infrastructure, application security, and scaling
you are fine with having almost no control over the web server and application software such as the database, file storage, and messaging mechanism. You have to live with what GAE offers and choose from the options available. Forget about customization!
you can live with a fixed set of language runtimes like Node.js, Java, Ruby, C#, Go, Python, …
Google App Engine is a PaaS (Platform as a Service) platform that is used to deploy large-scale web and mobile apps. Sites using it include: Disney, Snapchat, YouTube, Accenture, Practo, Samba Tech, Buddy, Kam Bam, Coca-Cola, The New York Times, and Stack…
It is one of the most trusted cloud platforms used by top companies, and we will see many more sites deploying Google App Engine for their web and app hosting.
Why do some people believe that Google Cloud Compute Engine instances are containers like Docker, when in fact they are full virtual machines using the Linux Kernel Virtual Machine (KVM) hypervisor?
Well, I believe it because I met and discussed it with some of the Google engineers responsible for that area. And I am not special in that respect: it’s not a secret. Here’s the missing link: Google runs KVM in a container. To be crystal clear, a container is not an actual Linux construct; there is no Linux system call you can make to create a container. Instead, it is the term we give to the use of Linux primitives like namespaces and cgroups to partition applications into their own Linux-level virtual compute space. So, at the lowest level, Google’s infrastructure schedules containers. To create a virtual machine, Google runs KVM in one of those containers. So the document you link to is absolutely valid *and* KVM runs in a container.
No, but to be honest, I think that’s what their gaming system is for: reverse marketing. They don’t expect it to be a hit, but if they’re almost good enough for gaming, then they’re certainly good enough for me. They’re not aiming for gamers, but everyone else. There is definitely a market for public VDI. I was working on that concept ten years ago, but I didn’t have the resources to pull it off. Back then, watching YouTube videos on the client was not feasible. These days, you could probably kill the whole PC industry if you had the resources. If Google developed something like the JackPC that could connect to Stadia and provide a VM, I would recommend it to my father, but I wouldn’t use it, because I still have a long life to live and I’m not giving it to Google. But if they made it…
Google runs Linux on its hardware (aka “Linux on bare metal”). As part of that Linux image, it has its own Linux container implementation based on cgroups and namespaces. On Google Cloud Platform, it then runs KVM inside a Linux container, and the VMs run on top of KVM. So the hierarchy is VM → KVM → Linux → bare metal.
I would suggest you read this document thoroughly, so that you can see that logging into Compute Engine instances is not that tedious… 🙂 Connecting to instances using advanced methods | Compute Engine Documentation | Google Cloud
Let’s use two variables (although there could be more): ease of administration and constraints of use. App Engine: on your side there is almost no administration. You write code (with somewhat limited possibilities), upload it, and basically have no other major concerns (well, maybe how to lower your bills if your app gets popular). App Engine handles all the rest (storage, scaling, installing programs, etc.). Compute Engine is a virtual machine with a preinstalled OS, and you can do with it whatever you want. That means you have to install all programs yourself, but you are not limited in what you can do with it. Container Engine is another level above Compute Engine, i.e., a cluster of several Compute Engine instances that can be centrally managed. There is also one level between GAE and GCE: …
Which is more cost effective Amazon S3 and EC2 vs Google cloud storage and compute engine for a large scale website?
Both of them have almost the same prices, but they have different types of discounts. For instance, AWS has the “Reserved Instance” discount model for a 1- or 3-year purchase. You pay roughly a third of the period up front, and you get a 30–60% discount depending on the period you choose and the EC2 instance type you have. Google Cloud has a monthly discount model that applies automatically if you use a Compute Engine instance for more than 10 days in a month. If you run the instance for the whole month, you may get a 30% discount without prepaying anything. So both of them offer discounts, but with different payment models. As an alternative, you can check out DigitalOcean for affordable prices.
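As a rough back-of-the-envelope comparison, the two discount models described above can be sketched in Python. Every number here (the $0.10/hour list price, the 40% and 30% discount rates) is a hypothetical placeholder for illustration, not a real AWS or GCP rate:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def reserved_cost(on_demand_hourly, months, discount=0.40):
    """AWS-style Reserved Instance: commit for the whole term,
    pay a discounted rate (discount rate is an assumed example)."""
    return on_demand_hourly * HOURS_PER_MONTH * months * (1 - discount)

def sustained_cost(on_demand_hourly, months, discount=0.30):
    """GCP-style automatic monthly discount: no commitment or
    prepayment (discount rate is an assumed example)."""
    return on_demand_hourly * HOURS_PER_MONTH * months * (1 - discount)

on_demand = 0.10  # assumed $/hour list price, not a real quote
print(f"12 months on-demand:  ${on_demand * HOURS_PER_MONTH * 12:,.2f}")
print(f"12 months reserved:   ${reserved_cost(on_demand, 12):,.2f}")
print(f"12 months sustained:  ${sustained_cost(on_demand, 12):,.2f}")
```

The takeaway matches the answer above: the headline percentages are similar, but the reserved model trades flexibility (an upfront commitment) for a deeper discount, while the sustained model applies automatically month by month.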
They’re three different approaches to running services on virtual machines. App Engine is designed around automatic scaling of services. There are actually two different flavors of App Engine: the “standard environment,” which is a sandbox, and the “flexible environment,” which is a more traditional (though still not traditional!) VM running in a Docker container. Both versions are designed to automatically spawn more instances of your service in response to increases in load, and to isolate you from a lot of hard SRE problems. Compute Engine is just plain old virtual machines. If you want to run a VM with a certain amount of memory and hard drive space, running a given version of Linux, and not have to worry about physical equipment, Compute Engine is for you. …
Which is more cost effective Amazon S3 and EC2 vs Google cloud storage and compute engine for a small scale website?
I do not understand why the question asks about both EC2/Compute Engine and Cloud Storage/S3. Cloud Storage/S3 is used to serve static websites. EC2/Compute Engine is typically used to serve dynamic content (however, it can serve static websites too). I would try to figure out which of these suits your use better. In both cases, however, GCP is cheaper (you also get credits to use it free for one year) – they even have a page where you can calculate how much you would save moving from AWS to GCP → Google Cloud Platform Pricing Calculator | Google Cloud (the only case where I have seen GCP be more expensive is hosting proprietary licensed DBs like MS SQL).
We started offering our Hadoop service on GCE. We ran Hadoop workloads with a root persistent disk (storage over the network) and an additional 500 GB persistent disk. Consistently, we observed that performance was better than on other leading cloud providers where we used the instances’ local disks. A few months back, GCE was offering scratch disks; they decided to replace scratch disks with persistent disks when they went GA. This clearly shows there was enough confidence that persistent disks performed well compared to scratch disks (if that were not the case, Google would not have made this bold move and would have continued offering scratch disks as well, like AWS). This performance must partly be attributed to their networking stack, which is considered the best out there…
Google has been building and using its own private cloud since the start of the company. They have always been known for setting the standard in many industries, and public cloud is what’s happening now. For years, people have wanted to use their cloud technology (Colossus, BigTable, GAE, etc.). Strategically, Google knows that if they focus more on providing and marketing their public cloud based on what they currently use, people who look up to them will see it as the standard, and that’s all good for business. Another reason: with recent acquisitions (for instance, Nest), Google realized that the successful startups they acquire use AWS more than GCP. Telling the existing development teams to migrate to GCP would disrupt the team (just like Microsoft’s acquisition of Minecraft…
Google Cloud Storage. If you know how to deploy Django on GCP, and you know how to specify an alternative backend for Django… close enough?
There is no official date so far. But there is always Azure and AWS.
How do I set up the Google Cloud CDN on my existing Google Compute Engine that is hosting a WordPress website?
I strongly suggest moving your installation to Google App Engine instead. It’s easy, it will lower your maintenance costs, and it will auto-scale when needed. As for the CDN, you can host static files on Google Cloud Storage, which is already served through Google’s CDN behind the scenes. To run WordPress on Google App Engine there are simple tutorials like this: GoogleCloudPlatform/php-docs-samples. I did this setup many times with great success. I also wrote a small tutorial on speeding up your WP installation with Memcache (which comes as a free service in Google App Engine): giona69/wordpress-made-extremely-fast. Good work!
I just want to explain this in a way that a person with no prior knowledge of containers and clusters can understand what Kubernetes is and what it does. First, let’s understand why containers exist.
* Let’s say you want to gift a bicycle to your kid on his birthday. If the bicycle is delivered to you with the parts separated and a manual describing how to attach them, well, you may end up screwing things up.
* Instead, what if the bicycle arrives ready-made, packed in a container, and delivered to your home address, with no manual intervention required? Isn’t that awesome?
* The individual parts of the bicycle are the dependencies of the project, which may work in one place and not another.
* The bicycle company is the developers’ hub, and the client here is the one using our product.
* To solve this…
Indeed, Kubernetes and Docker are two different things that are related to each other. Let’s have a look. After getting used to Docker, you realize that there should be something like a ‘docker run’ command to run many containers across heterogeneous hosts. This is where Kubernetes (or k8s) comes in. It solved many problems that Docker had. Kubernetes is based on Google’s container management system, Borg, and is written in Go. It is a COE (Container Orchestration Environment) for Docker containers. The function of a COE is to make sure the application is launched and running properly. If a container fails, Kubernetes will spin up another container. It provides a complete system for running many containers across multiple hosts. It has an integrated load balancer and uses etcd…
Kubernetes is a vendor-agnostic cluster and container management tool, open-sourced by Google in 2014. It provides a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. Above all, this lowers cloud computing expenses and simplifies operations and architecture. Kubernetes and the Need for Containers: before we explain what Kubernetes does, we need to explain what containers are and why people use them. A container is like a mini virtual machine. It is small, as it does not have device drivers and all the other components of a regular virtual machine. Docker is by far the most popular container runtime, and it runs on Linux. Microsoft has also added containers to Windows, because they have become so popular. …
Despite the little time Kubernetes has been on the market, this tool has become a reference for the management and allocation of service packages (containers) within a cluster. Initially developed by Google, Kubernetes emerged as an open-source alternative to the Borg and Omega systems and was officially launched in 2015. What is Kubernetes? Kubernetes is an open-source tool, also called an orchestrator, used to distribute and organize workloads in the form of containers. Its goal is to maintain the availability and accessibility of existing resources to customers, as well as stability when executing multiple services simultaneously. Through this scheme, Kubernetes makes it possible for numerous servers of different types…
There are countless debates, discussions, and social chatter about Kubernetes and Docker. Nevertheless, Kubernetes and Docker Swarm are not rivals! Both have their own pros and cons and can be used depending on your application requirements. Benefits of Kubernetes:
* Kubernetes is backed by the Cloud Native Computing Foundation (CNCF).
* Kubernetes has an impressively huge community among container orchestration tools: over 50,000 commits and 1,200 contributors.
* Kubernetes is an open-source, modular tool that works with any OS.
* Kubernetes provides easy service organization with pods.
Drawbacks of Kubernetes:
* When doing it yourself, K…
If you already ‘know’ Docker containers, then spin up a Kubernetes system (not as hard as you think – check out installing Minikube), read through the Kubernetes docs, and start trying out some of the capabilities for yourself. Katacoda is a free, browser-based learning platform with a number of ‘scenarios’ that run on a pre-deployed Kubernetes system. Follow this link to Katacoda and then search for “Kubernetes.” Note that you can copy-paste your way through most of the exercises in a minute or two; the learning is on you to read and understand what it is you are pasting. Online resources such as the “Awesome Kubernetes” or “Awesome Docker” lists (you do need some understanding of Docker to work with Kubernetes) will give you a pile of options – free and paid – to go into greater…
When Linux containers appeared at the time of LXC, a lot of people in the IT world saw them as something marvelous: they offered a way of packaging software with all its dependencies and running it on any other Linux machine, much like virtual machines but without the performance losses. But the truth was that they weren’t widely used; they required some plumbing to make them work, and there was no standard way to distribute the images. Then Docker appeared, adding to existing container technologies a workflow for building and sharing images and a common interface to start containers. This popularized these technologies, but they still weren’t widely used for production systems, mainly because it was not so advantageous to have just another packaging system for production. And then…
There is no single way to compare them, because they are mostly different things. That said, I’ll first try to define the need for each of these and link them together. Let’s start at the bottom of the stack. You need infrastructure to run your servers. What could you go with? You could use a VPS provider like DigitalOcean, or use AWS. What if, for some non-technical reason, you can’t use AWS? For instance, suppose a legal compliance rule states that the data I store and the servers I run must be in the same geography as the customers I serve, and AWS does not have a region there. This is where OpenStack comes in. It is a platform to manage your infrastructure. Think of it as an open-source implementation of AWS that you can run on bare-metal data centers. Next, we move up the stack. We want…
Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open-source cluster management system, initially developed by three Google employees during the summer of 2014, that grew exponentially and became the first project donated to the Cloud Native Computing Foundation (CNCF). It is basically an open-source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes you can manage your containerized applications more efficiently. Kubernetes is a HUGE project with a lot of code and functionality. Its primary responsibility is container orchestration: making sure that all the containers that execute various workloads are scheduled…
The basic idea of Kubernetes is to further abstract machines, storage, and networks away from their physical implementation, so it is a single interface to deploy containers to all kinds of clouds, virtual machines, and physical machines. Container Orchestration & Kubernetes: containers are like virtual machines – lightweight, scalable, and isolated. Containers are linked together for setting security policies, limiting resource utilization, etc. If your application infrastructure is similar to the image shared below, then container orchestration is necessary. It might be an Nginx/Apache + PHP/Python/Ruby/Node.js app running on a few containers, communicating with a replicated database. Container orchestration will…
As seen in the following diagram, Kubernetes follows a client-server architecture, wherein a master is installed on one machine and the nodes on separate Linux machines. The key components of the master and nodes are defined in the following section. Kubernetes – Master Machine Components. etcd: stores configuration information that can be used by each of the nodes in the cluster. It is a highly available key-value store that can be distributed among multiple nodes, and it is accessible only through the Kubernetes API server, as it may hold sensitive information. API Server: the Kubernetes API server provides all operations on the cluster using…
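Everything the API server manages is expressed as a declarative object. As an illustration (the pod name and image tag below are made-up example values, not from the original answer), a minimal Pod object could be built and serialized like this before being sent to the API server:

```python
import json

# A minimal Pod object. "nginx-demo" and the image tag are
# hypothetical example values chosen for this sketch.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx-demo", "labels": {"app": "nginx-demo"}},
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx:1.25",
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

# The API server accepts this structure as JSON (kubectl converts
# YAML manifests to JSON before sending them over HTTP).
print(json.dumps(pod, indent=2))
```

Each node’s kubelet then watches the API server for objects like this and makes the local container runtime match the declared spec.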
Kubernetes service discovery finds services through two approaches: 1. Using environment variables that follow the same conventions as those created by Docker links. 2. Using DNS to resolve service names to the service’s IP address. Environment Variables: Kubernetes injects environment variables for each service and each port exposed by the service. This makes it easy to deploy containers that use Docker links to find their dependencies. For example, if we are exposing a RabbitMQ service, we can locate it using the RABBITMQ_SERVICE_SERVICE_HOST and RABBITMQ_SERVICE_SERVICE_PORT variables. Other environment variables are also exposed to support this. The easiest way to find out which environment variables are exposed is…
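The environment-variable convention above can be used directly from application code. A minimal sketch in Python (the `rabbitmq-service` name and the fallback defaults are assumptions for illustration):

```python
import os

def service_endpoint(service_name, default_host="localhost", default_port="5672"):
    """Build a host:port pair from the environment variables Kubernetes
    injects for a Service: <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT,
    where the name is upper-cased and dashes become underscores."""
    prefix = service_name.upper().replace("-", "_")
    host = os.environ.get(f"{prefix}_SERVICE_HOST", default_host)
    port = os.environ.get(f"{prefix}_SERVICE_PORT", default_port)
    return f"{host}:{port}"

# Outside a cluster the variables are unset, so the defaults apply:
print(service_endpoint("rabbitmq-service"))
```

One caveat the DNS approach avoids: these variables are only injected for services that already exist when the pod starts, which is one reason DNS-based discovery is usually preferred.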
Docker is an open-source tool designed to package applications as small containers that run on any machine. With Docker, development and deployment become much easier for developers. Containers are very lightweight, including a minimal OS and your application. In a way, Docker is a bit like a virtual machine, but rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system they’re running on, and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application. Kubernetes: Kubernetes is a powerful system, developed by Google, for managing containerized applications in a clustered environment…
Kubernetes is a container cluster management system. After getting used to Docker, you realize that there should be something like a ‘docker run’ command to run many containers across heterogeneous hosts. This is where Kubernetes comes in. It provides a complete system for running different containers across multiple hosts. Kubernetes is based on Google’s container management system, Borg, and the language used is Go. Basically, Google uses three languages: 1. C/C++ 2. Java 3. Python. C and C++ might be a little tough for new users. Java is less attractive than Go for Kubernetes because of its heavy runtime download. Python is great, but its dynamic typing is challenging for system software. Go is the best choice, as it has a great set of system libraries and fast testing and building too…
Hi there, I believe container orchestration is one of the best features of Kubernetes, and I will tell you why. I am sharing a section of my recently posted article on Level Up; for the complete article, please visit: The Kubernetes Bible for Beginners & Developers – Level Up. So here is my answer: how does Kubernetes solve the problem? After discussing the deployment part of Kubernetes, it is necessary to understand its importance. Container Orchestration & Kubernetes: containers are like virtual machines – lightweight, scalable, and isolated. Containers are linked together for setting security policies, limiting resource utilization, etc. If your application infrastructure is similar to the image shared below, then container orchestration is necessary. It might be Nginx/Apache + PHP/…
Hi, I found this cheat sheet on Kubernetes: the Kubernetes kubectl CLI Cheat Sheet. It collects first-aid commands to configure the CLI, manage a cluster, and gather information from it. In the cheat sheet, you will find out how to: create, group, update, and delete cluster resources; debug Kubernetes pods – a group of one or more containers with shared storage/network and a specification for running the containers; and manage config maps, a primitive to store a pod’s configuration, and secrets, a primitive to store sensitive data such as passwords, keys, and certificates. You will also learn how to use Helm – a package manager to define, install, and upgrade complex Kubernetes apps. Moreover, here you can find Kubernetes training courses – Custom Hands-On IT Training Courses… Plus…
Both Kubernetes and Docker are DevOps tools. Docker was started in 2013 and is developed by Docker, Inc. Kubernetes was introduced as a project at Google in 2014 as a successor to Google’s Borg. Kubernetes can run without Docker, and Docker can run without Kubernetes, but Kubernetes has great benefits when running alongside Docker. What is Kubernetes? Kubernetes is a container management system developed by Google. It is an open-source, portable system for automatic container deployment and management. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In practice, Kubernetes is most commonly used alongside Docker for better control and implementation of containerized applications. Features of Kubernetes: * Automates various manual processes…
Yes and no. Especially for Kubernetes (which is not THAT hard, but has a steep learning curve in the beginning), I doubt that there is any certification that can teach you things you cannot learn for free. You can set up a Kubernetes cluster on DigitalOcean for $20/month, or even on your laptop, to actually try things out. Create a few Helm charts for your pet applications and you’ll have a good working knowledge of Kubernetes. BUT: how can an employer judge your level of knowledge? This is where certifications get interesting. So basically, you are trading money for an increased chance of employment, all other things being equal. Furthermore, at a certain project size, customers require their suppliers to have a certain number of people certified in the relevant technologies – so that they can rest assured…
This is a good question. I would say that Borg and Kubernetes both handle the same kinds of tasks, but Google is promoting Kubernetes now, and it offers good features as well. Most important of all, Kubernetes has an active online community. The members of this community meet online as well as in person, in major cities of the world. The international conference “KubeCon” has proved to be a huge success, and there is also an official Slack group for Kubernetes. Major cloud providers like Google Cloud Platform, AWS, Azure, and DigitalOcean also offer their own support channels. For more details on Kubernetes, please see my articles: https://www.level-up.one/kubernetes-bible-beginners/ How Does The Kubernetes Networking Work? : Part 1 – Level Up How Does The Kubernetes Ne…
Kubernetes is an infrastructure abstraction for managing containers. In Kubernetes there are many terms that conceptualize the execution environment. A pod is the smallest deployable unit in Kubernetes. You can see it as an application that runs one container, or multiple containers that work together. Pods have volumes, memory, and networking requirements. Pods have a unique ID and can die at any minute, so Kubernetes provides a higher-level abstraction called a Service. A Service is a logical set of pods that is permanent in the cluster and offers functionality. Pods are accessible through service names on the cluster network. When a pod dies, Kubernetes automatically runs a new pod for the service (depending on the replica configuration) to keep the service offering its functionality. There are many…
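The behavior described above – a pod dies and Kubernetes starts a replacement to honor the replica configuration – is a reconciliation loop. A toy sketch in Python (the pod names and action tuples are invented for illustration; the real controllers work through the API server, not like this):

```python
def reconcile(desired_replicas, running_pods):
    """Toy model of a Kubernetes controller loop: compare the desired
    state (a replica count) with the observed state (the running pods)
    and return the corrective actions needed to converge."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create replacements.
        return [("create", f"pod-{i}") for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete", name) for name in running_pods[:(-diff)]]
    return []  # observed state already matches desired state

# A pod just died: 3 replicas desired, only 2 observed.
print(reconcile(3, ["pod-a", "pod-b"]))
```

Real controllers run this compare-and-correct cycle continuously, which is why Kubernetes is described as declarative: you state the desired end state, and the loop keeps steering the cluster toward it.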
What companies are planning to or are already using Kubernetes in production? Alexandra Dikusar · July 29, 2019 · Senior Digital Marketing Manager
Kubernetes’ increased adoption is showcased by a number of influential companies that have integrated the technology into their services. Let us take a look at how some of the most successful companies of our time are using Kubernetes. Tinder’s move to Kubernetes: due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. What did they do? Kubernetes – yes, the answer is Kubernetes. Tinder’s engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale, totaling 1,000 nodes, 15,000 pods, and 48,000 running containers. Reddit’s Kubernetes story: Reddit is one of the busiest sites in the world, and Kubernetes forms the core of Reddit’s internal infrastructure. For many years, the Reddit infrastructure team…
Our CTO insisted on choosing Docker Swarm instead of Kubernetes for container orchestration because Docker Swarm is simpler and easier to learn. How do I convince people and explain to them the benefits of Kubernetes?
Here is a way you could convince him. Docker is dead. It’s not technically dead, but in reality it’s a walking zombie. I’ll explain why. AWS is one of the best platforms for infrastructure; there are also GCE and Azure, but AWS is the standard, the most capable of all the cloud platforms. AWS is integrating Kubernetes into its system, and you might ask what the benefits are and why it would do that. Kubernetes is basically a competitor to AWS: it allows you to write infrastructure using YAML files and deploy it on a cluster. The only drawback right now is that you cannot provision servers using Kubernetes, because it sits at a higher level in the abstraction stack – the servers are below it. However, with EKS (Elastic Kubernetes Service), AWS has integrated all sorts of primitives…
If the developer put together a working solution, then keep using it, thank them for the effort, and provide some private coaching on how to get buy-in so things go more smoothly in the future. Startups spawn serious problems that don’t end up on the roadmap as they should, and you’re better off with people taking initiative and fixing them. Otherwise, the stakeholders need to decide on a containerization solution, preferably coming to that conclusion by themselves, or at least believing they did. That’s probably Kubernetes (from Google, which knows how to build and run things) and Docker, where you already have one enthusiastic engineer willing to own the project – although they should be able to provide reasonable arguments for why that’s the best option for containerization and deployment…
Kubernetes is meant to simplify things, and this article is meant to simplify Kubernetes for you! Kubernetes is a powerful open-source system, developed by Google, for managing containerized applications in a clustered environment. It has gained popularity and is becoming the new standard for deploying software in the cloud. Learning Kubernetes is not difficult (if the tutor is good) and it offers great power, though the learning curve is a little steep. So let us learn Kubernetes in a simplified way. The article covers Kubernetes’ basic concepts, its architecture, how it solves problems, etc. What is Kubernetes? Kubernetes is a system used for running and coordinating applications across numerous machines. The system manages the…
Kubernetes and Docker are two different tools used for DevOps. Let me explain each in brief. Kubernetes is an open-source platform used for maintaining and deploying a group of containers. In practice, Kubernetes is most commonly used alongside Docker for better control and implementation of containerized applications. Docker is a tool used to automate the deployment of applications in lightweight containers so that applications can work efficiently in different environments. Features of Docker: multiple containers run on the same hardware; high productivity; isolated applications; quick and easy configuration. Differences between Kubernetes and Docker: 1. In Kubernetes, applications are deployed as a combination of pods, deployments, and services. In Docker, applications are deployed as…
Kubernetes is built in three layers, with each higher layer hiding the complexity found in the layer below: the Application Layer (pods and services), the Kubernetes Layer, and the Infrastructure Layer. Pods are part of the Kubernetes layer. A pod is one or more containers controlled as a single application. It encapsulates application containers, storage resources, a unique network ID, and other configuration on how to run the containers. A pod represents a group of one or more application containers bundled together, and pods are highly scalable. If a pod fails, Kubernetes automatically deploys new replicas of the pod to the cluster. Pods provide two types of shared resources: networking and storage. You can also get a good sense of the content quality by watching Simplilearn’s YouTube videos. Here are some…
Kubernetes, also sometimes called K8s (K – eight characters – S), is an open-source orchestration framework for containerized applications that was born in the Google data centers.
Docker – absolutely learn that first. Docker is a tool designed to make it easier to create, deploy, and run applications using containers. Containers allow a developer to package up an application with all the parts it needs, such as libraries and other dependencies, and deploy it as one package. And then comes the race between orchestration tools. Overview of Kubernetes: Kubernetes is based on years of Google’s experience running workloads at huge scale in production. As the Kubernetes website puts it, “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.” Overview of Docker Swarm: Docker Swarm is Docker’s own container orchestration. It uses the standard Docker API and networking, making it easy to drop into…
A node is the smallest unit of hardware in Kubernetes, also known as a minion. It represents a single machine in the cluster: a physical machine in a data center, or a virtual machine hosted on a cloud provider like Google Cloud Platform. Each node has the services required to run pods and is managed by the master components of the Kubernetes architecture. The services provided by a Kubernetes node include the container runtime (Docker), the kubelet, and kube-proxy. To learn more about nodes in Kubernetes, watch this video on Kubernetes architecture. Hope this helps!
What are some major drawbacks of Docker and Kubernetes? Answered by Stuart Charlton (March 23, 2017), who helps large companies design their container clouds:
(Disclaimer: they're both good technologies with huge opportunities and potential ahead.) Docker is overhyped for its relative youth, and is really a moderate set of wrapper capabilities around the Linux kernel. Operational understanding is scarce and conflicting; it requires a lot of deep street knowledge to use effectively in production. There are lots of subtle performance and reliability challenges with, e.g., networking and storage, and often subtle breaking changes between releases. Installing and operating Kubernetes is not for the faint of heart. It assumes you can "bring your own cluster." The pace of change and improvement on core k8s is astounding (good and bad). Using Kubernetes is relatively white-box, i.e., you really need to know what's going on under the covers to a degree, especially if you're not using GKE.
Used on GCP and physical servers: A Kubernetes cluster is a group of machines that are either on the same network segment or set up to communicate with each other over the network with low latency, all running Kubernetes software. The Kubernetes software runs as a service or daemon on each machine in the cluster, causing the host to act as either a "master" or a "worker" node within the cluster. During cluster setup, the master is created first, and toward the end of the install process a join command is displayed or logged to the system. This command should then be run on each additional node once the base Kubernetes software has been installed there. Some "magic" then takes place and the new node links up with the master node to form a logical cluster. Commands can then be run on the master node to manage the cluster.
I think containers are the delivery model of choice now. They make packaging an application together with its required infrastructure much easier. Tools like Docker provide the containers, but additional software is needed to handle things such as replication, failure handling, and APIs for automating deployment across multiple machines. At the beginning of 2015, clustering platforms such as Kubernetes and Docker Swarm were still highly unstable; we tried to use them and began with Docker Swarm. Amid the news in recent weeks, several businesses have purchased container or micro-service firms to boost their portfolios for what lies ahead, which shows why this is such an important topic right now.
Let's forget all the technical stuff and discuss this in a way that a non-technical person understands. You are the owner of a building with 5 spots where people can enter, and you want 5 security guards guarding those spots. All good so far. Now suppose one of the guards is out of service for 2 hours for personal reasons. As the building owner, it is now your responsibility to find a replacement guard. Would you like to be interrupted from your own tasks to keep track of who is out and whom to replace them with? No, no one would. The solution: go to a third-party vendor who provides 24x7 availability of guards. It is then the vendor's responsibility to maintain that 24x7 availability based on the configuration you set (in this case, the number of guards guarding the building).
While researching for a project, I looked into all of the available books on Kubernetes. Here's a quick roundup. (Feel free to suggest more!)
* Golden Guide to Kubernetes Application Development: for web app developers who just want a short, sharp guide to grok Kubernetes. It's also really great for people pursuing their CKAD certification. (Disclaimer: I wrote this one. Yeah, this is one of those Quora answers, but I hope it's still useful.)
* The Kubernetes Book: probably the most popular and established book on Kubernetes. It's great for new developers trying to learn Kubernetes, and the author is known for his video courses as well.
* Kubernetes: Up and Running: definitely written by the most authoritative authors of any book here. Kelsey Hightower is a Google developer advocate for Kubernetes.
Can I use Kubernetes without having a Docker registry? Answered by John Starmer (July 23, 2019), Director of Education at Kumulus Technologies:
It is indeed possible to use Kubernetes without Docker. The Kubernetes community has long recognized the problem with being tied to Docker's quasi-proprietary (and somewhat arbitrarily developed) container runtime. Early on there was support for an alternative runtime called rkt (pronounced like "rocket"). However, going down the path of creating separate integrations for any and every new container runtime that might get developed would be a lot of work, a bit like reinventing the wheel for each runtime. To break free of the Docker runtime constraint, the CRI (Container Runtime Interface) was introduced, which allows you to use other container runtimes (e.g., containerd, CRI-O, etc.). A CRI plugin is a shim that sits between the Kubernetes kubelet and the container runtime and acts as a universal translator. Read more…
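In practice, switching the runtime mostly means pointing the kubelet at a different CRI socket. As a hedged sketch (written to a local file here for illustration; on a real node a kubeadm-managed flags file lives under /var/lib/kubelet), a containerd-backed kubelet might be configured like this:

```shell
# Illustrative kubeadm-style kubelet flags file selecting containerd's CRI
# socket. The path below is containerd's conventional socket location.
cat > kubeadm-flags.env <<'EOF'
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
```

The kubelet speaks CRI over that socket and never needs to know which runtime is on the other end; that indirection is exactly the "universal translator" role described above.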
How would you explain Kubernetes to a 10-year-old? Answered by Jos Buurman (February 19), who has created and maintained several business-critical programs in various languages:
I'm not sure how to explain Kubernetes to a 10-year-old. But if I'm allowed to broaden the audience to adults who are not technology-savvy, I can offer an example that might resonate inside my company: the analogy of our call center. My company services some 2 million people; we manage their pensions and the necessary administration. Every year we send the latest pension status to the participants, and sure enough, people follow up. Many follow up online via the pension fund websites, yet a significant number call or send an e-mail. We measure the number of outstanding messages as well as the number of unanswered calls (I recall the service level is 80% answered within 10 seconds). These are displayed on monitors so that those who work in the call center can see them.
Assuming a basic understanding of Docker and containers, I'll describe the Kubernetes specifics from a general user's point of view. Kubelet: a process that runs on each node in the cluster. The kubelet talks to the master server, gets a list of containers to run, and then runs, manages, and reports container status back to the master server. Pod: the primary unit of Kubernetes scheduling and management. A Pod is a list of containers that are always run together on one node. The containers in a Pod share an IP address and a network stack, but are otherwise isolated from each other. Container: a Docker container; it has an isolated process space, can expose ports, and can define environment variables and a run command. Read more…
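The shared-IP, shared-network-stack property of a Pod is easiest to see with two containers in one Pod. In this hedged sketch (names and images are illustrative), a sidecar reaches the web server over localhost, something two separate Pods could not do:

```shell
# Two containers in one Pod share a network namespace: the sidecar can
# reach nginx on localhost:80. Names and images are example choices.
cat > shared-net-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-net
spec:
  containers:
  - name: web
    image: nginx:1.25
  - name: sidecar
    image: curlimages/curl:8.8.0
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null; sleep 5; done"]
EOF

# On a real cluster: kubectl apply -f shared-net-pod.yaml
```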
Kubernetes has a strong feature set for microservice architectures. Features like service discovery, automatic failover, rescheduling, and support for overlay networks make it the best choice in dynamic environments with many small, frequently changing applications tied together. If your application needs to start hundreds of containers quickly and will terminate them just as quickly, then Kubernetes is a good option. The converse is that it is not as well suited to more static, highly efficient workloads. Containerization is great for flexibility, but it doesn't come for free: there is a performance penalty for using it, somewhere between a few percent and high single digits, depending on the type of operations. Read more…
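The service-discovery and automatic-failover features mentioned above can be sketched with a Deployment plus a Service. All names, the replica count, and the image here are illustrative assumptions:

```shell
# A Deployment keeps 3 replicas alive (failover/rescheduling), and a
# Service gives them one stable DNS name (service discovery).
cat > web.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
EOF

# On a real cluster: kubectl apply -f web.yaml
# Other Pods in the namespace can then reach http://web:80 by name,
# regardless of which replicas die and get rescheduled underneath.
```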
Google Cloud Latest News
DEVELOPER’S CHEAT SHEET
Created by the Google Developer Relations Team
Maintained at https://4words.dev
COMPUTE
Cloud Run: Serverless for containerized applications
Cloud Functions: Event-driven serverless functions
Compute Engine: VMs, GPUs, TPUs, Disks
Kubernetes Engine (GKE): Managed Kubernetes/containers
App Engine: Managed app platform
Bare Metal Solution: Hardware for specialized workloads
Preemptible VMs: Short-lived compute instances
Shielded VMs: Hardened VMs
Sole-tenant nodes: Dedicated physical servers
STORAGE
Cloud Filestore: Managed NFS server
Cloud Storage: Multi-class multi-region object storage
Persistent Disk: Block storage for VMs
Local SSD: VM locally-attached SSDs
DATABASES
Cloud Bigtable: Petabyte-scale, low-latency, non-relational
Cloud Firestore: Serverless NoSQL document DB
Cloud Memorystore: Managed Redis and Memcached
Cloud Spanner: Horizontally scalable relational DB
Cloud SQL: Managed MySQL, PostgreSQL, SQL Server
DATA AND ANALYTICS
BigQuery: Data warehouse/analytics
BigQuery BI Engine: In-memory analytics engine
BigQuery ML: BigQuery model training/serving
Cloud Composer: Managed workflow orchestration service
Cloud Data Fusion: Graphically manage data pipelines
Cloud Dataflow: Stream/batch data processing
Cloud Dataprep: Visual data wrangling
Cloud Dataproc: Managed Spark and Hadoop
NETWORKING
Carrier Peering: Peer through a carrier
Direct Peering: Peer with GCP
Dedicated Interconnect: Dedicated private network connection
Partner Interconnect: Connect on-prem network to VPC
Cloud Armor: DDoS protection and WAF
Cloud CDN: Content delivery network
Cloud DNS: Programmable DNS serving
Cloud Load Balancing: Multi-region load distribution/balancing
Cloud NAT: Network address translation service
Cloud Router: VPC/on-prem network route exchange (BGP)
Cloud VPN (HA): VPN (Virtual private network connection)
Network Service Tiers: Price vs performance tiering
Network Telemetry: Network telemetry service
Traffic Director: Service mesh traffic management
Google Cloud Service Mesh: Service-aware network management
Virtual Private Cloud: Software defined networking
VPC Service Controls: Security perimeters for API-based services
Network Intelligence Center: Network monitoring and topology
Google Cloud Game Servers: Orchestrate Agones clusters
INTERNET OF THINGS (IOT)
Cloud IoT Core: Manage devices, ingest data
IDENTITY AND SECURITY
Access Transparency: Audit cloud provider access
Binary Authorization: Kubernetes deploy-time security
Cloud Audit Logs: Audit trails for GCP
Cloud Data Loss Prevention API: Classify and redact sensitive data
Cloud HSM: Hardware security module service
Cloud EKM: External keys you control
Cloud IAM: Resource access control
Cloud Identity: Manage users, devices & apps
Cloud Identity-Aware Proxy: Identity-based app access
Cloud KMS: Hosted key management service
Cloud Resource Manager: Cloud project metadata management
Cloud Security Command Center: Security management & data risk platform
Cloud Security Scanner: App engine security scanner
Context-aware Access: End-user attribute-based access control
Event Threat Detection: Scans for suspicious activity
Managed Service for Microsoft Active Directory: Managed Microsoft Active Directory
Secret Manager: Store and manage secrets
Security Key Enforcement: Two-step key verification
Shielded VMs: Hardened VMs
Titan Security Key: Two-factor authentication (2FA) device
VPC Service Controls: VPC data constraints
API PLATFORM AND ECOSYSTEMS
API Analytics: API metrics
API Monetization: Monetize APIs
Apigee API Platform: Develop, secure, monitor APIs
Apigee Hybrid: Manage hybrid/multi-cloud API environments
Apigee Sense: API protection from attacks
Cloud Endpoints: Cloud API gateway
Cloud Healthcare API: Healthcare system GCP interoperability
Developer Portal: API management portal
GCP Marketplace: Partner & open source marketplace
GOOGLE MAPS PLATFORM
Directions API: Get directions between locations
Distance Matrix API: Multi-origin/destination travel times
Geocoding API: Convert address to/from coordinates
Geolocation API: Derive location without GPS
Maps Embed API: Display iframe embedded maps
Maps SDK for Android: Maps for Android apps
Maps SDK for iOS: Maps for iOS apps
Maps Static API: Display static map images
Maps SDK for Unity: Unity SDK for games
Maps URLs: URL scheme for maps
Places API: Rest-based Places features
Places Library, Maps JS API: Places features for web
Places SDK for Android: Places features for Android
Places SDK for iOS: Places feature for iOS
Roads API: Convert coordinates to roads
Street View Static API: Static street view images
Time Zone API: Convert coordinates to timezone
G SUITE (WORKSPACE) PLATFORM
Admin SDK: Manage G Suite resources
AMP for Email: Dynamic interactive email
Apps Script: Extend and automate everything
Calendar API: Create and manage calendars
Classroom API: Provision and manage classrooms
Cloud Search: Unified search for enterprise
Docs API: Create and edit documents
Drive Activity API: Retrieve Google Drive activity
Drive API: Read and write files
Drive Picker: Drive file selection widget
Email Markup: Interactive email using schema.org
G Suite Add-ons: Extend G Suite apps
G Suite Marketplace: Storefront for integrated applications
Gmail API: Enhance Gmail
Hangouts Chat Bots: Conversational bots in chat
People API: Manage user’s Contacts
Sheets API: Read and write spreadsheets
Slides API: Create and edit presentations
Task API: Search, read & update Tasks
Vault API: Manage your organization’s eDiscovery
FIREBASE
Cloud Firestore: Document store and sync
Cloud Functions for Firebase: Event-driven serverless applications
Cloud Storage for Firebase: Object storage and serving
Crashlytics: Crash reporting and analytics
Firebase A/B Testing: Create A/B test experiments
Firebase App Distribution: Trusted tester early access
Firebase Authentication: Drop-in authentication
Firebase Cloud Messaging: Send device notifications
Firebase Dynamic Links: Link to app content
Firebase Extensions: Pre-packaged development solutions
Firebase Hosting: Web hosting with CDN/SSL
Firebase In-App Messaging: Send in-app contextual messages
Firebase Performance Monitoring: App/web performance monitoring
Firebase Predictions: Predict user targeting
Firebase Realtime Database: Real-time data synchronization
Firebase Remote Config: Remotely configure installed apps
Firebase Test Lab: Mobile testing device farm
Google Analytics for Firebase: Mobile app analytics
ML Kit for Firebase: ML APIs for mobile
Google Cloud Home Page: cloud.google.com
Google Cloud Blog: cloud.google.com/blog
Google Cloud Open Source: opensource.google/projects/list/cloud
GCP Medium Publication: medium.com/google-cloud
Apigee Blog: apigee.com/about/blog
Firebase Blog: firebase.googleblog.com
G Suite Developers Blog: gsuite-developers.googleblog.com
G Suite GitHub: github.com/gsuitedevs
G Suite Twitter: twitter.com/gsuitedevs
Google Cloud Certifications: cloud.google.com/certification
Google Cloud System Status: status.cloud.google.com
Google Cloud Training: cloud.google.com/training
Google Developers Blog: developers.googleblog.com
Google Maps Platform Blog: mapsplatform.googleblog.com
Google Open Source Blog: opensource.googleblog.com
Google Security Blog: security.googleblog.com
Kaggle Home Page: www.kaggle.com
Kubernetes Blog: kubernetes.io/blog
Regions and Network Map: cloud.google.com/about/locations
MANAGEMENT TOOLS
Cloud APIs: APIs for cloud services
Cloud Billing API: Programmatically manage GCP billing
Cloud Billing: Billing and cost management tools
Cloud Console: Web-based management console
Cloud Deployment Manager: Templated infrastructure deployment
Cloud Mobile App: iOS/Android GCP manager app
Private Catalog: Internal Solutions Catalog
OPERATIONS
Cloud Debugger: Live production debugging
Error Reporting: App error reporting
Cloud Logging: Centralized logging
Cloud Monitoring: Infrastructure and application monitoring
Cloud Profiler: CPU and heap profiling
Cloud Trace: App performance insights
Transparent SLIs: Monitor GCP services
DEVELOPER TOOLS
Cloud Build: Continuous integration/delivery platform
Cloud Code for IntelliJ: IntelliJ GCP tools
Cloud Code for VS Code: VS Code GCP tools
Cloud Code: Cloud native IDE extensions
Cloud Scheduler: Managed cron job service
Cloud SDK: CLI for GCP
Cloud Shell: Browser-based terminal/CLI
Cloud Source Repositories: Hosted private git repos
Cloud Tasks: Asynchronous task execution
Cloud Tools for Eclipse: Eclipse GCP tools
Cloud Tools for Visual Studio: Visual Studio GCP tools
Container Analysis: Automated security scanning
Container Registry: Private container registry/storage
Artifact Registry: Universal package manager
Gradle App Engine Plugin: Gradle App Engine plugin
Maven App Engine Plugin: Maven App Engine plugin
MIGRATION TO GCP
BigQuery Data Transfer Service: Bulk import analytics data
Cloud Data Transfer: Data migration tools/CLI
Google Transfer Appliance: Rentable data transport box
Migrate for Anthos: Migrate VMs to GKE containers
Migrate for Compute Engine: Compute Engine migration tools
Migrate from Amazon Redshift: Migrate from Redshift to BigQuery
Migrate from Teradata: Migrate from Teradata to BigQuery
Storage Transfer Service: Online/on-premises data transfer
VM Migration: VM migration tools
Cloud Foundation Toolkit: Infrastructure as Code templates
Top-paying Cloud certifications:
- Google Certified Professional Cloud Architect — $175,761/year
- AWS Certified Solutions Architect – Associate — $149,446/year
- Azure/Microsoft Cloud Solution Architect – $141,748/yr
- Google Cloud Associate Engineer – $145,769/yr
- AWS Certified Cloud Practitioner — $131,465/year
- Microsoft Certified: Azure Fundamentals — $126,653/year
- Microsoft Certified: Azure Administrator Associate — $125,993/year
Answer these questions to validate your basic knowledge of GCP:
As a prerequisite, the following questions will help you familiarize yourself with the Google Cloud Platform.
1) What is GCP?
2) What are the benefits of using GCP?
3) How can GCP help my business?
4) What are some of the features of GCP?
5) How is GCP different from other clouds?
6) Why should I use GCP?
7) What are some of GCP’s strengths?
8) How is GCP priced?
9) Is GCP easy to use?
10) Can I use GCP for my personal projects?
11) What services does GCP offer?
12) What can I do with GCP?
13) What languages does GCP support?
14) What platforms does GCP support?
15) Does GCP support hybrid deployments?
16) Does GCP support on-premises deployments?
17) Is there a free tier on GCP?
18) How do I get started with GCP?
Sources: A Twitter List by enoumen