Google Workspace – Docs – Drive – Sheets – Slides – Forms – How To
Top 10 Google Workspace Tips and Tricks
Use keyboard shortcuts: Google Workspace has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo your last action in Google Docs, or “Ctrl + Shift + V” to paste text without formatting.
Collaborate in real-time: With Google Workspace, you can work on documents and spreadsheets with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
Create and edit documents offline: With the Google Docs offline extension, you can create and edit documents even when you don’t have an internet connection. Once you’re back online, your changes will be automatically saved.
Use Google Keep for notes and to-do lists: Google Keep is a simple note-taking app that integrates seamlessly with Google Workspace. You can use it to take notes, create to-do lists, and set reminders.
Use the Explore feature in Google Docs: The Explore feature in Google Docs can help you research and write documents more quickly by suggesting relevant information, images, and citations.
Automate tasks with Google Scripts: Google Scripts is a powerful scripting tool that you can use to automate tasks in Google Workspace. For example, you can use a script to automatically send an email when a new form is submitted or to create a calendar event from a Google Sheets spreadsheet.
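For example, here is a minimal Apps Script sketch of the form-to-email idea; the recipient address is a placeholder, and it assumes the script is attached to a form with an installable “On form submit” trigger pointing at this function:

```javascript
// Runs when the bound Google Form receives a new submission
// (requires an installable "On form submit" trigger for this function).
function onFormSubmit(e) {
  // Collect each question title and the respondent's answer.
  var lines = e.response.getItemResponses().map(function (item) {
    return item.getItem().getTitle() + ': ' + item.getResponse();
  });

  // Placeholder address – replace with whoever should be notified.
  MailApp.sendEmail('you@example.com', 'New form submission', lines.join('\n'));
}
```

The same pattern works for the calendar example: a script bound to a spreadsheet can read rows with SpreadsheetApp and create events with CalendarApp.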
Use Google Forms for surveys and quizzes: Google Forms is a great tool for creating surveys, quizzes, and other forms. You can use it to collect information from people and analyze the results in Google Sheets.
Take advantage of the Google Workspace Marketplace: The Google Workspace Marketplace is a collection of apps and add-ons that can help you customize and enhance your Google Workspace experience. You can find apps for a wide range of tasks, such as creating diagrams, signing documents electronically, and more.
Use Google Slides for presentations: Google Slides is an online presentation tool that can be used to create professional-looking slideshows. You can collaborate with others in real-time, add animations and transitions, and even insert videos.
Use Google Drive for file storage and sharing: Google Drive is the main storage service for all your files, including documents, images, videos, and more. You can share files and folders with others, collaborate in real-time, and access your files from anywhere.
These are some of the most useful tips and tricks for getting the most out of Google Workspace. The apps are constantly updated and many new features are added regularly.
Top 10 Google Drive Tips and Tricks
Use the “Quick Access” feature: Google Drive’s Quick Access feature uses machine learning to predict which files you might need next, and it surfaces them at the top of your Google Drive for easy access.
Take advantage of the offline feature: With the Google Drive Offline extension, you can access and edit your files even when you don’t have an internet connection.
Create shortcuts to frequently used files: you can create a shortcut to a file or folder by right-clicking it and selecting “Add shortcut to Drive” (older versions of Drive called this “Add to My Drive”). This way, you can quickly access it from your Google Drive home screen.
Save webpages to Drive: with the “Save to Google Drive” browser extension, you can capture a screenshot of a webpage (or save the page itself) directly to your Google Drive.
Use the “Suggested Sharing” feature: Google Drive’s “Suggested Sharing” feature uses machine learning to predict which people you might want to share a file with, and it automatically suggests their email addresses to you.
Search for files using specific keywords: You can use advanced search operators to search for files that contain specific keywords or were created by a certain person.
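For example, a few operators you can type into the Drive search box look like this (the email address and dates are placeholders, and operator names can change as Drive evolves; the text after each query is just a description):

```
type:pdf owner:me                        PDFs that you own
title:"project plan"                     files whose name contains the phrase
after:2023-01-01 before:2023-06-30       files modified within a date range
from:alice@example.com                   files shared with you by a specific person
```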
Use Drive for desktop: with Google Drive for desktop (formerly Drive File Stream), you can access all of your Google Drive files directly from your computer’s file explorer and stream them on demand, without having to download everything first.
Use the “Add-ons” feature: You can use Google Drive’s Add-ons feature to add extra functionality to your Google Drive, such as the ability to sign PDFs, send emails directly from Google Drive, and more.
Use the “Activity” feature: Google Drive’s “Activity” feature allows you to see who has accessed a file, when they accessed it, and what changes they made.
Back up folders from your computer: the Google Backup and Sync app (its functionality has since moved into Google Drive for desktop) lets you automatically back up specific folders from your computer to your Google Drive. This way, you can be sure that your important files are always safe and accessible.
These are some of the most useful tips and tricks for getting the most out of Google Drive. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your files.
How do you insert an image into a slide on Google Drive?
To insert an image into a slide on Google Drive, you can use the Google Slides app. Here are the steps:
Open Google Slides in your browser and navigate to the presentation you want to add the image to.
Select the slide you want to add the image to.
Click on the “Insert” menu at the top of the screen.
Select “Image” from the drop-down menu.
Choose the option to “Upload” an image, then select the image you want to insert from your computer.
You can also select “By URL” if you have a link to the image, and paste the link there.
Drag the image around the slide to reposition it or use the handles to resize it.
Once you have the image positioned and sized the way you want, you can add text or other elements to the slide as needed.
Alternatively, you can drag and drop an image directly from your computer onto your slide.
Note that the presentation must be open in edit mode; otherwise you will not be able to insert an image.
How can you rotate an image in Google Drive without having to download it first?
You can rotate an image in Google Drive without downloading it by using the “Preview” feature. To do this, follow these steps:
Open Google Drive and navigate to the folder containing the image you want to rotate.
Click on the image to open it in the “Preview” mode.
Click on the “Tools” button in the top-right corner of the screen.
Click on “Rotate” from the menu that appears.
Select the desired rotation angle.
Click on the “Save” button to save the changes to the image.
Alternatively, if you want to rotate multiple images at once, you can select them, right-click the selected files, and choose the rotate option if it is available.
You can also use many other editing tools within the preview itself to edit images.
Can you create documents directly from Google Drive?
Yes, it is possible to create documents directly from Google Drive. Google Drive is a cloud-based storage service provided by Google that allows users to store, share, and access files from any device. It also includes a suite of productivity tools, including Google Docs, Google Sheets, and Google Slides, that allow users to create, edit, and collaborate on documents, spreadsheets, and presentations, respectively.
To create a new document in Google Drive, you can follow these steps:
Open Google Drive by going to drive.google.com or by opening the Google Drive app on your device.
Click on the “+ New” button on the top left corner of the screen.
Select “Google Docs”, “Google Sheets” or “Google Slides” from the drop-down menu.
A new document will be created and will open in a new tab.
You can also create a new document by right-clicking on the Google Drive window and selecting “New” from the context menu. The new document will be saved to your Google Drive and can be accessed, edited, and shared with others. You can also upload existing documents to Google Drive and convert them to Google Docs, Sheets or Slides format to edit them collaboratively.
Do you need a Google Drive account to view files that are shared with you?
No, you do not need a Google Drive account to view files that are shared with you. If someone shares a file with you on Google Drive, they can give you access to it by sending you a link to the file, or by adding you as a collaborator. When you click on the link, you can view the file in your browser without having to sign in to a Google account.
However, if the file is shared with you as “view-only” and the owner has restricted downloading, printing, and copying, you will only be able to view the file. If you want full access to the file, or want to collaborate on it, you will need to sign in to a Google account or create a new one.
It is also worth noting that the owner may set an expiration date on your access. Additionally, if the person sharing the file has enabled access restrictions, such as only allowing certain people or certain domains to access the file, you may not be able to view it if you do not meet those requirements.
Top 10 Google Docs Tips and Tricks
Use keyboard shortcuts: Google Docs has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo your last action, or “Ctrl + Shift + V” to paste text without formatting.
Collaborate in real-time: With Google Docs, you can work on documents with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
Use the “Explore” feature: The Explore feature in Google Docs can help you research and write documents more quickly by suggesting relevant information, images, and citations.
Use the “Research” feature: in older versions of Google Docs, the “Research” sidebar (since folded into Explore) let you find and insert quotes or information from external sources directly into your document.
Use templates: Google Docs has a wide variety of templates available for different types of documents, such as resumes, letters, and more. These templates can help you get started quickly and ensure a professional look for your document.
Use the “Voice Typing” feature: Google Docs has a built-in “Voice Typing” feature that allows you to dictate text into your document using your voice. This can be a great way to write more quickly, or to transcribe an audio recording.
Use the “Add-ons” feature: You can use Google Docs’ Add-ons feature to add extra functionality to your documents, such as the ability to sign PDFs, create diagrams, and more.
Use the “Commenting” feature: The commenting feature in Google Docs allows you to leave feedback or suggestions directly on a document, making it easy for others to see and respond to your comments.
Use “Suggesting” mode: Google Docs’ equivalent of “Track Changes” allows multiple people to collaborate on a document and see each other’s suggested edits, while keeping the original text intact until the suggestions are accepted.
Use the “Headings” feature: Using headings in Google Docs can help structure and organize your documents, making them more readable and easier to navigate. You can format text as headings, then use the “Table of Contents” feature to create a table of contents for the document based on the headings.
These are some of the most useful tips and tricks for getting the most out of Google Docs. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your writing and editing process.
How do we upload a large file on Google Docs? I am trying to upload a 316 page file but only 63 pages are uploading.
There are a few things you can try to upload a large file on Google Docs:
Use the Google Drive app: the Google Drive app allows you to upload files up to 5 TB in size. You can download it from Google Play or the App Store and then use it to upload your large file.
Zip the file: Compress your file into a .zip or .rar file and then upload it to Google Drive. Once the file is uploaded, you can unzip it and open it in Google Docs.
Convert the file: If the file is in a format that is not compatible with Google Docs, convert it to a compatible format (such as .docx or .pdf) and then upload it.
Split the file: If you are unable to upload the file in one go, you can split it into smaller parts and upload them separately. Once all the parts are uploaded, you can merge them in Google Docs.
Check your internet connection: A weak internet connection can cause issues with uploading large files. Ensure that you are connected to a stable and fast internet connection.
Try using Google Chrome browser: Some users have reported that using Chrome browser instead of other browsers such as Firefox or Safari can help with uploading large files.
It’s worth noting that Google Drive accepts individual files up to 5 TB, but files converted to the Google Docs format have much smaller limits (up to about 1.02 million characters, and up to 50 MB for documents converted from other formats), which is the most likely reason only part of a very long document appears after conversion. In addition, make sure you have enough storage available in your Google Drive account.
Top 10 Google Slides Tips and Tricks
Use keyboard shortcuts: Google Slides has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo your last action, or “Ctrl + Shift + V” to paste text without formatting.
Collaborate in real-time: With Google Slides, you can work on presentations with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
Use the “Explore” feature: The Explore feature in Google Slides can help you research and write your presentation more quickly by suggesting relevant information, images, and citations.
Use templates: Google Slides has a wide variety of templates available for different types of presentations, such as business, education, and more. These templates can help you get started quickly and ensure a professional look for your presentation.
Use the “Add-ons” feature: You can use Google Slides’ Add-ons feature to add extra functionality to your presentations, such as the ability to create charts, diagrams, and more.
Use the “Master” feature: The Master feature in Google Slides allows you to create a template slide, with a specific layout and design, that can be reused across multiple slides in the same presentation, making it easy to maintain consistency.
Use the “Speaker Notes” feature: The “Speaker Notes” feature in Google Slides allows you to write notes for yourself about what you want to say for each slide, which can be helpful when giving a presentation.
Use the “Animations” feature: Google Slides allows you to add animations to elements on your slide, to make your presentation more dynamic and engaging.
Use the “Transitions” feature: The Transitions feature in Google Slides allows you to add effects between slides, such as fade, dissolve, and more, giving your presentation a polished look.
Use the “Presenter View” feature: The “Presenter View” feature in Google Slides allows you to see the current slide, the next slide, your speaker notes, and a timer while presenting, so you can stay on track and keep your audience engaged.
These are some of the most useful tips and tricks for getting the most out of Google Slides. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your presentation-making process.
Top 10 Google Forms Tips and Tricks
Here are ten tips and tricks for using Google Forms:
Use “Go to section based on answer” to create a branching form, where the questions a respondent sees are based on their previous answers.
Use the “Required” option to ensure that respondents complete certain questions before submitting the form.
Use the “Data validation” option to ensure that respondents enter certain types of information, such as a valid email address or a number within a certain range.
Use the “Randomize order of questions” option to randomize the order of questions for each respondent, which can help prevent bias in your data.
Use the “Limit to one response” option to ensure that each respondent can only submit the form once.
Use the “Add collaborators” option to share the form with others and work on it together in real time.
Close the form automatically: Google Forms doesn’t have a built-in scheduler, but an add-on such as formLimiter, or a short Apps Script trigger (see the sketch after this list), can close your form on a specific date and time, or after a certain number of responses have been received.
Use the “Autocomplete” option to make it easier for respondents to enter frequently used or personal information.
Use the “File upload” option to collect files and documents from respondents, such as images or PDFs.
Use the “Create a quiz” option to create a multiple-choice or checkbox quiz, and then use the “Grade” option to automatically grade the quiz and provide feedback to respondents.
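As noted above, closing a form on a schedule is usually done with an add-on or a short Apps Script. Here is a minimal sketch of the script approach, assuming the script is bound to the form; the closing date and time are placeholders:

```javascript
// Run scheduleClose() once; it registers a time-driven trigger that will
// stop the form from accepting responses at the chosen time.
function scheduleClose() {
  ScriptApp.newTrigger('closeForm')
    .timeBased()
    .at(new Date('2025-01-31T18:00:00')) // placeholder closing date/time
    .create();
}

function closeForm() {
  FormApp.getActiveForm().setAcceptingResponses(false);
}
```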
Is there a way to find out the number of respondents in Google Forms without opening each respondent’s response?
Yes, there is a way to find out the number of respondents in Google Forms without opening each respondent’s response. You can view the summary of responses in the “Responses” tab of the Google Form. The summary shows the number of responses received, along with an option to view the responses in a spreadsheet. You can also filter the responses based on various criteria and download them to your computer. Additionally, you can use Google Forms add-ons such as “Form Publisher” or “FormMule”, which send the responses to Google Sheets or Excel, where you can use spreadsheet functions to analyse the data.
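For a quick count without any add-ons, you can also link the form to a response spreadsheet and use a single formula there. The sheet name below is the default one Google creates, and the formula assumes responses start in row 2 with timestamps in column A:

```
=COUNTA('Form Responses 1'!A2:A)
```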
Top 10 Google Sheets Tips and Tricks
Here are ten tips and tricks for using Google Sheets (example formulas follow the list):
Use keyboard shortcuts to quickly navigate and perform common actions, such as ctrl+c to copy, ctrl+v to paste, and ctrl+z to undo.
Use the “=QUERY” function to quickly filter and sort large data sets, similar to a SQL query.
Use the “=IMPORTXML” function to import structured data from websites, such as stock prices or weather data.
Use the “=IMPORTRANGE” function to import data from other sheets, such as data from a master sheet that is shared with multiple team members.
Use the “=IF” function to perform conditional calculations, such as applying sales tax or commission only when a condition is met.
Use the “=SUMIF” and “=COUNTIF” functions to perform mathematical operations based on a certain condition, such as summing all numbers in a range that are greater than a certain value.
Use the “=VLOOKUP” function to look up and retrieve data from elsewhere in your sheet, or combine it with “=IMPORTRANGE” to pull data from other documents.
Use the “=HLOOKUP” function to do a horizontal lookup.
Use the “Data validation” option to ensure that data entered in a certain range of cells meets certain conditions, such as being a whole number or a date within a certain range.
Use the “Conditional formatting” option to format cells based on their contents, such as making all negative numbers red, or highlighting cells that contain a certain keyword.
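To make the functions above concrete, here are example formulas; the ranges, sheet URL, and lookup values are placeholders, and the text after each formula is just a description, not part of the formula:

```
=QUERY(A1:D100, "select A, sum(D) where C = 'West' group by A", 1)                 filter and aggregate, SQL-style
=IMPORTRANGE("https://docs.google.com/spreadsheets/d/SHEET_ID", "Sheet1!A1:C20")   pull data from another spreadsheet
=IF(A2 > 100, A2 * 0.07, 0)                                                        apply 7% tax only when A2 exceeds 100
=SUMIF(B2:B100, ">100")                                                            sum only the values greater than 100
=COUNTIF(B2:B100, ">100")                                                          count the values greater than 100
=VLOOKUP("SKU-123", A2:C100, 3, FALSE)                                             exact-match lookup, return column 3
=HLOOKUP("Q3", A1:F2, 2, FALSE)                                                    horizontal lookup across the first row
```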
How can I import LinkedIn searches into Google Sheets?
There are a few different ways to import LinkedIn searches into Google Sheets:
Use a LinkedIn scraper tool: There are a number of LinkedIn scraper tools available online that can be used to scrape data from LinkedIn searches and export it to Google Sheets. Some popular options include Hunter.io, Skrapp.io, and LeadLeaper.
Use the LinkedIn API: LinkedIn offers an API that allows developers to access data from LinkedIn searches. You can use this API to extract data from LinkedIn searches and import it into Google Sheets using a script or a tool like Import.io.
Use Google Sheets Add-ons: There are several add-ons available for Google Sheets that allow you to import data from LinkedIn searches. Some popular options include Hunter, LinkedIn Sales Navigator, and LinkedIn Lead Gen Forms.
Use a manual copy-paste method: You can also use a manual copy-paste method to import LinkedIn searches into Google Sheets. You can perform a search on LinkedIn, go through the results, and copy-paste the data you want into a Google Sheet.
Please note that some of these methods may require a LinkedIn premium account or may have limitations on the amount of data that can be scraped. Also, some scraping methods may violate LinkedIn terms of service.
Top 10 Google Search Tips and Tricks
What are top 10 Google Search Tips and Tricks that very few people know about?
Use quotation marks to search for an exact phrase: If you want to search for a specific phrase, enclose it in quotation marks. Example: “Google Search Tips and Tricks”
Use the minus sign to exclude specific words: If you want to exclude specific words from your search, put the minus sign (-) directly before the word you want to exclude. Example: Google Search Tips and Tricks -few
Use the site: operator to search within a specific website: If you want to search for something within a specific website, use the site: operator followed by the website’s URL. Example: Google Search Tips site:www.example.com
Use the filetype: operator to find specific file types: If you want to find a specific file type, use the filetype: operator followed by the file extension. Example: Google Search Tips filetype:pdf
Use the related: operator to find related websites: If you want to find websites related to a specific website, use the related: operator followed by the website’s URL. Example: related:www.example.com
Use the define: operator to find definitions: If you want to find the definition of a word, use the define: operator followed by the word. Example: define:Google
Use the link: operator to find websites that link to a specific website: If you want to find websites that link to a specific website, use the link: operator followed by the website’s URL. Example: link:www.example.com
Use the cache: operator to view a website’s cached version: If you want to view a website’s cached version, use the cache: operator followed by the website’s URL. Example: cache:www.example.com
Use the intext: operator to search for specific words within a webpage: If you want to search for specific words within a webpage, use the intext: operator followed by the word. Example: intext:Google Search Tips
Use the inurl: operator to search for specific words within a URL: If you want to search for specific words within a URL, use the inurl: operator followed by the word. Example: inurl:Google Search Tips
These are just a few of the many advanced search techniques that can be used on Google, and can help you find more specific and relevant results. Keep in mind that Google’s search algorithm is constantly evolving so some of the tips may not work as expected, but they’re still worth trying.
What challenges remain in advancing the safety and privacy features of Google Images?
There are several challenges that remain in advancing the safety and privacy features of Google Images:
Identifying and removing inappropriate content: Identifying and removing inappropriate content, such as child sexual abuse material, remains a major challenge for Google Images. Despite the use of machine learning algorithms and human moderators, it can be difficult to accurately identify and remove all inappropriate content.
Protecting personal privacy: Protecting the privacy of individuals whose images appear on Google Images is also a challenge. Google has implemented features such as “SafeSearch” to help users filter out explicit content, but there remains a risk that sensitive personal information could be exposed through reverse image searches.
Dealing with misinformation: Google Images is also facing challenges in dealing with misinformation, as false or misleading information can be spread through images.
Balancing user’s rights with copyright infringement: Balancing the rights of users to access and share information with the rights of copyright holders to protect their work is a challenging issue for Google Images. Google has implemented a copyright removal process, but it can be difficult to effectively enforce copyright infringement on a large scale.
Addressing the issue of deepfakes: with the advent of deepfakes, images can be convincingly manipulated and can be difficult to detect; this is a new challenge for Google Images to address.
Addressing the needs of visually impaired users: making sure that the images in Google Images are accessible to the visually impaired is another important challenge for Google.
Google continues to invest in developing technology and policies to address these challenges and to ensure the safety and privacy of users on its platform. However, given the scale and complexity of these issues, new challenges will likely continue to arise.
AWS Azure Google Cloud Certifications Testimonials and Dumps
Do you want to become a professional DevOps Engineer, a cloud Solutions Architect, a Cloud Engineer, a modern Developer or IT Professional, a versatile Product Manager, or a hip Project Manager? If so, cloud skills and certifications can be just the thing you need to move into the cloud or to level up and advance your career.
85% of hiring managers say cloud certifications make a candidate more attractive.
Build the skills that’ll drive your career into six figures.
In this blog, we share AWS, Azure, and GCP cloud certification testimonials and frequently asked questions and answers.
Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.
Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.
Took 4 TD exams in total. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they covered way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well-written explanations, and their links to the white papers let me know what and where to read.
Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.
Just Passed My CCP exam today (within 2 weeks)
I would like to thank this community for the recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than the TD practice exams’ scenario-based questions – a lot less wordy on the real exam). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate fully.
Resources:
-Stephane’s course on Udemy (I have seen people saying to skip hands-on videos but I found them extremely helpful to understand most of the concepts-so try to not skip those hands-on)
-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)
-Very little to no experience (deployed my group’s app to cloud via Elastic beanstalk in college-had 0 clue at the time about what I was doing-had clear guidelines)
I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3~4 per day) till I was constantly getting over 80% – passed my exam with a 882.
What an adventure. I’d never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀
Passed with two weeks of prep (after work and weekends)
This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.
Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo
These are some good prep exams, they ask the questions in a way that actually make you think about the related AWS Service. With only a few “Bullshit! That was asked in a confusing way” questions that popped up.
Passed AWS CCP. The score was beyond expected
I took the CCP 2 days ago and got the pass notification right after submitting the answers. In about the next 3 hours I got an email from Credly for the badge. This morning I got an official email from AWS congratulating me on passing; the score is much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams, I failed one and passed the rest with about 700-800. But in the real exam, I got 860. The questions in the real exam are kind of less verbose IMO, but I don’t truly agree with some people I see on this sub saying that they are easier. Just a little bit of sharing – now I’ll find something to continue ^^
– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly swift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or those I believed I’d easily forget.
– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.
– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.
– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.
– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*
– One or two free practice exams found by a quick Google search
*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full-length course available for free on YouTube on the freeCodeCamp channel.) The online course provides five practice exams. I did not take any of them.
**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.
Notes
– My study regimen (i.e., an hour to two every day for three weeks) was overkill.
– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.
– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.
What would I do differently?
– Focus on practice tests only. No video lectures.
– Focus on the technologies domain. You can intuit your way through questions in the other domains.
Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.
The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.
It is by far harder than the Developer Associate exam, despite it having a broader scope. The DVA-C02 exam was like doing a speedrun but this felt like finishing off Sigrun on GoW. Ya gotta take your time.
I took the TJ practice exams. It somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.
Passed SAA-C03 – Feedback
Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.
I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this so everything else was new, but the concepts were somewhat familiar considering my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4/6 of the exams with grades in the 70s); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).
Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.
Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.
First off, my background: I have 8 years of development experience and have been using AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.
SAA-C03 Exam Prep
For my exam prep, I bought the Adrian Cantrill video course, and the TutorialsDojo (TD) video course and practice exams. Adrian’s course is just right and highly educational but, like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.
The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.
For the TD practice exams, I took the tests in chronological order and didn’t jump back and forth until I completed them all. I first tried all 7 timed-mode tests, reviewing every wrong answer after each attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I retook any test I failed.
The Actual SAA-C03 Exam
The actual AWS exam is almost the same as the TD tests, where:
All of the questions are scenario-based
There are two (or more) valid solutions in the question, e.g:
Need SSL: options are ACM and a self-signed certificate
Need to store DB credentials: options are SSM Parameter Store and Secrets Manager
The scenarios are long-winded and ask for:
MOST Operationally efficient solution
MOST cost-effective
LEAST amount of overhead
Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps anyone who is struggling.
Another Passed SAA-C03?
Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share my experience and how I earned the certification.
Background:
– graduate with networking background
– working experience in on-premises infrastructure automation, mainly using Ansible, Python, Zabbix, etc.
– cloud experience, short period like 3-6 months with practice
– provisioned cloud application using terraform in azure and aws
Cantrill’s course is in-depth and full of practical knowledge (email aliases, etc.) – check it out to learn more.
The TutorialsDojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guideline/review in the practice exams provides plenty of detail. I did all the other modes before the timed-based ones, after which I averaged 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams, in my opinion.
Udemy course and practice exams: I went through some of them, but I think those practice exams are quite hard compared to TutorialsDojo.
Labs – just get your hands dirty and the knowledge will sink in. My advice is not to just copy and paste the labs, but to really read the description of each parameter in the AWS portal.
Advice:
you need to know some general exam topics like how to:
– S3 private access
– EC2 availability
– Kinesis products, including Firehose, Data Streams, etc.
– IAM
My next targets are AWS SAP and CKA. I’m still searching for suitable material for AWS SAP, but I plan to mainly use the A Cloud Guru sandbox and a home lab to learn the subject, and to practice with Cantrill’s labs on GitHub.
Good luck anyone!
Passed SAA
I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS prior to 5 weeks ago. I got my Cloud Practitioner in a week and SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult – in my opinion, much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.
I only had a couple of questions on ML/AI, so make sure you know the differences between those services. Lots of S3 and EC2 – you really need to know these inside and out.
My company is offering stipends for each certification, so I’m going straight to Developer next.
Recently passed SAA-C03
Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.
I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing – they get you used to the type of questions in the actual exam and help you review missing knowledge. I would not have passed otherwise.
Just a heads-up: there were a few things that popped up that I did not see in the course materials or practice exams:
* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.
* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.
* Pinpoint journey: question about customer replying to SMS sent-out and then storing their feedback.
Not sure if they are graded or Amazon testing out new parts.
Cheers.
Another SAP-C01-Pass
Received my notification this morning that I passed with an 811.
Prep Time: 10 weeks, 2 hrs a day
Materials: Neal Davis videos/practice exams, Jon Bonso practice exams, white papers, misc. YouTube videos, some hands-on
Prof Experience: 4 years AWS using main services as architect
AWS Certs: CCP-SSA-DVA-SAP(now)
Thoughts: The exam was way more familiar to me than the Developer exam. I use very few AWS developer tools and mainly use core AWS services. Neal’s videos were very straightforward, easy to digest, and on point. I was able to watch most of the videos on a plane flight to Vegas.
After the video series I started to hit his section-based exams, main exam, and notes, and followed up with some hands-on. I was getting destroyed on some of the exams early on and had to rewatch and research the topics, writing notes. There is a lot of nuance and fine detail in the topics; you’ll see this when you take the practice exams. These little details matter.
Bonso’s exams were nothing less than awesome, as per usual – the same difficulty and quality as Neal Davis’s. I followed the same routine: section-based exams followed by the final exam. I believe Neal said to aim for 80s on his final exams before sitting for the real one. I’d agree, because that’s where I was a week before the exam (mid 80s). Both Neal’s and Jon’s exams were on par with the real exam’s difficulty, if not a shade more difficult.
The exam itself was very straightforward. In my experience the questions were not overly verbose and were straight to the point compared to the practice exams I took. I was able to quickly narrow down the questions and make a selection. I flagged 8 questions along the way and had 30 minutes to review all my answers. Unlike some people, I didn’t feel like it was a brain melter and actually enjoyed the challenge. Maybe I’m a masochist, who knows.
Advice: Follow Neal’s plan, bone up on weak areas, and be confident. The questions have a pattern based on the domain. Doing the practice exams enough will let you see the pattern, and then research will confirm your suspicions. You can pass this exam!
Passed the certified developer associate this week.
Primary study was Stephane Maarek’s course on Udemy.
I also used the Practice Exams by Stephane Maarek and Abhishek Singh.
I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.
The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.
Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.
I cleared the Developer Associate exam yesterday with a score of 873. Actual exam experience: questions focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the difference between a user pool and an identity pool). I found 3 questions just on Redis vs. Memcached (so you may want to focus here as well and know the exact use cases and differences). Other topics were CloudFormation, Elastic Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me; some questions were one-liners and some were too long.
Resources: The main resource I used was Udemy – Stéphane Maarek’s course and the practice exams from Neal Davis and Stéphane Maarek. These exams proved really good and helped me focus on the areas I lacked, and they are at the level of the actual exam; I found 3-4 of the exact same questions on the actual exam (this might just be luck!). So I feel Stephane’s course is more than sufficient and you can trust it. I had already achieved the Solutions Architect Associate, so I knew the basics and took around 2 weeks for preparation, revising Stephane’s course as much as possible. In parallel I took the practice exams mentioned above, which guided me on where to focus more.
Thanks to all of you and feel free to comment/DM me, if you think I can help you in anyway for achieving the same.
Another Passed Associate Developer Exam (DVA-C01)
Having already passed the Associate Architect exam (SAA-C03) 3 months ago, I was much more relaxed going into this exam. I did the exam with Pearson Vue at home with no problems. I used Adrian Cantrill for the course together with the TD exams.
Studied for 2 weeks, 1-2 hours a day, since there is a big overlap with the Associate Architect course, even though the exam has a different approach, more focused on the serverless side of AWS. Lots of DynamoDB, Lambda, API Gateway, KMS, CloudFormation, SAM, SSO, Cognito (User Pool and Identity Pool), and IAM role/credentials best practices.
I do think in terms of difficulty it was a bit easier than the Associate Architect, though maybe that’s just in my mind because it was my second exam and I went in a bit more relaxed.
Next step is going for the SysOps Administrator Associate. I will use the Adrian Cantrill and Stephane Maarek courses, as it is said to be the most difficult associate exam.
Passed the SCS-C01 Security Specialty
A mixture of Tutorials Dojo practice exams, the A Cloud Guru course, and Neal Davis’s course & exams helped a lot. Some unexpected questions caught me off guard, but with educated guessing and the material I studied I was able to overcome them. It’s important to understand:
KMS Keys
AWS Owned Keys
AWS Managed KMS keys
Customer Managed Keys
asymmetrical
symmetrical
Imported key material
What services can use AWS Managed Keys
KMS Rotation Policies
The rotation that can be applied depends on the type of key (if rotation is possible at all)
Key Policies
Grants (temporary access)
Cross-account grants
Permanent policies
How permissions are distributed depending on the assigned principal
IAM Policy format
Principals (supported principals)
Conditions
Actions
Allow to a service (ARN or public AWS URL)
Roles
Secrets Management
Credential Rotation
Secure String types
Parameter Store
AWS Secrets Manager
Route 53
DNSSEC
DNS Logging
Network
AWS Network Firewall
AWS WAF (some questions try to trick you into thinking AWS Shield is needed instead)
AWS Shield
Security Groups (Stateful)
NACL (Stateless)
Ephemeral Ports
VPC FlowLogs
AWS Config
Rules
Remediation (custom or AWS managed)
AWS CloudTrail
AWS Organization Trails
Multi-Region Trails
Centralized S3 Bucket for multi-account log aggregation
AWS GuardDuty vs AWS Macie vs AWS Inspector vs AWS Detective vs AWS Security Hub
It gets more in depth, I’m willing to help anyone out that has questions. If you don’t mind joining my Discord to discuss amongst others to help each other out will be great. A study group community. Thanks. I had to repost because of a typo 🙁
Exam guide book by Kam Agahian and a group of authors – this just got released and has all you need in a concise manual. It also includes 3 practice exams; it’s a must-buy for future reference and covers ALL current exam topics, including container networking, SD-WAN, etc.
Stephane Maarek’s Udemy course – it is mostly up-to-date with the main exam topics including TGW, network firewall etc. To the point lectures with lots of hands-on demos which gives you just what you need, highly recommended as well!
Tutorials Dojo practice tests to drive it home – these helped me get an idea of the question wording, so I could train myself to read fast, pick out key words, compare similar answers, and build confidence in my knowledge.
Crammed daily for 4 weeks (after work, I have a full time job + family) and went in and nailed it. I do have networking background (15+ years) and I am currently working as a cloud security engineer and I’m working with AWS daily, especially EKS, TGW, GWLB etc.
For those not from a networking background – it would definitely take longer to prep.
What an exciting journey. I think AZ-900 is the hardest probably because it is my first Microsoft certification. Afterwards, the others are fair enough. AI-900 is the easiest.
I generally used Microsoft Virtual Training Day, Cloud Ready Skills, Measureup and John Savill’s videos. Having built a fundamental knowledge of the Cloud, I am planning to do AWS CCP next. Wish me luck!
Passed Azure Fundamentals
Learning Material
Hi all,
I passed my Azure fundamentals exam a couple of days ago, with a score of 900/1000. Been meaning to take the exam for a few months but I kept putting it off for various reasons. The exam was a lot easier than I thought and easier than the official Microsoft practice exams.
Study materials:
A Cloud Guru AZ-900 fundamentals course with practice exams
I am pretty proud of this one. Databases are an area of IT where I haven’t spent a lot of time, and what time I have spent has been with SQL or MySQL with old school relational databases. NoSQL was kinda breaking my brain for a while.
Study Materials:
Microsoft Virtual Training Day, got the voucher for the free exam. I know several people on here said that was enough for them to pass the test, but that most certainly was not enough for me.
Exampro.co DP-900 course and practice test. They include virtual flashcards which I really liked.
Whizlabs.com practice tests. I also used the course to fill in gaps in my testing.
Passed AI-900! Tips & Resources Included!!
Achievement Celebration
Huge thanks to this subreddit for helping me kick start my Azure journey. I have over 2 decades of experience in IT and this is my 3rd Azure certification as I already have AZ-900 and DP-900.
Here’s the order in which I passed my AWS and Azure certifications:
SAA>DVA>SOA>DOP>SAP>CLF|AZ-900>DP-900>AI-900
I have no plans to take this certification now but had to as the free voucher is expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can.
Here’s my study plan for AZ-900 and DP-900 exams:
finish a popular video course aimed at the cert
watch John Savill’s study/exam cram
take multiple practice exams scoring in 90s
This is what I used for AI-900:
Alan Rodrigues’ video course (includes 2 practice exams) 👌
John Savill’s study cram 💪
practice exams by Scott Duffy and in 28Minutes Official 👍
knowledge checks in AI modules from MS learn docs 🙌
I also found the below notes to be extremely useful as a refresher. It can be played multiple times throughout your preparation as the exam cram part is just around 20 minutes.
Just be clear on the topics explained by the above video and you’ll pass AI-900. I advise you to watch this video at the start, middle and end of your preparation. All the best in your exam
Just passed AZ-104
Achievement Celebration
I recommend studying networking, as almost all of the questions are related to this topic. Also, AAD is a big one. Lots of load balancers, VNETs, NSGs.
Received very little of this:
Containers
Storage
Monitoring
I passed with a 710 but a pass is a pass haha.
Used Tutorials Dojo, but the closest questions I found were in the Udemy practice exams.
Regards,
Passed GCP Professional Cloud Architect
First of all, I would like to start with the fact that I already have around 1 year of experience with GCP in depth, where I was working on GKE, IAM, storage and so on. I also obtained GCP Associate Cloud Engineer certification back in June as well, which helps with the preparation.
I started with Dan Sullivan’s Udemy course for Professional Cloud Architect and did some refresher on the topics I was not familiar with such as BigTable, BigQuery, DataFlow and all that. His videos on the case studies helps a lot to understand what each case study scenario requires for designing the best cost-effective architecture.
In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.
As for practice exam, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak at and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear for the exam.
I had used TutorialsDojo (Jon Bonso) to prepare for Associate Cloud Engineer before, and I can attest that Whizlabs is not as good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.
One thing to note is that, there wasn’t even a single question that was similar to the ones from Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non case study questions. Many questions focused on App Engine, Data analytics and networking. There were some Kubernetes questions based on Anthos, and cluster networking. I got a tough question regarding storage as well.
I initially thought I would fail, but I pushed on and started tackling the multiple-choices based on process of elimination using the keywords in the questions. 50 questions in 2 hours is a tough one, especially due to the lengthy questions and multiple choices. I do not know how this compares to AWS Solutions Architect Professional exam in toughness. But some people do say GCP professional is tougher than AWS.
All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.
Passed GCP: Cloud Digital Leader
Hi everyone,
First, thanks for all the posts people share. It helps me prep for my own exam. I passed the GCP: Cloud Digital Leader exam today and wanted to share a few things about my experience.
Preparation
I have access to ACloudGuru (AGU) and Udemy through work. I started one of the Udemy courses first, but it was clear the course was going beyond the scope of the Cloud Digital Leader certification. I switched over to AGU and enjoyed the content a lot more. The videos were short and the instructor hit all the topics on the Google exam requirements sheet.
AGU also has three 50-question practice tests. The practice tests are harder than the actual exam (and the practice tests aren’t that hard).
I don’t know if someone could pass the test if they just watched the videos on Google Cloud’s certification site, especially if you had no experience with GCP.
Overall, I would say I spent 20 hrs preparing for the exam. I have my CISSP and I’m working on my CCSP. After taking the test, I realized I way over prepared.
Exam Center
It was my first time at this testing center and I wasn’t happy with the experience. A few of the issues I had are:
– My personal items (phone, keys) were placed in an unlocked filing cabinet
– My desk are was dirty. There were eraser shreds (or something similar) and I had to move the keyboard and mouse and brush all the debris out of my work space
– The laminated sheet they gave me looked like someone had spilled Kool-Aid on it
– They only offered earplugs, instead of noise cancelling headphones
Exam
My recommendation for the exam is to know the Digital Transformation piece as well as you know all the GCP services and what they do.
I wish you all luck on your future exams. Onto GCP: Associate Cloud Engineer.
Passed the Google Cloud: Associate Cloud Engineer
Hey all, I was able to pass the Google Cloud: Associate Cloud Engineer exam in 27 days.
I studied about 3-5 hours every single day.
I created this note to share the resources I used to pass the exam.
Happy studying!
GCP ACE Exam Aced
Hi folks,
I am glad to share that I cleared my GCP ACE exam today, and I would like to share my preparation with you:
1) I completed these courses from Coursera:
1.1 Google Cloud Platform Fundamentals – Core Infrastructure
1.2 Essential Cloud Infrastructure: Foundation
1.3 Essential Cloud Infrastructure: Core Services
1.4 Elastic Google Cloud Infrastructure: Scaling and Automation
After these courses, I did a couple of Qwiklabs quests, listed in order:
2 Getting Started: Create and Manage Cloud Resources (Qwiklabs Quest)
2.1 A Tour of Qwiklabs and Google Cloud
2.2 Creating a Virtual Machine
2.2 Compute Engine: Qwik Start – Windows
2.3 Getting Started with Cloud Shell and gcloud
2.4 Kubernetes Engine: Qwik Start
2.5 Set Up Network and HTTP Load Balancers
2.6 Create and Manage Cloud Resources: Challenge Lab
3 Set up and Configure a Cloud Environment in Google Cloud (Qwiklabs Quest)
3.1 Cloud IAM: Qwik Start
3.2 Introduction to SQL for BigQuery and Cloud SQL
3.3 Multiple VPC Networks
3.4 Cloud Monitoring: Qwik Start
3.5 Deployment Manager – Full Production [ACE]
3.6 Managing Deployments Using Kubernetes Engine
3.7 Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab
4 Kubernetes in Google Cloud (Qwiklabs Quest)
4.1 Introduction to Docker
4.2 Kubernetes Engine: Qwik Start
4.3 Orchestrating the Cloud with Kubernetes
4.4 Managing Deployments Using Kubernetes Engine
4.5 Continuous Delivery with Jenkins in Kubernetes Engine
Post these courses I did the following for mock exam preparation:
Cloud computing has revolutionized the way companies develop applications. Most of the modern applications are now cloud native. Undoubtedly, the cloud offers immense benefits like reduced infrastructure maintenance, increased availability, cost reduction, and many others.
However, choosing a cloud vendor is a challenge in itself. If we look at the cloud computing landscape, the three main providers that come to mind are AWS, Azure, and Google Cloud. Today, we will compare the top three cloud giants and see how they differ. We will compare their services, specialties, and pros and cons. After reading this article, you will be able to decide which cloud vendor is best suited to your needs and why.
History and establishment
AWS
AWS is the oldest player in the market, operating since 2006. Being the first in the cloud industry, it has gained a particular advantage over its competitors. It offers more than 200 services to its users. Some of its notable clients include:
Netflix
Expedia
Airbnb
Coursera
FDA
Coca Cola
Azure
Azure by Microsoft started in 2010. Although it started four years later than AWS, it is catching up quite fast. Azure is Microsoft’s public cloud platform, which is why many companies prefer to use Azure for their Microsoft-based applications. It also offers more than 200 services and products. Some of its prominent clients include:
HP
Asus
Mitsubishi
3M
Starbucks
CDC (Centers for Disease Control and Prevention), USA
National Health Service (NHS), UK
Google
Google Cloud also started in 2010. Its arsenal of cloud services is relatively small compared to AWS or Azure, with around 100 services on offer. However, its services are robust, and many companies embrace Google Cloud for its specialty services. Some of its noteworthy clients include:
PayPal
UPS
Toyota
Twitter
Spotify
Unilever
Market share & growth rate
If you look at market share and growth figures, you will notice that AWS has been leading for more than four years. Azure is also expanding fast, but it still has a long way to go to catch up with AWS.
However, in terms of reported cloud revenue, Microsoft is ahead of AWS. In Q1 2022, AWS revenue was $18.44 billion, Microsoft Cloud revenue was $23.4 billion, and Google Cloud earned $5.8 billion.
Availability Zones (Data Centers)
When comparing cloud vendors, it is essential to see how many regions and availability zones are offered. Here is a quick comparison between all three cloud vendors in terms of regions and data centers:
AWS
AWS operates in 25 regions and 81 availability zones. It also offers 218+ edge locations and 12 regional edge caches. You can use the edge locations and edge caches with services such as Amazon CloudFront and AWS Global Accelerator.
Azure
Azure has 66 regions worldwide and a minimum of three availability zones in each region. It also offers more than 116 edge locations.
Google
Google has a presence in 27 regions and 82 availability zones. It also offers 146 edge locations.
All three cloud giants are continuously expanding. Both AWS and Azure operate data centers in China to cater specifically to Chinese customers. At the same time, Azure appears to have the broadest regional coverage of the three.
Comparison of common cloud services
Let’s look at the standard cloud services offered by these vendors.
Compute
Amazon’s primary compute offering is EC2 instances, which are very easy to operate. Amazon also provides a low-cost option called “Amazon Lightsail”, which is a good fit for those who are new to the cloud and have a limited budget. AWS charges for EC2 instances only while you are using them. Azure’s compute offering is also based on virtual machines, and Google likewise offers virtual machines running in Google’s data centers. Here’s a brief comparison of the compute offerings of all three vendors:
Storage
All three vendors offer various forms of storage, including object-based storage, cold storage, file-based storage, and block-based storage. Here’s a brief comparison of all three:
Database
All three vendors offer managed database services, including NoSQL as well as document-based databases. AWS also provides a proprietary relational engine named “Aurora”, a highly scalable and fast database offering compatible with both MySQL and PostgreSQL. Here’s a brief comparison of all three vendors:
Comparison of Specialized services
All three major cloud providers are competing with each other in the latest technologies. Some notable areas of competition include ML/AI, robotics, DevOps, IoT, VR/Gaming, etc. Here are some of the key specialties of all three vendors.
AWS
Being first in the cloud market has many benefits, and Amazon has certainly taken advantage of that. Amazon has advanced specifically in AI- and machine-learning-related tools. AWS DeepLens is an AI-powered camera that you can use to develop and deploy machine learning models; it helps with OCR and image recognition. Similarly, Amazon has launched an open-source library called “Gluon” that helps with deep learning and neural networks; you can use it to learn how neural networks work even without a deep technical background. Another service that Amazon offers is SageMaker, which you can use to train and deploy your machine learning models. Amazon also offers Lex, the conversational interface that powers Alexa, along with Lambda and the Greengrass IoT messaging service.
Another unique (and recent) offering from AWS is AWS IoT TwinMaker. This service can create digital twins of real-world systems like factories, buildings, and production lines.
AWS even provides a quantum computing service called Amazon Braket.
Azure
Azure excels where you are already using some Microsoft products, especially on-premises Microsoft products. Organizations already using Microsoft products prefer to use Azure instead of other cloud vendors because Azure offers a better and more robust integration with Microsoft products.
Azure has excellent ML/AI and cognitive services. Some notable offerings include the Bing Web Search API, Face API, Computer Vision API, and Text Analytics API.
Google
Google is currently the leader among cloud providers in AI, largely because of its open-source TensorFlow library, the most popular framework for developing machine learning applications. Vertex AI and BigQuery Omni are also useful recent offerings. Similarly, Google offers rich services for NLP, translation, speech, and more.
Pros and Cons
Let’s summarize the pros and cons for all three cloud vendors:
AWS
Pros:
An extensive list of services
Huge market share
Support for large businesses
Global reach
Cons:
Pricing model. Many companies struggle to understand the cost structure. Although AWS has improved the UX of its cost-related reporting in the AWS console, many companies still hesitate to use AWS because of a perceived lack of cost transparency
Azure
Pros:
Excellent integration with Microsoft tools and software
Broader feature set
Support for open source
Cons:
Geared towards enterprise customers
Google
Pros:
Strong integration with open source tools
Flexible contracts
Good DevOps services
The most cost-efficient
The preferred choice for startups
Good ML/AI-based services
Cons:
A limited number of services as compared to AWS and Azure
As mentioned earlier, AWS has the largest market share of the cloud vendors. That means more companies are using AWS, and there are more vacancies in the market for AWS-certified professionals. Here are the main reasons why you might choose to learn AWS:
Azure is the second largest cloud service provider. It is ideal for companies that are already using Microsoft products. Here are the top reasons why you would choose to learn Azure:
Ideal for experienced users of Microsoft services
Azure certifications rank among the top paying IT certifications
If you’re applying for a company that primarily uses Microsoft Services
Google
Although Google is considered an underdog in the cloud market, it is slowly catching up. Here’s why you may choose to learn GCP.
While there are fewer job postings, there is also less competition in the market
GCP certifications rank among the top paying IT certifications
Most valuable IT Certifications
Keen to learn about the top-paying cloud certifications and jobs? If you look at the annual salary figures below, you can see the average salaries associated with different cloud vendors and IT companies; no wonder AWS is on top. A GCP Cloud Architect is also one of the top five, and the Azure architect comes in at #9.
Which cloud certification to choose depends mainly on your career goals and what type of organization you want to work for. No cloud certification path is better than the other. What matters most is getting started and making progress towards your career goals. Even if you decide at a later point in time to switch to a different cloud provider, you’ll still benefit from what you previously learned.
Over time, you may decide to get certified in all three – so you can provide solutions that vary from one cloud service provider to the next.
Don’t get stuck in analysis-paralysis! If in doubt, simply get started with AWS certifications that are the most sought-after in the market – especially if you are at the very beginning of your cloud journey. The good news is that you can become an AWS expert when enrolling in our value-packed training.
Further Reading
You may also be interested in the following articles:
Just wanted to put my experience out there for some people looking to take this exam, I believe the resources given here by everyone else are amazing and will work. I used LinkedIn Learning's AZ-900 prep which was about 7 hours of content, took 2 practice tests on the Learn site and received an 82% and 94%, watched John Savill's cram video, and took some good notes along the way. I completed MS-900 2 weeks before this, it was applicable to the AZ-900 for cloud concepts (IaaS, PaaS, SaaS) which I believe are the easiest to learn and also account for about 20% of the AZ-900. The exam was very straight forward, very few curveballs, and around 34 questions. I had plenty of time to double check my work. Good luck to everyone who takes it! I think I will begin working on a home lab after this and figure out if I go down the SC route or continue with the AZ-104. submitted by /u/RockPaperSavior [link] [comments]
Hello Everyone, I failed my AZ-305 exam about a year and a half ago. Since then, I purchased MeasureUp but didn't take the exam; I still have the PDF file. Despite studying and watching videos, I became discouraged and demotivated. Now, I want to finish this and sit for the exam. Currently, I don't feel capable of re-studying everything again. I have access to Tutorial Dojo, but I feel it's not enough. Do you think purchasing the new version of MeasureUp is a good idea, or will it not make much difference? What are your thoughts? submitted by /u/HardLearner01 [link] [comments]
Five questions into the exam, Pearson VUE claimed they couldn't see me on the webcam. Resolving that was a pain but it got sorted swiftly, which was nice. Not really sure I can advise anything different from what has already been said. I went through MS Learn once and took notes. Then I used Tutorials Dojo's practice exams to identify areas where I was weak, amended my notes to cover those areas in more detail, and went from there. Perhaps my one piece of advice is to just book the exam. I booked the exam on Tuesday for today, although that isn't indicative of how much time I spent studying. Otherwise I feel I could have put it off for another week, but having a fixed deadline meant I had something to work towards. submitted by /u/overcookedchicken
Hello everyone, I'm a young self-taught dev who started with C++ and moved on to some web dev, doing it as a hobby for about a year. An opportunity came up in my company about a year ago for a Junior Full Stack Engineer position; I applied and got the job. I have been coding for a little over a year at this point, have general networking fundamentals from Net+, and have experience with .NET and general programming knowledge. I have no degree, so there were some contingencies on my offer, two of those being to get the AZ-900 (no problem) and the AZ-204. After studying for the AZ-900, I figured the 204 was going to be similar and something I could do over a weekend... boy was I wrong. Although my company uses some Azure solutions and I work with C#/.NET daily, I have never implemented anything with Azure and only have surface-level knowledge from the AZ-900. I don't plan to be a Cloud Architect or anything similar, but my company insists on me getting the AZ-204, even though the senior devs on my team and my tech lead only have to get the AZ-900. How cooked am I? Any advice on studying? I was considering rebuilding one of my web apps using Azure services to learn. The good news is I have until February to complete it, but my boss wants to see significant progress by November. submitted by /u/ehm--
Background: I've been interacting with Azure (Compute, Storage) & Entra (Identity) on a daily basis for 4 years. I had previously studied for this exam back in 2023, but realised I should take the AZ-900 (lol) then come back; in 2023 I passed the AZ-900. I had a chip on my shoulder booking this test and gave myself a little less than a month to prep and sit the exam. My prep work included watching Udemy courses (Scott Duffy), taking the official Microsoft practice exams, and utilising work's test environment. I thought I had this in the bag, even though I failed the practice test more than I passed :D... In comes a premium practice test my wife purchased for me... the reality of this exam HIT, with haymakers. This practice exam was serious. It made me wish I had taken this exam a LOT more seriously and given myself a lot more time to be confident. Took the exam this AM (GMT) and passed (735). If you're reading this wondering how this exam is, or feeling nervous or anxious, or just thinking about embarking on this path, my advice to you is: begin your studying yesterday 😀; give yourself time to soak all this in; try to get some practice in. If you run your own homelab, you're on the right track, as you can shift the logic over; you just have to know Azure mechanisms/syntax. Give yourself a limit of one reschedule, and don't lose momentum. Good luck, whoever you are. submitted by /u/og_osbrain
Good day, I've already started preparing for the AZ-900, but I see that the well-known sources, including books and YouTube channels like Adam and Savill, are from 2021. I noticed the content is similar but not exactly. Have any topics been added or removed since 2021? submitted by /u/lelouch-2022 [link] [comments]
I'm at a crossroads at the moment and I need some sincere advice. I'm in a fortunate situation where I can devote a good portion of 7 months to learning a new field (up to 16 hours a day working toward this goal) with my current savings, targeting in-demand tech skills. I've always been good with software/hardware and general CPU knowledge, much more than the average person. I'm also very good at navigating LLMs and have decent prompt engineering skills from playing around with AI models. I'm leaning towards GCP (or the other two in the industry), ML & data science, but I'm open to suggestions. I'm about to become a father in the next 7 months. It caught me and my wife by surprise, but generally that's how these things happen. I need to know the sincere requirements for breaking into this industry, more than just passing the GCP exam. • If I decide to go through the Coursera cloud courses necessary to take the exam, is it worth it? (That really is the main one; I just want to know it's not a waste of my time.) • If it is worth taking and passing the exam, what other technical knowledge would be helpful in landing a good-paying cloud engineering associate position? Are there other areas of knowledge that would make me stand out? • I've been told that it's also a smart idea to take the Google IT Professional course on Coursera. • If there is another in-demand field that I don't know about, I'm all ears. I could use all of your professional advice. Thank you ahead of time for your input. submitted by /u/skylimit36
I have done AWS Solutions Architect Associate but I need to learn Azure for a nice internship opportunity I might really get. So I have decided to do the AZ-104 certification. How easy will I find doing it? I think a lot of concepts overlap between AWS and Azure. To someone who was in the same situation, how long did it take for you? submitted by /u/Appropriate_Try_7040 [link] [comments]
Apart from Microsoft Learn, what did you use to pass the SC-200? I have read through all the modules, but i still score low on the practice assessment. submitted by /u/GordonK24 [link] [comments]
Finally, it has happened. After all the mock papers I took from crackcerts and online, I had been failing them and could only hope that I wouldn't fail my AZ-104. However, this group has taught me to always be positive and hope for the best. Cheers to all, and yes, I scored 708 on the actual exam. submitted by /u/Adventurous_Emu8820
I used John Savill's 3-hour cram lecture as well as taking 1 day to study the MS Learn resources. I work as a SWE, mainly touching Java and AWS. I had an encounter with an AI project that required me to use Azure for a few months, sparking my interest in preparing for other exams. I will be taking the AWS Solutions Architect in December 2024. Wondering if I should explore another 900 cert or go for the 104? submitted by /u/Disastrous_Motor9856
I was certified for old SQL Server but am doing the cloud certificates now to stay current. I did the virtual training day to get a 50% off voucher and then followed the MS Learn path online. I didn’t have to watch any videos. I tried the practice exam on MS Learn and also the one on MeasureUp. They’re very similar to each other and act as a good benchmark for readiness. I think this training was a lot weaker than AZ-900. It doesn’t show you much SQL syntax but a few practice questions are on that. That’s not a problem for me but may be for newbies. Also Microsoft doesn’t do itself any favours having 5 different flavours of pipelines and ETL cloud tools, and it doesn’t really ever explain the difference much between them. I mean ADF and then Synapse which has some ADF and then Fabric which maybe also has ADF but then also HDInsight and something else. It could really have benefited with a few tables and actual comparisons. Also you need to create a school or work account to log into Fabric to even see it, and once you log into that your Azure Portal on your personal account won’t work because the credentials conflict. Hence multiple browsers and lots of logging in and out. I dread to think about how you can pay for Fabric later once the trial runs out when you’re not an actual company. Then for some of the practice samples they’re using a 2TB source file which is a bit much and may put you past free monthly limits if you weren’t in trial mode. 925/1000. I’ll aim for AI-900 in a week’s time. submitted by /u/codykonior [link] [comments]
I was doing some practice exams in TD and I found something that confused me (screenshot of the practice question omitted). In this case TD2 is already GRS, so shouldn't we only "work" on TD1? The answer could be to "Upgrade TD1 to GPv2", because we need GPv2 to upgrade TD1 from LRS to ZRS. Thanks. submitted by /u/TB-124
Hi everyone. I'm working as a data engineer currently, focused almost only on batch processes and ETLs, providing analytical databases for regulatory and financial needs. Recently we moved our processes to Databricks, so I went to get some certifications from them (got the data analyst, data engineer professional, and Spark developer). I already have the Azure DP-900 cert, and now I'm aiming for the next Azure one. I have in sight the AZ-900, AZ-203 and AZ-204. One of the areas I need to evolve in my career is stuff outside the "batch ETL world", as I'm very weak on software design and creation, and just know the basics of APIs and Kubernetes, for example. Given the scenario, for anyone who has some experience in this... which Azure certification would fit me more, and why? (Not only as a "hey, I have a certificate!", but also as new knowledge.) I did a course on the 203 (through my company), and it was basically 100% Synapse (we don't use Synapse here, somehow...). Then I tried the practice exam and it was about 50% Synapse, so I felt lost there. Thanks in advance! submitted by /u/Rudy_Roughnight
I passed my SC-200 today with a score of 727. I got 55 questions, and they were way different from what I had prepared for. I honestly thought I was going to fail. I didn’t have any issues with freezing during the exam or while using MS Learn, everything went smoothly. But when I tried to enlarge the questions to full size, they all disappeared, leaving me with a blank white screen. I had to contact the proctor and relaunch the exam. Time went so quickly I couldn’t review most of the questions. Anyway, I’m happy I passed on my first try, even though I don’t have a lot of experience. 😁 Study materials I used: - MS Learn and practice tests - Whizlabs - Microsoft Labs Also, I don’t think Whizlabs was much help for the exam, so I wouldn’t really recommend it. Thanks to all of you. submitted by /u/lazyguy_69 [link] [comments]
I know, I know, it's been asked plenty of times but googling around I haven't really seen anything other than MSLearn that people can vouch for. With respect to MSLearn it is mind-numbingly boring and unfortunately we don't have someone like John Savill to cover the content, or do we? For KQL / Sentinel, I'm going through 'Kusto Query Language (KQL) from Scratch' on Pluralsight and then hopefully going through the rest of Microsoft Sentinel Ninja, as well as supplementing it with KC7. If anyone has recommendations for SC-200 as a whole, I'd kindly appreciate it submitted by /u/12wingsandchips [link] [comments]
I’m currently working through Azure Data Fundamentals (DP-900) and the exercises require the creation of resources (Azure Synapse Analytics in this case). When creating, I’m provided an estimated cost as seen in the photo. I really want to do these hands-on exercises but the risk of accruing charges is off putting. Is this normal after the trial period and switching to the Pay-as-you-go model? submitted by /u/Worldeyeknow [link] [comments]
Top-paying Cloud certifications:
Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Azure/Microsoft Cloud Solution Architect — $141,748/year
Google Cloud Associate Engineer — $145,769/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year
Djamgatech – Multilingual and Platform Independent Cloud Certification and Education App for AWS Azure Google Cloud
Djamgatech is the ultimate Cloud Education Certification App. It is an EduFlix App for AWS, Azure, Google Cloud Certification Prep, School Subjects, Python, Math, SAT, etc. [Android, iOS]
Technology is changing and moving towards the cloud. The cloud will power most businesses in the coming years, yet it is not taught in schools. How do we ensure that our kids, our youth, and we ourselves are best prepared for this challenge?
Building mobile educational apps that work offline and on any device can help greatly in that sense.
The ability to tap a button, learn the cloud fundamentals, and take quizzes is a great opportunity to help our children and youth boost their job prospects and be more productive at work.
The App covers the following certifications : AWS Cloud Practitioner Exam Prep CCP CLF-C01, Azure Fundamentals AZ 900 Exam Prep, AWS Certified Solution Architect Associate SAA-C02 Exam Prep, AWS Certified Developer Associate DVA-C01 Exam Prep, Azure Administrator AZ 104 Exam Prep, Google Associate Cloud Engineer Exam Prep, Data Analytics for AWS DAS-C01, Machine Learning for AWS and Google, AWS Certified Security – Specialty (SCS-C01), AWS Certified Machine Learning – Specialty (MLS-C01), Google Cloud Professional Machine Learning Engineer and more… [Android, iOS]
Features:
– Practice exams – 1000+ Q&A updated frequently
– 3+ practice exams per certification
– Scorecard / scoreboard to track your progress
– Quizzes with score tracking, progress bar, countdown timer
– Scoreboard visible only after completing the quiz
– FAQs for the most popular cloud services
– Cheat sheets
– Flashcards
– Works offline
Note and disclaimer: We are not affiliated with AWS, Azure, Microsoft or Google. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam but it is not guaranteed. We are not responsible for any exam you did not pass.
Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps
GCP, Google Cloud Platform, has been a game changer in the tech industry. It allows organizations to build and run applications on Google’s infrastructure. The GCP platform is trusted by many companies because it is reliable, secure and scalable. In order to become a GCP certified professional, one must pass the GCP Professional Architect exam. The GCP Professional Architect exam is not easy, but with the right practice questions and answers dumps, you can pass the GCP PA exam with flying colors.
Google Certified Cloud Professional Architect is the top high paying certification in the world: Google Certified Professional Cloud Architect Average Salary – $175,761
The Google Certified Cloud Professional Architect Exam assesses your ability to:
Design and plan a cloud solution architecture
Manage and provision the cloud solution infrastructure
Design for security and compliance
Analyze and optimize technical and business processes
Manage implementations of cloud architecture
Ensure solution and operations reliability
Designing and planning a cloud solution architecture
This domain tests your ability to design a solution infrastructure that meets business and technical requirements and considers network, storage, and compute resources. It also tests your ability to create a migration plan and to envision future solution improvements.
Managing and provisioning a solution Infrastructure: 20%
This domain will test your ability to configure network topologies, individual storage systems and design solutions using Google Cloud networking, storage and compute services.
Designing for security and compliance: 12%
This domain assesses your ability to design for security and compliance by considering IAM policies, separation of duties, encryption of data and that you can design your solutions while considering any compliance requirements such as those for healthcare and financial information.
Managing implementation: 10%
This domain tests your ability to advise the development/operations team(s) to ensure successful deployment of your solution. It also tests your ability to interact with Google Cloud using the GCP SDK (gcloud, gsutil, and bq).
Ensuring solution and operations reliability
This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, establishing quality control measures, and creating release management processes.
Analyzing and optimizing technical and business processes: 16%
This domain tests how you analyze and define technical and business processes, and how you develop procedures to ensure the resilience of your solutions in production.
Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps that will help you ace the GCP Professional Architect exam:
You will need to have the three case studies referred to in the exam open in separate tabs in order to complete the exam: Company A, Company B, Company C.
Question 1: Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?
A. Have the vehicles in the field stream the data directly into BigQuery.
B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.
C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.
D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.
ANSWER1:
D
Notes/References1:
D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.
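For illustration only, here is a minimal Python sketch of how files landed in Cloud Storage can be queried from BigQuery right away via a federated (external-table) query; the bucket path, project, and dataset names are made up for the example, not taken from the case study:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Describe the raw CSV dumps sitting in Cloud Storage as an external table.
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://acme-raw-telemetry/raw/*.csv"]  # hypothetical bucket
external_config.autodetect = True  # let BigQuery infer the schema for a first look

# Query the files in place; no load job is needed to start exploring the data.
job_config = bigquery.QueryJobConfig(table_definitions={"raw_events": external_config})
query = "SELECT COUNT(*) AS event_count FROM raw_events"
for row in client.query(query, job_config=job_config).result():
    print(f"Raw events stored so far: {row.event_count}")
```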
Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?
A. Execute queries against data stored in a Cloud SQL.
B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.
C. Execute queries against data stored on daily partitioned BigQuery tables.
D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.
ANSWER2:
B
Notes/References2:
B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.
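To make the row-key idea concrete, here is a rough Python sketch using the google-cloud-bigtable client; the instance, table, column family, and vehicle_id#timestamp key format are illustrative assumptions, not part of the exam question:

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("telemetry-instance").table("vehicle-events")

# Write one event. Keying rows as vehicle_id#timestamp keeps one vehicle's
# readings adjacent, so a 24-hour graph becomes a single contiguous range scan.
row = table.direct_row(b"vehicle-4711#2024-01-01T12:00:00Z")
row.set_cell("metrics", "engine_temp_c", b"87")
row.commit()

# Read the last day of events for that vehicle with one row-range scan.
rows = table.read_rows(
    start_key=b"vehicle-4711#2024-01-01T00:00:00Z",
    end_key=b"vehicle-4711#2024-01-02T00:00:00Z",
)
for r in rows:
    print(r.row_key.decode())
```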
Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?
A. Use multiple connectivity subsystems for redundancy.
B. Require IPv6 for connectivity to ensure a secure address space.
C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.
D. Use a functional programming language to isolate code execution cycles.
E. Treat every microservice call between modules on the vehicle as untrusted.
F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.
ANSWER3:
E and F
Notes/References3:
E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.
F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.
Question 4: For this question, refer to the Company A case study.
Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
A. OpEx/CapEx allocation, LAN change management, capacity planning
B. Capacity planning, TCO calculations, OpEx/CapEx allocation
C. Capacity planning, utilization measurement, data center expansion
D. Data center expansion, TCO calculations, utilization measurement
ANSWER4:
B
Notes/References4:
B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.
Question 5: For this question, refer to the Company A case study.
You analyzed Company A’s business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers’ wait time for parts. You decided to focus on reduction of the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?
A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.
ANSWER5:
C
Notes/References5:
C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.
Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?
A. RPM/DEB
B. Containers
C. Chef/Puppet
D. Virtual machines
ANSWER6:
B
Notes/References6:
B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.
Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?
A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.
B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device specific table.
C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.
D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.
ANSWER7:
C
Notes/References7:
C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
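As a rough sketch of the publishing side (the project, topic, and room names are placeholders, not from the case study), each device can publish its latest reading to one shared topic whenever it has connectivity:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "room-occupancy")  # hypothetical names

def report_status(room_id: str, occupied: bool) -> None:
    """Publish one sensor reading; message attributes carry the device metadata."""
    data = b"occupied" if occupied else b"empty"
    future = publisher.publish(topic_path, data, room_id=room_id)
    future.result(timeout=30)  # wait until the service acknowledges the message

report_status("hq-3-meeting-12", True)
```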
Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?
A. Load logs into BigQuery.
B. Load logs into Cloud SQL.
C. Import logs into Stackdriver.
D. Insert logs into Cloud Bigtable.
E. Upload log files into Cloud Storage.
ANSWER8:
A and E
Notes/References8:
A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.
E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.
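As a small illustration of the two steps (the bucket, dataset, and table names are invented for the example), the archived log files can be loaded into BigQuery for analytics, while the Cloud Storage copies are rewritten to the Coldline class for cheap long-term retention:

```python
from google.cloud import bigquery, storage

# Step 1: load the archived logs from Cloud Storage into BigQuery for analysis.
bq = bigquery.Client()
load_job = bq.load_table_from_uri(
    "gs://acme-log-archive/logs/*.csv",       # hypothetical bucket
    "my-project.log_analytics.app_logs",      # hypothetical destination table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load job to finish

# Step 2: keep the disaster-recovery copies cheap by moving them to Coldline.
bucket = storage.Client().bucket("acme-log-archive")
for blob in bucket.list_blobs(prefix="logs/"):
    blob.update_storage_class("COLDLINE")
```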
Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?
A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
ANSWER9:
C
Notes/References9:
C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.
Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?
A. Add each tier to a different subnetwork.
B. Set up software-based firewalls on individual VMs.
C. Add tags to each tier and set up routes to allow the desired traffic flow.
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.
ANSWER10:
D
Notes/References10:
D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.
Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?
A. Use gsutil.
B. Use gcloud.
C. Use GCS REST API.
D. Use Storage Transfer Service.
ANSWER11:
A
Notes/References11:
A is correct because gsutil lets you write data directly from your on-premises machines to Cloud Storage, and its parallel (multi-threaded/multi-processing) transfer mode helps maximize transfer speed for a dataset of this size.
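For a rough sense of why this works, gsutil's parallel mode (gsutil -m cp -r) does the heavy lifting for you; the Python sketch below (bucket, directory, and file pattern are made up) shows the same idea of parallelising uploads by hand with the google-cloud-storage client:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

from google.cloud import storage

bucket = storage.Client().bucket("acme-migration-landing")  # hypothetical bucket

def upload_one(path: Path) -> None:
    """Upload a single file; many of these run at once to saturate the link."""
    bucket.blob(f"onprem/{path.name}").upload_from_filename(str(path))

files = list(Path("/mnt/onprem-export").glob("*.dat"))  # hypothetical source directory
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(upload_one, files))
```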
Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?
A. Encrypt the message client-side using block-based encryption with a shared key.
B. Tag messages client-side with the originating user identifier and the destination user.
C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.
ANSWER12:
D
Notes/References12:
D is correct because signing the message with the originating user’s private key (backed by PKI certificates) lets any recipient verify, using that user’s public key, that the message really came from that user, which prevents spoofing.
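To make the idea concrete outside of GCP specifics, here is a self-contained Python sketch (using the third-party cryptography library) of signing a message with a private key and verifying it with the matching public key; all names and the message are illustrative:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The sender holds the private key; anyone with the public key can verify origin.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"hi, it's really alice"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was tampered with,
# which is exactly the anti-spoofing property the question asks for.
public_key.verify(signature, message, pss, hashes.SHA256())
print("verified: the message was signed by the private-key holder")
```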
Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?
A. In the source code
B. In an environment variable
C. In a key management system
D. In a config file that has restricted access through ACLs
Question 14: For this question, refer to the Company B case study.
Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?
A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL
B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc
ANSWER14:
B
Notes/References14:
B is correct because:
– Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
– Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
– Cloud Pub/Sub can ingest the streaming data from the mobile users.
– BigQuery can query more than 10 TB of historical data.
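As a loose illustration of how those pieces fit together, here is a minimal Apache Beam (Python) streaming pipeline that reads game events from Pub/Sub and writes them to BigQuery; the subscription, table, and schema are placeholders, not Company B’s actual design:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # run as a streaming (Dataflow) job

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/game-events")
        | "Decode" >> beam.Map(lambda msg: {"raw_event": msg.decode("utf-8")})
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:game_analytics.events",
            schema="raw_event:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```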
Question 15: For this question, refer to the Company B case study.
Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
A. Create a scalable environment in GCP for simulating production load.
B. Use the existing infrastructure to test the GCP-based backend at scale.
C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.
D. Create a set of static environments in GCP to test different levels of load—for example, high, medium, and low.
ANSWER15:
A
Notes/References15:
A is correct because simulating production load in GCP can scale in an economical way.
Question 16: For this question, refer to the Company B case study.
Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:
Services are deployed redundantly across multiple regions in the US and Europe
Only frontend services are exposed on the public internet.
They can reserve a single frontend IP for their fleet of services.
Deployment artifacts are immutable
Which set of products should they use?
A. Cloud Storage, Cloud Dataflow, Compute Engine
B. Cloud Storage, App Engine, Cloud Load Balancing
C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing
D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager
ANSWER16:
C
Notes/References16:
C is correct because:
– Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
– Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL maps, the requests can be routed to only the services that Company B wants to expose.
– Container Registry is a single place for a team to manage Docker images for the services.
Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
A. Org viewer, Project owner
B. Org viewer, Project viewer
C. Org admin, Project browser
D. Project owner, Network admin
ANSWER17:
B
Notes/References17:
B is correct because:
– Org viewer grants the security team permissions to view the organization’s display name.
– Project viewer grants the security team permissions to see the resources within projects.
Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?
A. Use persistent disks to store the state. Start and stop the VM as needed.
B. Use the –auto-delete flag on all persistent disks before stopping the VM.
C. Apply VM CPU utilization label and include it in the BigQuery billing export.
D. Use BigQuery billing export and labels to relate cost to groups.
E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.
F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.
ANSWER18:
A and D
Notes/References18:
A is correct because persistent disks will not be deleted when an instance is stopped.
D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.
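As a rough example of the label-based grouping (the dataset and table names below are placeholders for whatever the billing export creates in your project), a query like this relates spend to teams once a "team" label has been applied to resources:

```python
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT l.value AS team, ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,  -- hypothetical export table
     UNNEST(labels) AS l
WHERE l.key = 'team'
GROUP BY team
ORDER BY total_cost DESC
"""
for row in client.query(query).result():
    print(row.team, row.total_cost)
```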
Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?
A. Configure a new load balancer for the new version of the API.
B. Reconfigure old clients to use a new endpoint for the new API.
C. Have the old API forward traffic to the new API based on the path.
D. Use separate backend services for each API path behind the load balancer.
ANSWER19:
D
Notes/References19:
D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.
Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?
A. Increase the virtual machine’s memory to 64 GB.
B. Create a new virtual machine running PostgreSQL.
C. Dynamically resize the SSD persistent disk to 500 GB.
D. Migrate their performance metrics warehouse to BigQuery.
ANSWER20:
C
Notes/References20:
C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Incrementing the persistent disk capacity will increment its throughput and IOPS, which in turn improve the performance of MySQL.
Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?
A. Use Google’s Cloud CDN.
B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.
C. Do nothing.
D. Use global BigTable storage.
E. Use a global Cloud Spanner instance.
F. Migrate the data to a new multi-regional GCS bucket.
G. Change the storage class to multi-regional.
ANSWER21:
A
Notes/References21:
Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it does only work for data that comes from within GCP and only if the objects are being accessed frequently enough.
Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?
A. Python, Flask, App Engine Standard
B. Ruby, Nginx, GKE
C. HTML, CSS, Cloud Storage
D. Node.js, Express, Cloud Functions
E. Rust, Rocket, App Engine Flex
F. Perl, CGI, GCE
ANSWER22:
A
Notes/References22:
The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too.
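For flavour, here is roughly all the code such an app needs; this is only a sketch under assumed names (in-memory storage, made-up routes), not a production design:

```python
# main.py -- minimal Flask app of the kind App Engine Standard runs with a
# one-line app.yaml (runtime: python311). Storage here is in-memory only;
# a real sign-up app would persist entries in Datastore/Firestore instead.
from flask import Flask, request

app = Flask(__name__)
signups = {}

@app.route("/signup", methods=["POST"])
def signup():
    name = request.form["name"]
    dish = request.form["dish"]
    signups[name] = dish
    return f"Thanks {name}, you're bringing {dish}!", 201

@app.route("/")
def index():
    return {"signups": signups}
```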
Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?
A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.
B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.
C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.
D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.
E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.
ANSWER23:
C
Notes/References23:
Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP-side of this in synch. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality.
Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?
A. Pause all Cloud Functions via the UI and unpause them when work starts back up.
B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.
C. Delete all Cloud Functions and recreate them when work starts back up.
D. Convert all Cloud Functions to run as App Engine Standard applications during the break.
E. None of these options is a good suggestion.
ANSWER24:
E
Notes/References24:
Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.
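For reference, a deployed HTTP-triggered Cloud Function is just a handler like the sketch below (the function name and logic are invented); when nobody calls it, no instances run and no compute charges accrue, which is why the right answer is to do nothing:

```python
def get_blueprint(request):
    """HTTP entry point; Cloud Functions passes in a Flask Request object."""
    model = request.args.get("model", "unknown")
    return f"Blueprint metadata for model {model}\n"
```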
Question 25: You need a place to store images before updating them by file-based render farm software running on a cluster of machines. Which of the following options will you choose?
A. Container Registry
B. Cloud Storage
C. Cloud Filestore
D. Persistent Disk
ANSWER25:
C
Notes/References25:
There are several different kinds of “images” that you might need to consider: normal picture files, Docker container images, VM or disk images, or something else. In this question, “images” refers to visual image files, which eliminates CI/CD products like Container Registry. The term “file-based” software means it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only handles multiple readers. Cloud Filestore, however, is made to provide shared, file-based storage for a cluster of machines as described in the question.
Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?
A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.
B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.
C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.
D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.
ANSWER26:
A
Notes/References26:
The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn’t also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality.
Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?
A. Cloud Storage
B. Logstash
C. Cloud Monitoring
D. Cloud Logging
E. BigQuery
F. BigTable
ANSWER27:
D
Notes/References27:
Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on.
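As a small, hedged example of why Cloud Logging is convenient as a managed service, the Python client can route the standard logging module straight into it (the log message is obviously made up):

```python
import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # attach a Cloud Logging handler to the root logger

# From here on, ordinary log calls from the app end up in Cloud Logging.
logging.info("order A-1001 completed")
```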
Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?
A. Compute Engine
B. Cloud Filestore
C. Cloud Storage
D. Persistent Disk
E. Container Registry
F. Cloud Source Repositories
G. Cloud Build
H. Nearline
ANSWER28:
C
Notes/References28:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t actually for storage, at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Nearline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served by AppEngine Standard, we would prefer to use the Standard storage class over the Nearline one–so there’s our answer.
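As a rough sketch of the intended setup (the bucket name, region, and file names are placeholders), you would create a Standard-class bucket and upload the image assets so App Engine Standard pages can reference them:
  # Create a Standard-class bucket for the web assets
  gsutil mb -c standard -l us-central1 gs://my-app-images
  # Upload an image and make it publicly readable
  gsutil cp logo.png gs://my-app-images/
  gsutil acl ch -u AllUsers:R gs://my-app-images/logo.png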
Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?
A. Use a global Cloud Spanner instance.
B. Change the storage class to multi-regional.
C. Use Google’s Cloud CDN.
D. Migrate the data to a new regional GCS bucket.
E. Do nothing.
F. Use global BigTable storage.
ANSWER29:
E
Notes/References29:
BigTable does not have any “global” mode, so that option is wrong. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. But migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests. Because the access per object is so low, Cloud CDN won’t really help. This then brings us back to the question. Now, it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency–and we are already using what would be the best fit for this situation: a multi-regional GCS bucket.
Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?
A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.
B. Use Google’s Cloud CDN.
C. Use global BigTable storage.
D. Do nothing.
E. Migrate the data to a new multi-regional GCS bucket.
F. Use a global Cloud Spanner instance.
ANSWER30:
E
Notes/References30:
Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit, and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. So even if you wanted to use Cloud CDN, you would have to migrate the data into a GCS bucket first; that makes migrating to a new multi-regional GCS bucket the better option.
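As a sketch of that migration (the bucket names are placeholders, and this assumes gsutil has AWS credentials configured in its .boto file), a one-time bulk copy could look like this:
  # Create the destination multi-regional bucket in the US multi-region
  gsutil mb -l us gs://my-migrated-data
  # Copy everything over from S3; -m parallelizes the transfer
  gsutil -m rsync -r s3://my-source-bucket gs://my-migrated-data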
Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?
A. Six
B. One
C. Three
D. Four
E. Two
F. Nine
ANSWER31:
A
Notes/References31:
A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system.
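A sketch of what that looks like with gcloud (the network name, regions, and CIDR ranges are placeholders); the same pair of commands would be repeated for the other two regions, giving six subnets in total:
  # One subnet per tier per region, shown here for us-east1
  gcloud compute networks subnets create frontend-us-east1 --network=prod-vpc --region=us-east1 --range=10.0.1.0/24
  gcloud compute networks subnets create backend-us-east1 --network=prod-vpc --region=us-east1 --range=10.0.2.0/24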
Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?
A. Container Registry
B. Cloud Storage
C. Persistent Disk
D. Nearline
E. Cloud Source Repositories
F. Cloud Build
G. Cloud Filestore
H. Compute Engine
ANSWER32:
F
Notes/References32:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images. Although those images would likely be stored in Container Registry after being built, this question asks where that building happens, and the answer is Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images, though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse.
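A minimal sketch of such a build (the project and image names are placeholders): assuming there is a Dockerfile in the current directory, Cloud Build can build the image and push it to Container Registry in one step:
  # Build the container image and push it to Container Registry
  gcloud builds submit --tag gcr.io/my-project/my-app:v1 .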
Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?
A. Two
B. One
C. Three
D. Nine
E. Four
F. Six
ANSWER33:
D
Notes/References33:
A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system.
Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?
A. Cloud Filestore
B. Coldline
C. Nearline
D. Persistent Disk
E. Cloud Source Repositories
F. Cloud Storage
G. Container Registry
ANSWER34:
B
Notes/References34:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. But even still, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than 100% chance of being audited), the right storage class is Coldline.
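A sketch of that setup from the command line, with hypothetical bucket and folder names:
  # Create a Coldline bucket for the audit evidence
  gsutil mb -c coldline -l us gs://acme-tax-audit-evidence
  # Archive the image files
  gsutil -m cp -r ./receipts gs://acme-tax-audit-evidence/2023/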
Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?
A. Container Registry
B. Cloud Filestore
C. Cloud Source Repositories
D. Persistent Disk
E. Cloud Storage
F. Cloud Build
G. Nearline
ANSWER35:
A
Notes/References35:
There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would likely have been stored in Container Registry.
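A sketch of getting a locally built image into Container Registry ahead of an App Engine Flex deployment (the project and image names are placeholders):
  # Let Docker authenticate to Container Registry with your gcloud credentials
  gcloud auth configure-docker
  # Tag the locally built image and push it
  docker tag my-app gcr.io/my-project/my-app:v1
  docker push gcr.io/my-project/my-app:v1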
Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?
A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.
B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.
C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.
D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.
E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.
ANSWER36:
C
Notes/References36:
You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor’s own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you’ll need to grant securityAdmin, not networkViewer.
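A sketch of that setup (the project ID and service account name are placeholders):
  # Create a dedicated service account for the SaaS security tool
  gcloud iam service-accounts create saas-sec-app --display-name="SaaS security app"
  # Grant it securityAdmin on the production project so it can update allowed-traffic rules
  gcloud projects add-iam-policy-binding prod-project --member="serviceAccount:saas-sec-app@prod-project.iam.gserviceaccount.com" --role="roles/compute.securityAdmin"
You would then supply credentials for that service account to the SaaS application through whatever mechanism the vendor supports.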
Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?
A. One
B. Six
C. Four
D. Three
E. Nine
F. Two
ANSWER37:
F
Notes/References37:
A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets.
Question 38: You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?
A. Binary Authorization
B. Cloud IAM
C. Security Key Enforcement
D. Cloud SCC
E. Cloud KMS
ANSWER38:
A
Notes/References38:
Cloud KMS is Google’s product for managing encryption keys. Security Key Enforcement is about making sure that people’s accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud DLP–or Data Loss Prevention–is for preventing data loss by scanning for and redacting sensitive information. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments.
Question 39: For this question, refer to the Company B case study. Which of the following are most likely to impact the operations of Company B’s game backend and analytics systems?
A. PCI
B. PII
C. SOX
D. GDPR
E. HIPAA
ANSWER39:
B and D
Notes/References39:
There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally Identifiable Information (PII) about the users, who may reside in the European Union, and therefore the EU’s General Data Protection Regulation (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”.
Question 40: Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?
A. Their customers located in the EU may require them to delete their user data and provide evidence of such.
B. They will also need to pass a SOX audit.
C. They handle money-linked information.
D. Their system deals with medical information.
ANSWER40:
D
Notes/References40:
SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others).
Question 41: Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?
A. They handle money-linked information.
B. Their customers located in the EU may require them to delete their user data and provide evidence of such.
C. Their system deals with medical information.
D. They will also need to pass a SOX audit.
ANSWER41:
A
Notes/References41:
SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). ISO is the International Organization for Standardization, and since it has so many completely different certifications, this does not tell you much.
Question 43: Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?
A. Their system deals with medical information.
B. Their customers located in the EU may require them to delete their user data and provide evidence of such.
C. They will also need to pass a SOX audit.
D. They handle money-linked information.
ANSWER43:
B
Notes/References43:
SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others).
Question 44: For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?
A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud
B. Lift and shift all servers at one time
C. Lift and shift one application at a time
D. Lift and shift one server at a time
E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system
F. Enact their disaster recovery plan and fail over
ANSWER44:
F
Notes/References44:
The proposed Lift and Shift options are all talking about different situations than Company C would find themselves in at that time: they’d then have automation to build a complete prod system in the cloud, but they’d just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely seems like it wins. Setting up an additional load balancer and migrating slowly/carefully would take more time.
Question 45: Which of the following commands is most likely to appear in an environment setup script?
A. gsutil mb -l asia gs://${project_id}-logs
B. gcloud compute instances create --zone=<zone> --machine-type=n1-highmem-16 newvm
C. gcloud compute instances create --zone=<zone> --machine-type=f1-micro newvm
D. gcloud compute ssh ${instance_id}
E. gsutil cp -r gs://${project_id}-setup ./install
F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/
ANSWER45:
A
Notes/References45:
The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You’re not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances.
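For illustration, the sort of environment setup script this note is describing might look like the following; the project ID, locations, and bucket names are placeholders:
  #!/bin/bash
  # Create the per-environment buckets the rest of the system expects
  project_id="my-prod-project"
  gsutil mb -l asia gs://${project_id}-logs
  gsutil mb -l asia gs://${project_id}-config
  gsutil mb -l asia gs://${project_id}-setup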
Question 46: Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?
A. /dealerLocations/get
B. /dealerLocations
C. /dealerLocations/list
D. Source and destination
E. /getDealerLocations
ANSWER46:
B
Notes/References46:
It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it’s important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint.
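To make the convention concrete, a resource-oriented design for this API might look something like the following; the paths and verbs are illustrative only:
  GET    /dealerLocations          (list all dealer locations)
  POST   /dealerLocations          (create a new dealer location)
  GET    /dealerLocations/{id}     (retrieve one dealer location)
  PUT    /dealerLocations/{id}     (replace an existing dealer location)
  DELETE /dealerLocations/{id}     (remove a dealer location)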
Question 47: Which of the following commands is most likely to appear in an instance shutdown script?
A. gsutil cp -r gs://${project_id}-setup ./install
B. gcloud compute instances create --zone=<zone> --machine-type=n1-highmem-16 newvm
C. gcloud compute ssh ${instance_id}
D. gsutil mb -l asia gs://${project_id}-logs
E. gcloud compute instances delete ${instance_id}
F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/
G. gcloud compute instances create --zone=<zone> --machine-type=f1-micro newvm
ANSWER47:
F
Notes/References47:
The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (sshed) to. Also, those would be a very unusual time to make a Cloud Storage bucket, since buckets are the overall and highly-scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown time may be a time when you’d want to copy some final logs from the instance into some project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously and not just at shutdown time, in case the instance shuts down unexpectedly and not in an orderly fashion that runs your shutdown script.)
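As an illustration only, a shutdown script along those lines might look like this; the log path and bucket naming are assumptions, and the metadata-server lookups simply recover the project and instance IDs:
  #!/bin/bash
  # Look up the project and instance IDs from the metadata server
  project_id=$(curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/project/project-id")
  instance_id=$(curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/id")
  # Copy this instance's final logs to the project-wide logs bucket
  gsutil cp -r /var/log/myapp/* gs://${project_id}-logs/${instance_id}/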
Question 48: It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?
A. Escalate the decision to the business manager responsible for the SLA
B. Take the system offline
C. Revert the system to the state it was in on Friday morning
D. Investigate the cause of the issue
ANSWER48:
B
Notes/References48:
The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that started happening.) It might seem crazy, but you should as quickly as possible stop the system so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should then do that next, during the disaster response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn’t make any sense. And neither does it make sense to knee-jerk revert the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we’d better assume that “revert the system” refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO.
Question 49: Which of the following are not processes or practices that you would associate with DevOps?
A. Raven-test the candidate
B. Obfuscate the code
C. Only one of the other options is made up
D. Run the code in your cardinal environment
E. Do a canary deploy
ANSWER49:
A and D
Notes/References49:
Testing your understanding of development and operations in DevOps. In particular, you need to know that a canary deploy is a real thing and it can be very useful to identify problems with a new change you’re making before it is fully rolled out to and therefore impacts everyone. You should also understand that “obfuscating” code is a real part of a release process that seeks to protect an organization’s source code from theft (by making it unreadable by humans) and usually happens in combination with “minification” (which improves the speed of downloading and interpreting/running the code). On the other hand, “raven-testing” isn’t a thing, and neither is a “cardinal environment”. Those bird references are just homages to canary deployments.
Question 50: Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal?
A. Object lifecycle management
B. BigQuery Slots
C. Committed use discounts
D. Sustained use discounts
E. Managed instance group autoscaling
F. Pub/Sub topic centralization
ANSWER50:
B and C
Notes/References50:
Pub/Sub usage is based on how much data you send through it, not any sort of “topic centralization” (which isn’t really a thing). Sustained use discounts can reduce costs, but that’s not really something you structure your system around. Now, most organizations prefer to turn Capital Expenditures into Operational Expenses, but since this question is instead asking you to prioritize CapEx, we need to consider the remaining options from the perspective of “spending” (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: You may still have some trouble classifying some cloud expenses as “capital” expenditures). With that in mind, GCE’s Committed Use Discounts do fit: you “buy” (reserve/prepay) some instances ahead of time and then do not have to pay (again) for them as you use them (or don’t use them; you’ve already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity and your queries use that instead of the on-demand capacity. That means you won’t pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about capex.
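As a rough illustration of the committed use discount option (the region, resource amounts, and plan length are placeholders, and flag spellings can vary by SDK version), a commitment is purchased ahead of time like this:
  # Reserve one year of compute capacity up front in a given region
  gcloud compute commitments create my-commitment --region=us-central1 --resources=vcpu=16,memory=64GB --plan=12-month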
Question 51: In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns on Monday?
A. The scrum master is the one who decides
B. The lead architect should get the final say
C. The product owner should get the final say
D. You should put it to a vote of key stakeholders
E. You should put it to a vote of all stakeholders
ANSWER51:
C
Notes/References51:
In Scrum, it is the Product Owner’s role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven’t ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context.
Question 52: Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?
A. Split testing
B. Red-Black
C. A/B
D. Canary
E. Rolling
F. Blue-Green
G. Flex downtime
ANSWER52:
D and E
Notes/References52:
A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application.
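For example, on App Engine a small canary can be set up with traffic splitting; the service and version names below are placeholders:
  # Deploy the new version without routing any traffic to it yet
  gcloud app deploy --version=v2 --no-promote
  # Send 10% of traffic to the canary version and keep 90% on the current one
  gcloud app services set-traffic default --splits=v1=0.9,v2=0.1
If the canary looks healthy after the evaluation window, the split can be shifted entirely to the new version; if not, traffic goes back to v1.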
Question 53: You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?
A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on
B. Hiring and retaining 10X developers is critical to project success
C. A key goal of a proper post-mortem is to identify what processes need to be changed
D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project
E. A key goal of a proper post-mortem is to determine who needs additional training
ANSWER53:
A and C
Notes/References53:
There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future.
Question 54: Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?
A. The next month’s SLA will be increased.
B. The next month’s SLO will be reduced.
C. Your client(s) will have to pay you extra.
D. You will have to pay your client(s).
E. There is no impact on payments.
F. There is not enough information to make a determination.
ANSWER54:
D
Notes/References54:
It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90. Here, the achieved p95 of 250ms already misses the 200ms target, and because p99 latency is always at least as high as p95 latency, the p99 SLA of 200ms was certainly missed as well, so the penalty applies.
Question 55: Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?
A. The next month’s SLA will be increased.
B. There is no impact on payments.
C. There is not enough information to make a determination.
D. Your client(s) will have to pay you extra.
E. The next month’s SLO will be reduced.
F. You will have to pay your client(s).
ANSWER55:
B
Notes/References55:
It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90. Here, though, the target is an SLO rather than an SLA, so whether or not the internal p90 objective was met, there is no payment impact.
Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?
A. Configure the autoscaling thresholds to follow changing load
B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand
C. Run cron jobs on their application servers to scale down at night and up in the morning
D. Use Cloud Load Balancing to balance the traffic highs and lows
E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning
F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace
ANSWER56:
A
Notes/References56:
The case study notes, “Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.” Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn’t a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE–in practice, you’d just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group’s autoscaling to follow demand curves–both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over.
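A sketch of configuring that autoscaling on an existing managed instance group (the group name, region, and thresholds are placeholders):
  # Let the MIG follow demand between 3 and 30 instances based on CPU utilization
  gcloud compute instance-groups managed set-autoscaling app-mig --region=us-central1 --min-num-replicas=3 --max-num-replicas=30 --target-cpu-utilization=0.6 --cool-down-period=90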
Google Cloud Latest News, Questions and Answers online:
Cloud Run vs App Engine: In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint. All the scaling is automatically done for you by Google. Cloud Run depends on the fact that your application should be stateless. This is because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application this means that you should divide it up into a stateless API and a frontend app.
With Google’s App Engine you tell Google how your app should be run. The App Engine will create and run a container from these instructions. Deploying with App Engine is super easy. You simply fill out an app.yml file and Google handles everything for you.
With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers; App Engine is made for developers.
The best choice depends on what you want to optimize, your use-cases and your specific needs.
If your objective is the lowest latency, choose Cloud Run.
Indeed, Cloud Run always uses 1 vCPU (at least 2.4 GHz), and you can choose a memory size from 128 MB to 2 GB.
With Cloud Functions, if you want the best processing performance (2.4 GHz of CPU), you have to pay for 2 GB of memory. If your memory footprint is low, a Cloud Function with 2 GB of memory is overkill and needlessly expensive.
Cutting cost is not always the best strategy for customer satisfaction, but business reality may require it. In any case, it depends heavily on your use case.
Both Cloud Run and Cloud Functions round billing up to the nearest 100ms. If you model the pricing, Cloud Functions come out cheaper when the processing time of a single request stays below the first 100ms. Indeed, you can choose a slower vCPU for a Cloud Function, which increases the processing duration but can still stay under 100ms if tuned well; fewer GHz-seconds are consumed, so you pay less.
The cost comparison between Cloud Functions and Cloud Run goes further than simply comparing a price list. Moreover, in your projects you will often use both solutions to take advantage of their respective strengths and capabilities.
My first choice for development is Cloud Run. Its portability, its testability, and its openness with regard to libraries, languages, and binaries give it too many advantages to pass up for, at worst, similar pricing, and often a real advantage in cost as well as in performance, particularly for concurrent requests. Even if you need the same level of isolation as Cloud Functions (one instance per request), simply set the concurrency parameter to 1.
In addition, Cloud Run's general availability applies to all containers, whatever languages and binaries they use.
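A minimal sketch of such a deployment (the service, image, and region names are placeholders):
  # Deploy an existing container image to fully managed Cloud Run
  gcloud run deploy my-service --image=gcr.io/my-project/my-app:v1 --platform=managed --region=us-central1 --allow-unauthenticated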
Google Cloud Storage: What bucket class gives the best performance? Multiregional buckets perform significantly better for cross-the-ocean fetches; however, the details are a bit more nuanced than that. The performance is dominated by the latency of the physical distance between the client and the Cloud Storage bucket.
If caching is on, and your access volume is high enough to take advantage of caching, there’s not a huge difference between the two offerings (that I can see with the tests). This shows off the power of Google’s Awesome CDN environment.
If caching is off, or the access volume is low enough that you can’t take advantage of caching, then the performance overhead is dominated directly by physics. You should be trying to get the assets as close to the clients as possible, while also considering cost, and the types of redundancy and consistency you’ll need for your data needs.
Conclusion:
GCP, or the Google Cloud Platform, is a cloud-computing platform that provides users with access to a variety of GCP services. The GCP Professional Cloud Architect exam is designed to test a candidate’s ability to design, implement, and manage GCP solutions. The GCP questions cover a wide range of topics, from basic GCP concepts to advanced GCP features. To become a GCP Certified Professional Cloud Architect, you must pass the exam. Below are some basic GCP questions to answer to get yourself familiarized with the Google Cloud Platform:
1) What is GCP? 2) What are the benefits of using GCP? 3) How can GCP help my business? 4) What are some of the features of GCP? 5) How is GCP different from other clouds? 6) Why should I use GCP? 7) What are some of GCP’s strengths? 8) How is GCP priced? 9) Is GCP easy to use? 10) Can I use GCP for my personal projects? 11) What services does GCP offer? 12) What can I do with GCP? 13) What languages does GCP support? 14) What platforms does GCP support? 15) Does GCP support hybrid deployments? 16) Does GCP support on-premises deployments? 17) Is there a free tier on GCP? 18) How do I get started with using GCP?
Top high-paying certifications:
Google Certified Professional Cloud Architect – $139,529
First of all, I would like to start with the fact that I already have around 1 year of experience with GCP in depth, where I was working on GKE, IAM, storage and so on. I also obtained GCP Associate Cloud Engineer certification back in June as well, which helps with the preparation.
I started with Dan Sullivan’s Udemy course for Professional Cloud Architect and did some refresher on the topics I was not familiar with, such as BigTable, BigQuery, DataFlow and all that. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture.
In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.
As for practice exams, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak in and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear in the exam.
I used TutorialsDojo (Jon Bonso) for preparation for Associate Cloud Engineer before and I can attest that Whizlabs is not that good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.
One thing to note is that there wasn’t even a single question similar to the ones from the Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non-case-study questions. Many questions focused on App Engine, data analytics, and networking. There were some Kubernetes questions based on Anthos and cluster networking. I got a tough question regarding storage as well.
I initially thought I would fail, but I pushed on and started tackling the multiple choices by process of elimination, using the keywords in the questions. 50 questions in 2 hours is tough, especially due to the lengthy questions and multiple choices. I do not know how this compares to the AWS Solutions Architect Professional exam in toughness, but some people do say the GCP professional exam is tougher than AWS’s.
All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.
Google Associate Cloud Engineer Exam Preparation: Questions and Answers Dumps
GCP, or the Google Cloud Platform, is a cloud-computing platform that provides users with access to a variety of GCP services. The GCP ACE exam is designed to test a candidate’s ability to design, implement, and manage GCP solutions. The GCP ACE questions cover a wide range of topics, from basic GCP concepts to advanced GCP features. To become a GCP Certified Associate Cloud Engineer, you must pass the GCP ACE exam. However, before you can take the exam, you must first complete the GCP ACE Quizzes below. The GCP ACE Quiz is designed to help you prepare for the GCP ACE exam by testing your knowledge of GCP concepts. After you complete the GCP ACE Quiz, you will be able to pass the GCP Practice Exam with ease.
GCP, Google Cloud Platform, has been a game changer in the tech industry. It allows organizations to build and run applications on Google’s infrastructure. The GCP platform is trusted by many companies because it is reliable, secure and scalable.
The Google Cloud Associate Cloud Engineer average salary: $145,769/yr
An Associate Cloud Engineer deploys applications, monitors operations, and manages enterprise solutions.
The Associate Cloud Engineer exam assesses your ability to: Set up a cloud solution environment, Plan and configure a cloud solution, Deploy and implement a cloud solution, Ensure successful operation of a cloud solution, Configure access and security.
Question 1: You are a project owner and need your co-worker to deploy a new version of your application to App Engine. You want to follow Google’s recommended practices. Which IAM roles should you grant your co-worker?
Question 2: Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should you do?
A. Link a credit card with a monthly limit equal to your budget.
Question 3: You have a project using BigQuery. You want to list all BigQuery jobs for that project. You want to set this project as the default for the bq command-line tool. What should you do?
A. Use “gcloud config set project” to set the default project.
B. Use “bq config set project” to set the default project.
Question 4: Your project has all its Compute Engine resources in the europe-west1 region. You want to set europe-west1 as the default region for gcloud commands. What should you do?
A. Use Cloud Shell instead of the command line interface of your device. Launch Cloud Shell after you navigate to a resource in the europe-west1 region. The europe-west1 region will automatically become the default region.
B. Use “gcloud config set compute/region europe-west1” to set the default region for future gcloud commands.
C. Use “gcloud config set compute/zone europe-west1” to set the default region for future gcloud commands.
D. Create a VPN from on-premises to a subnet in europe-west1, and use that connection when executing gcloud commands.
Question 5: You developed a new application for App Engine and are ready to deploy it to production. You need to estimate the costs of running your application on Google Cloud Platform as accurately as possible. What should you do?
A. Create a YAML file with the expected usage. Pass this file to the “gcloud app estimate” command to get an accurate estimation.
B. Multiply the costs of your application when it was in development by the number of expected users to get an accurate estimation.
C. Use the pricing calculator for App Engine to get an accurate estimation of the expected charges.
D. Create a ticket with Google Cloud Billing Support to get an accurate estimation.
ANSWER 5:
C
Notes/Hint 5:
This is the proper way to estimate charges.
Question 6: Your company processes high volumes of IoT data that are time-stamped. The total data volume can be several petabytes. The data needs to be written and changed at a high speed. You want to use the most performant storage option for your data. Which product should you use?
A. Cloud Datastore
B. Cloud Storage
C. Cloud Bigtable
D. BigQuery
ANSWER 6:
C
Notes/Hint 6:
Cloud Bigtable is the most performant storage option to work with IoT and time series data.
Question 7: Your application has a large international audience and runs stateless virtual machines within a managed instance group across multiple locations. One feature of the application lets users upload files and share them with other users. Files must be available for 30 days; after that, they are removed from the system entirely. Which storage solution should you choose?
Buckets can be multi-regional and have lifecycle management.
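A sketch of the 30-day cleanup with Object Lifecycle Management (the bucket name is a placeholder). Given a lifecycle.json file containing a single delete-after-30-days rule, apply it to the uploads bucket:
  # lifecycle.json
  {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 30}}]}
  # Apply the policy to the bucket
  gsutil lifecycle set lifecycle.json gs://my-shared-uploads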
Question 8: You have a definition for an instance template that contains a web application. You are asked to deploy the application so that it can scale based on the HTTP traffic it receives. What should you do?
A. Create a VM from the instance template. Create a custom image from the VM’s disk. Export the image to Cloud Storage. Create an HTTP load balancer and add the Cloud Storage bucket as its backend service.
B. Create a VM from the instance template. Create an App Engine application in Automatic Scaling mode that forwards all traffic to the VM.
C. Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer.
D. Create the necessary amount of instances required for peak user traffic based on the instance template. Create an unmanaged instance group and add the instances to that instance group. Configure the instance group as the Backend Service of an HTTP load balancer.
Question 9: You are creating a Kubernetes Engine cluster to deploy multiple pods inside the cluster. All container logs must be stored in BigQuery for later analysis. You want to follow Google-recommended practices. Which two approaches can you take?
A. Turn on Stackdriver Logging during the Kubernetes Engine cluster creation.
B. Turn on Stackdriver Monitoring during the Kubernetes Engine cluster creation.
C. Develop a custom add-on that uses Cloud Logging API and BigQuery API. Deploy the add-on to your Kubernetes Engine cluster.
D. Use the Stackdriver Logging export feature to create a sink to Cloud Storage. Create a Cloud Dataflow job that imports log files from Cloud Storage to BigQuery.
E. Use the Stackdriver Logging export feature to create a sink to BigQuery. Specify a filter expression to export log records related to your Kubernetes Engine cluster only.
Answer 9:
A and E
Notes/Hint 9:
Creating the cluster with the Stackdriver Logging option enabled will send all the container logs to Stackdriver Logging. From there, a Logging export sink with a filter scoped to this cluster can forward those log records on to BigQuery, which is why A and E together satisfy the requirement.
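A sketch of the export sink described in option E (the project, dataset, cluster name, and filter are placeholders); note that the sink’s writer identity also needs permission to write to the target dataset:
  # Export only this GKE cluster's container logs to a BigQuery dataset
  gcloud logging sinks create gke-logs-to-bq bigquery.googleapis.com/projects/my-project/datasets/gke_logs --log-filter='resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"'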
Question 10: You need to create a new Kubernetes Cluster on Google Cloud Platform that can autoscale the number of worker nodes. What should you do?
A. Create a cluster on Kubernetes Engine and enable autoscaling on Kubernetes Engine.
B. Create a cluster on Kubernetes Engine and enable autoscaling on the instance group of the cluster.
C. Configure a Compute Engine instance as a worker and add it to an unmanaged instance group. Add a load balancer to the instance group and rely on the load balancer to create additional Compute Engine instances when needed.
D. Create Compute Engine instances for the workers and the master, and install Kubernetes. Rely on Kubernetes to create additional Compute Engine instances when needed.
Question 11: You have an application server running on Compute Engine in the europe-west1-d zone. You need to ensure high availability and replicate the server to the europe-west2-c zone using the fewest steps possible. What should you do?
A. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west2-c zone. Create a new VM with that disk.
B. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west1-d zone and then move the disk to europe-west2-c. Create a new VM with that disk.
C. Use “gcloud” to copy the disk to the europe-west2-c zone. Create a new VM with that disk.
D. Use “gcloud compute instances move” with parameter “--destination-zone europe-west2-c” to move the instance to the new zone.
Answer 11:
A
Notes/Hint 11:
This makes sure the VM gets replicated in the new zone.
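A sketch of those steps with gcloud (the disk, snapshot, and instance names are placeholders):
  # Snapshot the existing disk in europe-west1-d
  gcloud compute disks snapshot app-disk --zone=europe-west1-d --snapshot-names=app-snap
  # Create a new disk from the snapshot in europe-west2-c and boot a VM from it
  gcloud compute disks create app-disk-w2 --source-snapshot=app-snap --zone=europe-west2-c
  gcloud compute instances create app-server-w2 --zone=europe-west2-c --disk=name=app-disk-w2,boot=yes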
Question 12: Your company has a mission-critical application that serves users globally. You need to select a transactional, relational data storage system for this application. Which two products should you consider?
A. BigQuery
B. Cloud SQL
C. Cloud Spanner
D. Cloud Bigtable
E. Cloud Datastore
Answer 12:
B and C
Notes/Hint 12:
Cloud SQL is a relational and transactional database in the list.
Cloud Spanner is also relational and transactional, and it is built to serve users globally.
Question 13: You have a Kubernetes cluster with 1 node-pool. The cluster receives a lot of traffic and needs to grow. You decide to add a node. What should you do?
A. Use “gcloud container clusters resize” with the desired number of nodes.
B. Use “kubectl container clusters resize” with the desired number of nodes.
C. Edit the managed instance group of the cluster and increase the number of VMs by 1.
D. Edit the managed instance group of the cluster and enable autoscaling.
Answer 13:
A
Notes/Hint 13:
This resizes the cluster to the desired number of nodes.
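For example (the cluster name, zone, and node count are placeholders; older SDK versions spelled the flag --size):
  # Grow the default node pool to 4 nodes
  gcloud container clusters resize my-cluster --num-nodes=4 --zone=us-central1-a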
Question 14: You created an update for your application on App Engine. You want to deploy the update without impacting your users. You want to be able to roll back as quickly as possible if it fails. What should you do?
A. Delete the current version of your application. Deploy the update using the same version identifier as the deleted version.
B. Notify your users of an upcoming maintenance window. Deploy the update in that maintenance window.
C. Deploy the update as the same version that is currently running.
D. Deploy the update as a new version. Migrate traffic from the current version to the new version.
Question 15: You have created a Kubernetes deployment, called Deployment-A, with 3 replicas on your cluster. Another deployment, called Deployment-B, needs access to Deployment-A. You cannot expose Deployment-A outside of the cluster. What should you do?
A. Create a Service of type NodePort for Deployment A and an Ingress Resource for that Service. Have Deployment B use the Ingress IP address.
B. Create a Service of type LoadBalancer for Deployment A. Have Deployment B use the Service IP address.
C. Create a Service of type LoadBalancer for Deployment A and an Ingress Resource for that Service. Have Deployment B use the Ingress IP address.
D. Create a Service of type ClusterIP for Deployment A. Have Deployment B use the Service IP address.
Question 16: You need to estimate the annual cost of running a Bigquery query that is scheduled to run nightly. What should you do?
A. Use “gcloud query --dry_run” to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
B. Use “bq query --dry_run” to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
C. Use “gcloud estimate” to determine the amount billed for a single query. Multiply this amount by 365.
D. Use “bq estimate” to determine the amount billed for a single query. Multiply this amount by 365.
Answer 16:
B
Notes/Hint 16:
This is the correct way to estimate the yearly BigQuery querying costs.
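For example (the query itself is only illustrative):
  # Report how many bytes the query would scan, without running it or incurring cost
  bq query --use_legacy_sql=false --dry_run 'SELECT COUNT(*) FROM `my-project.my_dataset.events` WHERE event_date = "2023-01-01"'
The reported bytes-processed figure can then be multiplied out in the Pricing Calculator for a yearly estimate.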
Question 17: You want to find out who in your organization has Owner access to a project called “my-project”.What should you do?
A. In the Google Cloud Platform Console, go to the IAM page for your organization and apply the filter “Role:Owner”.
B. In the Google Cloud Platform Console, go to the IAM page for your project and apply the filter “Role:Owner”.
C. Use “gcloud iam list-grantable-role --project my-project” from your Terminal.
D. Use “gcloud iam list-grantable-role” from Cloud Shell on the project page.
Answer 17:
B
Notes/Hint 17:
B is correct because this shows you the Owners of the project.
Question 18: You want to create a new role for your colleagues that will apply to all current and future projects created in your organization. The role should have the permissions of the BigQuery Job User and Cloud Bigtable User roles. You want to follow Google’s recommended practices. How should you create the new role?
A. Use “gcloud iam combine-roles --global” to combine the 2 roles into a new custom role.
B. For one of your projects, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role. Use “gcloud iam promote-role” to promote the role from a project role to an organization role.
C. For all projects, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role.
D. For your organization, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role.
Answer 18:
D
Notes/Hint 18:
D is correct because this creates a new role with the combined permissions on the organization level.
Question 19: You work in a small company where everyone should be able to view all resources of a specific project. You want to grant them access following Google’s recommended practices. What should you do?
A. Create a script that uses “gcloud projects add-iam-policy-binding” for all users’ email addresses and the Project Viewer role.
B. Create a script that uses “gcloud iam roles create” for all users’ email addresses and the Project Viewer role.
C. Create a new Google Group and add all users to the group. Use “gcloud projects add-iam-policy-binding” with the Project Viewer role and Group email address.
D. Create a new Google Group and add all members to the group. Use “gcloud iam roles create” with the Project Viewer role and Group email address.
Question 20: You need to verify the assigned permissions in a custom IAM role. What should you do?
A. Use the GCP Console, IAM section to view the information.
B. Use the “gcloud init” command to view the information.
C. Use the GCP Console, Security section to view the information.
D. Use the GCP Console, API section to view the information.
Answer 20:
A
Notes/Hint 20:
A is correct because this is the correct console area to view permission assigned to a custom role in a particular project.
Question 21: Your coworker created a deployment for your application container. You can see the deployment under Workloads in the console. They’re out for the rest of the week, and your boss needs you to complete the setup by exposing the workload. What’s the easiest way to do that?
A. Create a new Service that points to the existing deployment.
B. Create a new DaemonSet.
C. Create a Global Load Balancer that points to the pod in the deployment.
D. Create a Static IP Address Resource for the Deployment.
Question 22: Your team is working on designing an IoT solution. There are thousands of devices that need to send periodic time series data for processing. Which services should be used to ingest and store the data?
A. Pub/Sub, Datastore
B. Pub/Sub, Dataproc
C. Dataproc, Bigtable
D. Pub/Sub, Bigtable
Answer 22:
D
Notes/Hint 22:
Pub/Sub is able to handle the ingestion, and Bigtable is a great solution for time series data.
Question 23: You have an App Engine application running in us-east1. You’ve noticed 90% of your traffic comes from the West Coast. You’d like to change the region. What’s the best way to change the App Engine region?
A. Use the gcloud app region set command and supply the name of the new region.
B. Contact Google Cloud Support and request the change.
C. From the console, under the App Engine page, click edit, and change the region drop-down.
D. Create a new project and create an App Engine instance in us-west2.
Question 24: You’ve uploaded some static web assets to a public storage bucket for the developers. However, they’re not able to see them in the browser due to what they called “CORS errors”. What’s the easiest way to resolve the errors for the developers?
A. Advise the developers to adjust the CORS configuration inside their code.
B. Use the gsutil cors set command to set the CORS configuration on the bucket.
C. Use the gsutil set cors command to set the CORS configuration on the bucket.
D. Use the gsutil set cors command to set the CORS configuration on the object.
Answer 24:
B
Notes/Hint 24:
CORS settings are made on a bucket, not an object. You can set the CORS configuration on the bucket, allowing the objects to be viewable from the required domains.
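A sketch of that bucket-level fix (the origin and bucket name are placeholders). Given a cors.json file with the allowed origins and methods, apply it to the bucket:
  # cors.json
  [{"origin": ["https://www.example.com"], "method": ["GET"], "responseHeader": ["Content-Type"], "maxAgeSeconds": 3600}]
  # Apply the CORS configuration to the bucket
  gsutil cors set cors.json gs://my-static-assets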
Question 25: You’ve uploaded some PDFs to a public bucket. When users browse to the documents, they’re downloaded rather than viewed in the browser. How can we ensure that the PDFs are viewed in the browser?
A. This is a browser setting and not something that can be changed.
B. Use the gsutil set file-type pdf command.
C. Set the Content metadata for the object to “application/pdf”.
D. Set the Content-Type metadata for the object to “application/pdf”.
Question 26: You’ve been tasked with getting all of your team’s public SSH keys onto all of the instances of a particular project. You’ve collected them all. With the fewest steps possible, what is the simplest way to get the keys deployed?
A. Use the gcloud compute ssh command to upload all the keys
B. Format all of the keys as needed and then, using the user interface, upload each key one at a time.
C. Add all of the keys into a file that’s formatted according to the requirements. Use the gcloud compute project-info add-metadata command to upload the keys.
D. Add all of the keys into a file that’s formatted according to the requirements. Use the gcloud compute instances add-metadata command to upload the keys to each instance
Answer 26:
C
Notes/Hint 26:
This uploads the keys as project-wide metadata, which allows SSH access for the users whose keys were uploaded.
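A sketch of what that looks like in practice; the usernames and key material are placeholders:

```
# keys.txt - one entry per line, formatted as USERNAME:PUBLIC_KEY, for example:
#   alice:ssh-rsa AAAAB3NzaC1yc2E... alice@example.com
#   bob:ssh-rsa AAAAB3NzaC1yc2E... bob@example.com

# Upload the whole file as project-wide metadata under the ssh-keys key.
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=keys.txt
```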
Question 27: What must you do before you create an instance with a GPU? (Pick at least 2)
A. You must only select the GPU driver type. The correct base image is selected automatically.
B. You must select which boot disk image you want to use for the instance.
C. Nothing. GPU drivers are automatically included with the boot disk images.
D. You must make sure the selected image has the appropriate GPU driver installed.
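As a hedged illustration of those two steps (choosing a boot image, and remembering that a GPU driver still has to be installed), with placeholder zone, machine type, and GPU type:

```
# Create an N1 instance with one T4 GPU attached; GPUs require a TERMINATE
# maintenance policy, and the chosen boot image determines which driver to install.
gcloud compute instances create gpu-demo \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=debian-11 \
  --image-project=debian-cloud
# After boot, the NVIDIA driver still has to be installed on the instance
# unless the chosen image already bundles it.
```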
Question 30: Your security team has been reluctant to move to the cloud because they don’t have the level of network visibility they’re used to. Which feature might help them to gain insights into your Google Cloud network?
A. Routes
B. Subnets
C. Flow Logs
D. Firewall rules
Answer 30:
C
Notes/Hint 30:
Flow logs are great for gaining insights into what's happening on a network. They provide a sample of the network flows to and from instances.
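Flow logs are enabled per subnet; a minimal sketch with placeholder subnet and region names:

```
# Turn on VPC Flow Logs for an existing subnet.
gcloud compute networks subnets update my-subnet \
  --region=us-east1 \
  --enable-flow-logs
```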
Question 31: You’re in charge of setting up a Stackdriver account to monitor 3 separate projects. Which of the following is a Google best practice?
A. Use the existing project with the least resources as the host project for the Stackdriver account.
B. Use the existing project with the most resources as the host project for the Stackdriver account.
C. Create a new, empty project to use as the host project for the Stackdriver account.
D. Use one of the existing projects as the host project for the Stackdriver account.
Question 32: You’re attempting to set up a File based Billing Export. Which of the following components are required?
A. A Cloud Storage bucket.
B. A BigQuery dataset.
C. A report prefix.
D. A Budget and at least one alert.
Answer 32:
A and C
Notes/Hint 32:
A Cloud Storage bucket is required as the destination for the exported files. The report prefix is the string that each exported file name starts with.
Question 33: You’ve installed the Google Cloud SDK natively on your Mac. You’d like to install the kubectl component via the Google Cloud SDK. Which command would accomplish this?
A. sudo apt-get install kubectl
B. gcloud components install kubectl
C. pip install kubectl
D. brew install kubectl
Answer 33:
B
Notes/Hint 33:
For Windows and Mac, you can use the built-in component manager.
Question 34: You’re attempting to set the default Compute Engine zone with the Cloud SDK. Which of the following commands would work?
A. gcloud config set compute/zone us-east1-c
B. gcloud set compute\zone us-east1
C. gcloud set compute/zone us-east1
D. gcloud config set compute\zone us-east1
Answer 34:
A
Notes/Hint 34:
gcloud config set compute/zone us-east1-c is the correct syntax: configuration properties use the section/property format, and the value must be a full zone name (such as us-east1-c), not just a region.
Question 35: You’ve been hired as a Cloud Engineer for a 2-year-old startup company. Recently they’ve had a bit of turnover, and several engineers have left the company to pursue different projects. Shortly after one of them leaves, it is found that a core project seems to have been deleted. What is the most likely cause of the project’s deletion?
A. You’ve been the victim of the latest malware that deletes one project per hour until you pay them to stop.
B. One of the engineers intentionally deleted the project out of spite.
C. The project was created by one of the engineers and not attached to the organization.
D. A failed attempt to pay the bill resulted in Google deleting the project.
Question 36: You’re using Stackdriver to set up some alerts. You want to reuse your existing REST-based notification tools that your ops team has created. You want the setup to be as simple as possible to configure and maintain. Which notification option would be the best option?
A. Use a Slack bot to listen for messages posted by Google.
B. Send it to an email account that is being polled by a custom process that can handle the notification.
C. Send notifications via SMS and use a custom app to forward them to the REST API.
D. Webhooks
Answer 36:
D
Notes/Hint 36:
Webhooks would allow you to easily send the notification to an HTTP(S) endpoint. Given the above scenario, this is the best option for something custom.
Question 37: A member of the finance team informed you that one of the projects is using the old billing account. What steps should you take to resolve the problem?
A. Submit a support ticket requesting the change.
B. Go to the Billing page, locate the list of projects, find the project in question and select Change billing account. Then select the correct billing account and save.
C. Go to the Project page; expand the Billing tile; select the Billing Account option; select the correct billing account and save.
D. Delete the project and recreate it with the correct billing account.
Answer 37:
B
Notes/Hint 37:
Go to the Billing page, locate the list of projects, find the project in question and select Change billing account. Then select the correct billing account and save.
Question 38: You’re using a self-serve Billing Account to pay for your 2 projects. Your billing threshold is set to $1,000.00, and between the two projects you’re spending roughly $50 per day. It has been 18 days since you were last charged. Given the above data, when will you likely be charged next?
A. On the first day of the next month.
B. In 2 days when you’ll hit your billing threshold.
C. On the thirtieth day of the month.
D. In 12 days, making it 30 days since the previous payment.
Answer 38:
B
Notes/Hint 38:
With self-serve billing, you pay when you hit the billing threshold or every 30 days, whichever happens first. Since the scenario assumes roughly $50 per day, after 18 days you’ve accrued about $900, so you’ll hit the $1,000 threshold in about 2 more days.
Question 39: You have 3 Cloud Storage buckets that all store sensitive data. Which grantees should you audit to ensure that these buckets are not public?
A. allUsers
B. allAuthenticatedUsers
C. publicUsers
D. allUsers and allAuthenticatedUsers
Answer 39:
D
Notes/Hint 39:
Either of these tokens represents public users. allAuthenticatedUsers represents a user with a Google account. They don’t need to be part of your organization. Neither token should be used to grant permissions unless the bucket is truly public.
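One way to audit this from the command line (the bucket names are placeholders):

```
# Dump each bucket's IAM policy and flag any public grantees.
for b in gs://bucket-one gs://bucket-two gs://bucket-three; do
  echo "== $b =="
  gsutil iam get "$b" | grep -E 'allUsers|allAuthenticatedUsers' || echo "no public grants"
done
```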
Question 40: You’ve been asked to help onboard a new member of the big-data team. They need full access to BigQuery. Which type of role would be the most efficient to set up while following the principle of least privilege?
A. Primitive Role
B. Custom Role
C. Managed Role
D. Predefined Role
Answer 40:
D
Notes/Hint 40:
Predefined roles would work great for this use case because they’re specific to resources. BigQuery has several predefined roles including a “BigQuery Admin” role.
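Granting a predefined BigQuery role takes one command; the project and user below are placeholders, and roles/bigquery.admin is just one of the available predefined roles:

```
# Bind the predefined BigQuery Admin role to the new team member.
gcloud projects add-iam-policy-binding my-project \
  --member="user:new.analyst@example.com" \
  --role="roles/bigquery.admin"
```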
Question 41: Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?
A. Create an export to the sink that saves logs from Cloud Audit to BigQuery.
B. Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
C. Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.
D. Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.
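No answer key is given here, but a sketch of the Coldline-sink approach (option B) might look like the following; the sink name, bucket, and filter are illustrative only:

```
# Export Cloud Audit Logs to an existing Coldline bucket.
gcloud logging sinks create audit-archive \
  storage.googleapis.com/my-audit-archive-bucket \
  --log-filter='logName:"cloudaudit.googleapis.com"'

# The command prints a writer identity (a service account); grant it
# objectCreator on the destination bucket so the export can write files.
```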
Question 42: You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?
A. Create a Cloud Memorystore for Redis instance with 32-GB capacity.
B. Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.
C. Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.
D. Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.
Answer 42: B
Question 43: You are hosting an application on bare-metal servers in your own data center. The application needs access to Cloud Storage. However, security policies prevent the servers hosting the application from having public IP addresses or access to the internet. You want to follow Google-recommended practices to provide the application with access to Cloud Storage. What should you do?
A. 1. Use nslookup to get the IP address for storage.googleapis.com. 2. Negotiate with the security team to be able to give a public IP address to the servers. 3. Only allow egress traffic from those servers to the IP addresses for storage.googleapis.com.
B. 1. Using Cloud VPN, create a VPN tunnel to a Virtual Private Cloud (VPC) in Google Cloud. 2. In this VPC, create a Compute Engine instance and install the Squid proxy server on this instance. 3. Configure your servers to use that instance as a proxy to access Cloud Storage.
C. 1. Use Migrate for Compute Engine (formerly known as Velostrata) to migrate those servers to Compute Engine. 2. Create an internal load balancer (ILB) that uses storage.googleapis.com as backend. 3. Configure your new instances to use this ILB as proxy.
D. 1. Using Cloud VPN or Interconnect, create a tunnel to a VPC in Google Cloud. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network through the VPN tunnel. 3. In your on-premises network, configure your DNS server to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
Answer 43: D
Question 44: You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What should you do?
A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic. 2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run. 2. Create a Cloud Pub/Sub subscription for that topic. 3. Make your application pull messages from that subscription.
C. 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal. 2. Create a Cloud Pub/Sub subscription for that topic. 3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.
Answer 44: C
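For reference, the push-subscription pattern described in option C can be wired up roughly as follows; every name, region, and URL here is a placeholder:

```
# Service account that Pub/Sub will use to call the Cloud Run service.
gcloud iam service-accounts create run-pubsub-invoker

# Allow that service account to invoke the Cloud Run service.
gcloud run services add-iam-policy-binding my-service \
  --region=us-central1 \
  --member="serviceAccount:run-pubsub-invoker@my-project.iam.gserviceaccount.com" \
  --role="roles/run.invoker"

# Push subscription that delivers messages to the service's HTTPS endpoint.
gcloud pubsub subscriptions create my-topic-sub \
  --topic=my-topic \
  --push-endpoint=https://my-service-abc123-uc.a.run.app/ \
  --push-auth-service-account=run-pubsub-invoker@my-project.iam.gserviceaccount.com
```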
Question 45: You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs. What should you do?
A. Deploy the container on Cloud Run.
B. Deploy the container on Cloud Run on GKE.
C. Deploy the container on App Engine Flexible.
D. Deploy the container on GKE with cluster autoscaling and horizontal pod autoscaling enabled.
Answer 45: A
Question 46: Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do?
A. Link the acquired company’s projects to your company’s billing account.
B. Configure the acquired company’s billing account and your company’s billing account to export the billing data into the same BigQuery dataset.
C. Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company’s billing account.
D. Create a new GCP organization and a new billing account. Migrate the acquired company’s projects and your company’s projects into the new GCP organization and link the projects to the new billing account.
Question 47: You built an application on Google Cloud that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?
A. Add the support team group to the roles/monitoring.viewer role
B. Add the support team group to the roles/spanner.databaseUser role.
C. Add the support team group to the roles/spanner.databaseReader role.
D. Add the support team group to the roles/stackdriver.accounts.viewer role.
Answer 47: A
Question 48: For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Cloud Logging agent on all the instances. You want to minimize cost. What should you do?
A. 1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs.
B. 1. In Cloud Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.
C. 1. In Cloud Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.
D. 1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery Job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.
Answer 48: C
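Answer C maps to a single log sink; a hedged sketch (note that BigQuery dataset IDs cannot contain hyphens, so the dataset is shown as platform_logs):

```
# Export only Compute Engine logs to the BigQuery dataset.
gcloud logging sinks create compute-to-bq \
  bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
  --log-filter='resource.type="gce_instance"'

# Grant the sink's writer identity the BigQuery Data Editor role on the dataset
# so the export can write log tables.
```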
Question 49: You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do?
A. Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.
B. Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.
C. With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.
D. In the cluster’s definition in Deployment Manager, add a metadata that has kube-system as key and the DaemonSet manifest as value.
Question 50: You are building an application that will run in your data center. The application will use Google Cloud Platform (GCP) services like AutoML. You created a service account that has appropriate access to AutoML. You need to enable authentication to the APIs from your on-premises environment. What should you do?
A. Use service account credentials in your on-premises application.
B. Use gcloud to create a key file for the service account that has appropriate permissions.
C. Set up direct interconnect between your data center and Google Cloud Platform to enable authentication for your on-premises applications.
D. Go to the IAM & admin console, grant a user account permissions similar to the service account permissions, and use this user account for authentication from your data center.
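No answer key is given here, but a hedged sketch of the service account credentials approach described in options A and B; the account name and key path are placeholders:

```
# Create a key file for the existing service account.
gcloud iam service-accounts keys create automl-key.json \
  --iam-account=automl-caller@my-project.iam.gserviceaccount.com

# On the on-premises host, point client libraries at the key so they
# authenticate as that service account.
export GOOGLE_APPLICATION_CREDENTIALS="/secure/path/automl-key.json"
```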
Yes, Google App Engine (GAE), a fully managed PaaS, is 100% worth it if you:
want a ready, quick platform to build web applications and mobile backends at cloud scale with a very low starting cost
want to get rid of the burden of managing and provisioning infrastructure, application security, and scaling
are fine with almost no control over the web server and supporting software such as the database, file storage, and messaging mechanism. You have to live with what GAE offers and choose from the options available. Forget about customization!
can live with a fixed set of language runtimes such as Node.js, Java, Ruby, C#, Go, Python, …
Google App Engine is a PaaS (Platform as a Service) used to deploy large-scale web and mobile apps. Sites built on it include: Disney,
Snapchat, YouTube, Accenture, Practo, Samba Tech, Buddy, Kam Bam, Coca-Cola, The New York Times, Stack
It is one of the most trusted cloud platforms used by top companies, and we will keep seeing more sites deploy Google App Engine for their web & app hosting.
Well, I believe it because I met and discussed it with some of the Google engineers responsible for that area. And I am not special in that respect: it’s not a secret. Here’s the missing link: Google runs KVM in a container. To be crystal clear, a container is not an actual Linux construct. There is no Linux system call you can make to create a container. Instead, it is the term we give to the usage of Linux primitives like namespaces and cgroups to partition applications into their own Linux-level virtual compute space. Except we don’t call it that, we call it a container. So, at the lowest level, Google’s infrastructure schedules containers. To create a virtual machine, google runs KVM in one of those containers. So the document you link to is absolutely valid *and* KVM runs in a containe(more)
No, but to be honest, I think that’s what their gaming system is for. Reverse marketing. They don’t expect it to be a hit, but if they’re almost good enough for gaming, then they’re certainly good enough for me. They’re not aiming for gamers, but everyone else. There is definitely a market for public VDI. I was working on that concept ten years ago, but I didn’t have the resources to pull it off. Back then, watching Youtube videos on the client was not feasible. These days, you could probably kill the whole PC industry if you had the resources. If Google develops something like JackPC that is able to connect to their Stadia and provide a VM, I would recommend it to my father, but I wouldn’t use it, because I still have a long life to live and I’m not giving it to Google. But if they made i(more)
Google runs Linux on its hardware (AKA “Linux on bare metal”). As part of that Linux image, it has its own Linux container implementation based on cgroups and namespaces. In Google Cloud platform, it then runs KVM inside a Linux container, and the VMs run on top of KVM. So the hierarchy is VM->KVM->Linux->Bare metal(more)
I would suggest you read this document thoroughly, so that you can see that logging into Compute instances is not that tedious… 🙂 Connecting to instances using advanced methods | Compute Engine Documentation | Google Cloud(more)
Let’s use two variables (although there can be more): ease of administration and constraints of use. App Engine: from your side there is almost no administration; you write code (with somewhat limited possibilities), upload it, and basically have no other major concerns (well, maybe how to lower your bills if your app gets popular). All the rest (storage, scaling, installing programs, etc.) is handled by App Engine. Compute Engine is a virtual machine with a preinstalled OS, and you can do with it whatever you want. That means you have to install all programs yourself, but you are not limited in what you can do with it. Container Engine is another level above Compute Engine, i.e. it’s a cluster of several Compute Engine instances which can be centrally managed. There is also one level between GAE and GCE:(more)
Both of them have almost the same price but they have different types of discounts. For instance, AWS has a “Reserved Instance” discount model for 1- or 3-year purchases. You have to pay almost 1/3 of the period as pre-paid and you’ll get 30–60% discounts depending on the period you choose and the EC2 instance type you have. Google Cloud has a monthly discount model that applies automatically if you use a Compute Engine instance for more than 10 days in a month. If you run the compute instance throughout the month, you may get around a 30% discount without pre-paying anything. So both of them have discounts, but in different financial payment models. As an alternative, you can check out DigitalOcean for affordable prices.(more)
They’re three different approaches to running services on virtual machines. AppEngine is designed around automatic scaling of services. There’s actually two different flavors of AppEngine entirely : the “standard environment,” which is a sandbox, and the “flexible environment,” which is a more traditional (though still not traditional!) VM running in a Docker container. Both versions are designed to automatically spawn more instances of your service in response to increases in load, and isolate you from a lot of hard SRE problems. Compute Engine is just plain old virtual machines. If you want to run an instance of a VM with a certain amount of memory and hard drive space, running under a given version of Linux, and not have to worry about physical equipment, Compute Engine is for you. (Mor(more)
I do not understand why the question asks about both EC2/Compute Engine and Cloud Storage/S3. Cloud Storage/S3 is used to serve static websites. EC2/Compute Engine is typically used to serve dynamic content (however, it can serve static websites too). I would try to figure out which one of these suits your use better. In both cases, however, GCP is cheaper (you also get credits to use it free for one year) – they even have a page where you can calculate how much you save moving from AWS to GCP → Google Cloud Platform Pricing Calculator | Google Cloud Platform | Google Cloud (the only case where I have seen GCP being more expensive is when it comes to hosting proprietary licensed DBs like MS SQL).(more)
We started offering our Hadoop service on GCE. We ran Hadoop workloads with a root persistent disk (storage over the network) and an additional persistent disk of 500 GB. Consistently, we observed that the performance was better than other leading cloud providers where we used the instances’ local disks. A few months back, GCE was offering scratch disks. They decided to replace scratch disks with persistent disks when they went GA. This fact clearly shows that there was enough confidence that persistent disks were performing well compared to scratch disks (if that weren’t the case, Google would not have made this bold move and would have continued offering scratch disks like AWS). This performance must partly be attributed to their networking stack. It’s considered the best out there in the(more)
Google has been building and using its own private cloud since the start of the company. They have always been known for setting the standard in many industries, and public cloud is what’s happening now. For years, people have wanted to use their cloud technology (Colossus, BigTable, GAE, etc.). Strategically, Google knows that if they focus more on providing and marketing their public cloud based on what they currently use, people who look up to them will see it as the standard, and that’s all good for business. Another reason is, with recent acquisitions (for instance, Nest), Google realized that the successful startups they acquire use AWS more than GCP. Telling the existing development teams to migrate to GCP will disrupt the team (just like Microsoft’s acquisition of Minecraft(more)
I strongly suggest moving your installation to Google App Engine instead. It’s easy, it will lower your maintenance costs, and it will auto-scale when needed. As for a CDN, you can host static files on Google Cloud Storage, which is already managed with Google’s CDN behind the scenes. To run WordPress on Google App Engine there are simple tutorials like this: GoogleCloudPlatform/php-docs-samples. I did this setup many times with great success. I also wrote a small tutorial to speed up your WordPress installation with Memcache (which comes as a free service in Google App Engine): giona69/wordpress-made-extremely-fast. Good work!(more)
I just want to explain this in a way that a person who doesn’t have any prior knowledge of containers and clusters can understand what Kubernetes is and what it does. First, let’s understand why containers. Let’s say you want to gift a cycle to your kid on his birthday. If the cycle is delivered to you with the parts separated and a manual that describes how to attach them, you may end up screwing things up. Instead, what if the cycle itself arrives ready-made, packed in a container, and delivered to your home address, with no manual intervention required? Isn’t that awesome? The individual parts of the cycle are the dependencies of the project, which may work in one place and not another. The cycle company is the developers’ hub, and the client here is the one using our product. To solve thi(more)
Indeed Kubernetes and Docker are two different things that are related to each other. Let’s have a look; After getting used to Docker, you realize that there should be ‘Docker run’ commands or something like that to run many containers across heterogeneous hosts. Here is when Kubernetes or k8s comes in. It solved many problems that Docker had. Kubernetes is based on Google’s container management system- Borg and language used is Go. It is a COE (Container Orchestration Environment) for Docker containers. The function of COE is to make it sure that application is launched and running properly. If in case a container fails, Kubernetes will spin up another container. It provides a complete system for running so many containers across multiple hosts. It has load balancer integrated and uses etc(more)
Kubernetes is a vendor-agnostic cluster and container management tool, open-sourced by Google in 2014. It provides a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. Above all, this lowers the cost of cloud computing expenses and simplifies operations and architecture. Kubernetes and the Need for Containers Before we explain what Kubernetes does, we need to explain what containers are and why people are using them. A container is like a mini virtual machine. It is small, as it does not have device drivers and all the other components of a regular virtual machine. Docker is by far the most popular container runtime, and it runs on Linux. Microsoft has also added containers to Windows, because they have become so popular. The bes(more)
Despite the little time that Kubernetes has in the market, this tool has become a reference in terms of the management and allocation of service packages (containers) within a cluster. Initially developed by Google, Kubernetes emerged as an open-source alternative to the Borg and Omega systems, being officially launched in 2015. What is Kubernetes? Kubernetes is an open-source tool also designated as an orchestrator, which is used to carry out the distribution and organization of workloads in the form of containers. This, in order to maintain the availability and accessibility of existing resources to customers, as well as stability when carrying out the execution of multiple services simultaneously. Through this action scheme, Kubernetes makes it possible for numerous servers of different typ(more)
There are countless debates, discussions, and social chatter about Kubernetes and Docker. Nevertheless, Kubernetes and Docker Swarm are not rivals! Both have their own pros and cons and can be used depending on your application requirements. Benefits & drawbacks of Kubernetes Benefits of Kubernetes: * Kubernetes is backed by the Cloud Native Computing Foundation (CNCF). * Kubernetes has an impressively huge community among container orchestration tools, with over 50,000 commits and 1,200 contributors. * Kubernetes is an open source and modular tool that works with any OS. * Kubernetes provides easy service organization with pods. Drawbacks of Kubernetes: * When doing it yourself, K(more)
If you already ‘know’ Docker containers, then spin up a Kubernetes system (not as hard as you think – check out installing Minikube), read through the docs for Kubernetes, and start trying out some of the capabilities for yourself. Katacoda is a free, browser-based learning platform that has a number of ‘scenarios’ that run on a pre-deployed Kubernetes system. Follow this link to Katacoda and then search for “Kubernetes.” Note that you can copy-paste your way through most of the exercises in a minute or two; the learning is on you to read and understand what it is you are pasting. Online resources such as the “Awesome Kubernetes” or “Awesome Docker” lists (you do need to have some understanding of Docker to work with Kubernetes) will give you a pile of options – free and paid – to get into greater(more)
When Linux containers appeared at the time of LXC, a lot of people in the IT world saw them as something marvelous: they offered a way of packaging software with all its dependencies and running it on any other Linux machine, much like virtual machines but without the performance losses. But the truth was that they weren’t widely used; they required some plumbing to make them work, and there was no standard way to distribute the images. Then Docker appeared, adding to existing container technologies a workflow for building and sharing images and a common interface to start containers. This came to popularize these technologies, but they still weren’t widely used for production systems, mainly because it was not so advantageous to have just another packaging system for production. And t(more)
There is no one way to compare because they are mostly different things. That said, I’ll first try and define the need for each one of these and link them together. Let’s start with the bottom of the stack. You need infrastructure to run your servers. What could you go with? You can use a VPS provider like DigitalOcean, or use AWS. What if, for some non-technical reason, you can’t use AWS? For instance, there is a legal compliance that states that the data I store and servers I run are in the same geography as the customers I serve, and AWS does not have a region for the same? This is where OpenStack comes in. It is a platform to manage your infrastructure. Think of it as an open source implementation of AWS which you can run on bare metal data centers. Next, we move up the stack. We want an(more)
Kubernetes (also known as K8s) is a production-grade container orchestration system. It is an open source cluster management system initially developed by three Google employees during the summer of 2014 and grew exponentially and became the first project to get donated to the Cloud Native Computing Foundation(CNCF). It is basically an open source toolkit for building a fault-tolerant, scalable platform designed to automate and centrally manage containerized applications. With Kubernetes you can manage your containerized application more efficiently. Kubernetes is a HUGE project with a lot of code and functionalities. The primary responsibility of Kubernetes is container orchestration. That means making sure that all the containers that execute various workloads are sc(more)
The basic idea of Kubernetes is to further abstract machines, storage, and networks away from their physical implementation. So it is a single interface to deploy containers to all kinds of clouds, virtual machines, and physical machines. Container Orchestration & Kubernetes Containers behave like small virtual machines: they are lightweight, scalable, and isolated. The containers are linked together for setting security policies, limiting resource utilization, etc. If your application infrastructure is similar to the image shared below, then container orchestration is necessary. It might be Nginx/Apache + PHP/Python/Ruby/Node.js apps running on a few containers, communicating with a replicated database. Container orchestration wi(more)
As seen in the following diagram, Kubernetes follows a client-server architecture, wherein the master is installed on one machine and the nodes on separate Linux machines. The key components of the master and nodes are defined in the following section. Kubernetes – Master Machine Components Following are the components of the Kubernetes master machine. etcd: It stores configuration information which can be used by each of the nodes in the cluster. It is a high-availability key-value store that can be distributed among multiple nodes. It is accessible only by the Kubernetes API server, as it may contain sensitive information. API Server: The API server provides all the operations on the cluster usi(more)
Kubernetes service discovery finds services through two approaches: 1. Using environment variables that use the same conventions as those created by Docker links. 2. Using DNS to resolve service names to the service’s IP address. Environment Variables Kubernetes injects environment variables for each service and each port exposed by the service. This makes it easy to deploy containers that use Docker links to find their dependencies. For example, if we are exposing a RabbitMQ service, we can locate it using the RABBITMQ_SERVICE_SERVICE_HOST and RABBITMQ_SERVICE_SERVICE_PORT variables. Other environment variables are also exposed to support this. The easiest way to find out what environment variables are exposed are(more)
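A quick way to see both discovery mechanisms from inside a running pod (the pod and service names are placeholders, and the example assumes the container image ships nslookup):

```
# DNS-based discovery: a Service named "rabbitmq-service" in the default
# namespace resolves at rabbitmq-service.default.svc.cluster.local.
kubectl exec my-pod -- nslookup rabbitmq-service.default.svc.cluster.local

# Environment-variable discovery: variables are injected for Services that
# existed when the pod started.
kubectl exec my-pod -- env | grep RABBITMQ_SERVICE
```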
Docker is an open source tool designed to package applications as small containers that run on any machine. With Docker, development and deployment become much easier for developers. Containers are very lightweight, including only a minimal OS layer and your application. In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they’re running on and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application. Kubernetes: Kubernetes is a powerful system, developed by Google, for managing containerized applications in a clustered e(more)
A container cluster management system is called Kubernetes. After getting used to Docker, you realize that there should be something like ‘docker run’ commands to run many containers across heterogeneous hosts. Here is when Kubernetes comes in. It provides a complete system for running different containers across multiple hosts. Kubernetes is based on Google’s container management system Borg, and the language used is Go. Basically, Google uses three languages: 1. C/C++ 2. Java 3. Python. C and C++ might be a little tough for new users. Java is less attractive than Go for Kubernetes because of its heavy runtime download. Python is great, but the dynamic typing of Python is challenging for system software. Go is the best choice as it has great sets of system libraries. It has fast testing and building too(more)
Hi there, I believe container orchestration is one of the best features of Kubernetes. I will tell you why? I am sharing a section of my recently posted article on Level Up. For complete article, please visit : The Kubernetes Bible for Beginners & Developers – Level Up So here is my answer : How Kubernetes Solves the Problem? After discussing the deployment part of Kubernetes, it is necessary to understand the importance of Kubernetes. Container Orchestration & Kubernetes Containers are virtual machines. They are lightweight, scalable, and isolated. The containers are linked together for setting security policies, limiting resource utilization, etc. If your application infrastructure is similar to the image shared below, then container orchestration is necessary. It might be Nginx/Apache + PHP/(more)
Hi, I found this cheat sheet on Kubernetes. Kubernetes kubectl CLI Cheat Sheet This cheat sheet encloses first-aid commands to configure the CLI, manage a cluster, and gather information from it. On downloading the cheat sheet, you will find out how to:Create, group, update, and delete cluster resources Debug Kubernetes pods—a group of one or more containers with shared storage/network and a specification for running the containers Manage config maps, a primitive to store a pod’s configuration, and secrets, a primitive to store such sensitive data as passwords, keys, certificates, etc. You will learn how to use Helm—a package manager to define, install, and upgrade complex Kubernetes apps. Moreover, here you can find the Kubernetes training courses – Custom Hands-On IT Training Courses… Plus -(more)
Both Kubernetes and Docker are DevOps tools. Docker was started in 2013 and is developed by Docker, Inc. Kubernetes was introduced as a project at Google in 2014, and it was a successor of Google Borg. Kubernetes can run without docker, and docker can run without kubernetes. But kubernetes has great benefits in running along with docker. What is Kubernetes Kubernetes is a container management system developed by Google. It is an open-source, portable system for automatic container deployment and management. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In practice, Kubernetes is most commonly used alongside Docker for better control and implementation of containerized applications. Features of Kubernetes * Automates various manual proces(more)
Yes and no. Especially for Kubernetes (which is not THAT hard, but has a steep learning curve in the beginning), I doubt that there is any certification that can tell you stuff you cannot learn for free. You can set up a Kubernetes cluster on DO for $20/month or even on your laptop to actually try out things. Create a few Helm charts for your pet applications and you have a good working knowledge of Kubernetes. BUT: How can an employer judge your level of knowledge? And this is where certifications get interesting. So basically, you are trading money for an increased chance of employment, all other things being equal. Furthermore, at a certain size of projects, customers require their suppliers to have a certain number of people certified in the relevant technologies, so that they can rest assure(more)
This is a good question. I would like to say that Borg and Kubernetes both have the same kind of tasks. But Google is promoting Kubernetes for now, and it offers good features as well. Most important of all, Kubernetes has an active online community. The members of this community meet up online as well as in person, in major cities of the world. The international conference “KubeCon” has proved to be a huge success. There is also an official Slack group for Kubernetes. Major cloud providers like Google Cloud Platform, AWS, Azure, DigitalOcean, etc. also offer their support channels. For more details on Kubernetes, please visit my articles: https://www.level-up.one/kubernetes-bible-beginners/ How Does The Kubernetes Networking Work? : Part 1 – Level Up How Does The Kubernetes Ne(more)
Kubernetes is an infrastructure abstraction for container manipulation. In Kubernetes, there are many terms that conceptualize the execution environment. A pod is the smallest deployable unit in Kubernetes. You can see it as an application that runs one container or multiple containers that work together. Pods have volumes, memory, and networking requirements. Pods have a unique ID and can die at any minute, so Kubernetes provides a higher-level abstraction called a Service. A Service is a logical set of pods that is permanent in the cluster and offers functionality. Pods are accessible through the service names in the network of the cluster. When a pod dies, Kubernetes automatically runs a new pod of the service (depending on replica configuration) to keep the service offering functionality. There are man(more)
Kubernetes’ increased adoption is showcased by a number of influential companies which have integrated the technology into their services. Let us take a look at how some of the most successful companies of our time are successfully using Kubernetes. Tinder’s move to Kubernetes: Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. What did they do? Kubernetes – yes, the answer is Kubernetes. Tinder’s engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale totaling 1,000 nodes, 15,000 pods, and 48,000 running containers. Reddit’s Kubernetes story: Reddit is one of the busiest sites in the world. Kubernetes forms the core of Reddit’s internal infrastructure. For many years, the Reddit infrastructure tea(more)
Here is a way you could convince him. Docker is dead. It’s not technically dead, but in reality, it’s a walking zombie. I’ll explain why. AWS is one of the best platforms for infrastructure and there is GCE and Azure, but AWS is the standard, the most capable platform of all the cloud architectures. AWS is integrating Kubernetes into its system and you might ask what the benefits are and why it would do that. Kubernetes is basically a competitor to AWS. It allows you to write infrastructure using YAML files and deploy it on a cluster. The only drawback right now is that you cannot provision servers using Kubernetes because it sits at a higher level in the abstraction stack. The servers are below it. However, with EKS (Elastic Kubernetes Service), AWS has integrated all sorts of primativ(more)
If the developer put together a working solution then keep using it, thank them for the effort, and provide some private coaching on how to get buy-in so things go more smoothly in the future. Startups spawn serious problems that don’t end up on the roadmap as they should, and you’re better off with people taking initiative then fixing them. Otherwise the stake holders need to decide on a containerization solution, preferably coming to that conclusion by themselves or at least believing they did. That’s probably Kubernetes (from Google which knows how to build and run things) and docker where you already have one enthusiastic engineer willing to own the project, although they should be able to provide reasonable arguments on why that’s the best option for containerization and deployment. Peo(more)
Kubernetes is meant to simplify things and this article is meant to simplify Kubernetes for you! Kubernetes is a powerful open-source system that was developed by Google. It was developed for managing containerized applications in a clustered environment. Kubernetes has gained popularity and is becoming the new standard for deploying software in the cloud. Learning Kubernetes is not difficult (if the tutor is good) and it offers great power. The learning curve is a little steep. So let us learn Kubernetes in a simplified way. The article covers Kubernetes’ basic concepts, architecture, how it solves the problems, etc. What Is Kubernetes? Kubernetes offers or in fact, it itself is a system that is used for running and coordinating applications across numerous machines. The system manages the(more)
Kubernetes and Docker are two different tools used for DevOps. Let me explain each in brief. Kubernetes is an open-source platform used for maintaining and deploying a group of containers. In practice, Kubernetes is most commonly used alongside Docker for better control and implementation of containerized applications. Docker is a tool that is used to automate the deployment of applications in lightweight containers so that applications can work efficiently in different environments. Features of Docker: multiple containers run on the same hardware; high productivity; maintains isolated applications; quick and easy configuration. Differences between Kubernetes and Docker: 1. In Kubernetes, applications are deployed as a combination of pods, deployments, and services. In Docker, applications are deployed i(more)
Kubernetes is built in three layers, with each higher layer hiding the complexity found in the lower layers: the Application Layer (Pods and Services), the Kubernetes Layer, and the Infrastructure Layer. Pods are part of the Kubernetes layer. A pod is one or more containers controlled as a single application. It encapsulates application containers, storage resources, a unique network ID, and other configuration on how to run the containers. A Pod represents a group of one or more application containers bundled up together and is highly scalable. If a pod fails, Kubernetes automatically deploys new replicas of the pod to the cluster. Pods provide two different types of shared resources: networking and storage. You can also get a good understanding of content quality by watching Simplilearn’s youtube videos. Here are some(more)
Kubernetes, also sometimes called K8S (K – eight characters – S), is an open source orchestration framework for containerized applications that was born from the Google data centers.(more)
Docker, absolutely learn that first. Docker Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package. And here comes the race between choosing an orchestration tool : Overview of Kubernetes Kubernetes is based on years of Google’s experience of running workloads at a huge scale in production. As per Kubernetes website, “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.” Overview of Docker Swarm Docker swarm is Docker’s own container’s orchestration. It uses the standard Docker API and networking, making it easy to drop into(more)
A node is the smallest unit of hardware in Kubernetes, also known as a minion. It is a representation of a single machine in the cluster. It is a physical machine in a data center or virtual machine hosted on a cloud provider like Google Cloud Platform. Each node has the services required to run a pod and is managed by the master components in Kubernetes architecture. The services given by a Kubernetes Node include the container runtime(Docker), Kubelet, and Kube-proxy. To know more about Node in Kubernetes, watch this video on Kubernetes Architecture: Hope this helps!(more)
They’re both good technologies with huge opportunities and potential ahead. Docker is overhyped for its relative youth, and is really a moderate set of wrapper capabilities around the Linux kernel. Operational understanding is scarce and conflicting. It requires a lot of deep street knowledge to use effectively in production. There are lots of subtle performance and reliability challenges with, e.g., networking and storage, and often subtle breaking changes between releases. Installing and operating Kubernetes is not for the faint of heart. It assumes you can “bring your own cluster”. The pace of change and improvement on core k8s is astounding (good and bad). Using Kubernetes is relatively white box, i.e. you really need to know what’s going on under the covers to a degree, especially if you’re not using GKE.(more)
Used on GCP and physical servers. A Kubernetes cluster is a group of ‘machines’ that are either on the same network segment or set up to communicate with each other over the network with low latency, and run Kubernetes software. Kubernetes software runs as a ‘service’ or ‘daemon’ on each machine in the cluster, and this causes the host machine to act as either a ‘master’ or a ‘slave’ node within the cluster. During the Kubernetes cluster set up process, the master is created first, and toward the end of the install process a connection command is displayed or logged to the system. This should then be run on each additional node once the base Kubernetes software has been installed. Some ‘magic’ then takes place and the new node links up with the master node to form a logical cluster. Commands can then be run on the master node t(more)
I think containers are the delivery model of the moment. They make packaging an application together with its required infrastructure much easier. Tools like Docker provide containers, but software is also needed to handle things such as replication, failures, and APIs for automating deployment on multiple machines. At the beginning of 2015, the status of clustering platforms such as Kubernetes and Docker Swarm was highly unstable. We tried to use them and began with Docker Swarm. Why is this a really important topic now? Amid the news in recent weeks, several businesses have purchased container or micro-service firms to boost their portfolio for what lie(more)
Lets forget all about technical stuff, lets discuss this in a way that a non-technical guy understands. * You are owner of a building and you have 5 spots where people can enter your building and you want 5 security guards guarding the spots. All good till now. * * Now consider one of the guard was out of service for 2 hours due to some personal reasons. Now as a building owner its your responsibility to guard or employ another guard replacing the existing. Do you like to be manually interrupted from your task to look after who is out and whom to replace. * * No, no one likes to be. Now the solution could be, go to a third party vendor who provides 24*7 availability of the guards. Its the responsibility of the vendor to make 24*7 availability based on the configuration set(in this case guards guarding(more)
While researching for a project, I looked into all of the available books on Kubernetes. Here’s a quick roundup. (Feel free to suggest more!) * Golden Guide to Kubernetes Application Development This book’s for web app developers who just want a short, sharp guide to grok Kubernetes. It’s also really great for people trying to get their CKAD certification. (Disclaimer: I wrote this. Yeah, this is one of those Quora answers… but I hope it’s still useful.) * The Kubernetes Book Probably the most popular and established book on Kubernetes. It’s great for new developers trying to learn Kubernetes. The author is known for his video courses as well. * Kubernetes: Up and Running Definitely written by the most authoritative authors of any book here. Kelsey Hightower is a Google dev advocate for Kubernetes(more)
It is indeed possible to use Kubernetes without Docker. The Kubernetes community has long recognized the problem with being tied to Docker’s quasi-proprietary (and somewhat arbitrarily developed) container runtime. Early on there was support for an alternative runtime called rkt (pronounced like rocket). However, going down the path of creating separate solutions for any and every new container runtime that might get developed would be a lot of work and a bit like reinventing the wheel for each runtime. To break free of the Docker runtime constraint, the CRI (Container Runtime Interface) was created, which allows you to use other container runtimes (e.g. containerd, CRI-O, etc.). The CRI plugin is a shim that sits between the Kubernetes kubelet and the container runtime and acts as a universal translator. Read more…
I’m not sure how to explain Kubernetes to a 10-year-old. Yet when I’m allowed to expand to older people who are not technology savvy I can come up with an example which might resonate. It will inside my company: I will use the analogy of our call center. My company services some 2 million people, we manage their pensions and the necessary administration. Every year we send out the latest status of the pensions to the participants, and sure enough people will follow up. Many follow up online – the pension fund websites – yet there is a significant number who call or send an e-mail. We measure the amount of outstanding messages, as well as the amount of unanswered calls (I recall the service level is at 80% answered within 10 seconds). These are displayed on monitors so those who work in the(more)
Assuming a basic understanding of Docker and containers, I’ll describe the Kubernetes specifics. This is from a general user point of view. Kubelet: A process which runs on each node in the cluster. Kubelet talks to the master server and gets a list of containers to run and then runs, manages, and reports container status back to the master server. Pod: The primary unit of Kubernetes scheduling and management. A Pod is list of containers that are always run together on one node. The containers in a pod share an IP address and a network stack, but are otherwise isolated from each other. Container: A Docker container, it has an isolated process space, can expose ports, can define environment variables and a run command. Read more ….
Kubernetes has a strong feature set for microservice architectures. Things like service discovery, automatic failover, rescheduling, and support for overlay networks make it the best choice in dynamic environments with many small, frequently changing applications tied together. If your application needs to start hundreds of containers quickly and will terminate them just as quickly, then Kubernetes is a good option. The converse of this is that it is not as well designed for more static, highly efficient workloads. Containerization is great for flexibility, but doesn’t come for free. There is a performance penalty for using it, somewhere between a few to high single digit percentage penalty, depending on the type of operations. Read more ….
DATA AND ANALYTICS
BigQuery: Data warehouse/analytics
BigQuery BI Engine: In-memory analytics engine
BigQuery ML: BigQuery model training/serving
Cloud Composer: Managed workflow orchestration service
Cloud Data Fusion: Graphically manage data pipelines
Cloud Dataflow: Stream/batch data processing
Cloud Dataprep: Visual data wrangling
Cloud Dataproc: Managed Spark and Hadoop
NETWORKING
Carrier Peering: Peer through a carrier
Direct Peering: Peer with GCP
Dedicated Interconnect: Dedicated private network connection
Partner Interconnect: Connect on-prem network to VPC
Cloud Armor: DDoS protection and WAF
Cloud CDN: Content delivery network
Cloud DNS: Programmable DNS serving
Cloud Load Balancing: Multi-region load distribution/balancing
Cloud NAT: Network address translation service
Cloud Router: VPC/on-prem network route exchange (BGP)
Cloud VPN (HA): VPN (Virtual private network connection)
Network Service Tiers: Price vs performance tiering
Network Telemetry: Network telemetry service
Traffic Director: Service mesh traffic management
Google Cloud Service Mesh: Service-aware network management
Virtual Private Cloud: Software defined networking
VPC Service Controls: Security perimeters for API-based services
Network Intelligence Center: Network monitoring and topology
GOOGLE MAPS PLATFORM
Directions API: Get directions between locations
Distance Matrix API: Multi-origin/destination travel times
Geocoding API: Convert address to/from coordinates
Geolocation API: Derive location without GPS
Maps Embed API: Display iframe embedded maps
Maps JavaScript API: Dynamic web maps
Maps SDK for Android: Maps for Android apps
Maps SDK for iOS: Maps for iOS apps
Maps Static API: Display static map images
Maps SDK for Unity: Unity SDK for games
Maps URLs: URL scheme for maps
Places API: Rest-based Places features
Places Library, Maps JS API: Places features for web
Places SDK for Android: Places features for Android
Places SDK for iOS: Places feature for iOS
Roads API: Convert coordinates to roads
Street View Static API: Static street view images
Street View Service: Street view for JavaScript
Time Zone API: Convert coordinates to timezone
G SUITE (WORKSPACE) PLATFORM
Admin SDK: Manage G Suite resources
AMP for Email: Dynamic interactive email
Apps Script: Extend and automate everything
Calendar API: Create and manage calendars
Classroom API: Provision and manage classrooms
Cloud Search: Unified search for enterprise
Docs API: Create and edit documents
Drive Activity API: Retrieve Google Drive activity
Drive API: Read and write files
Drive Picker: Drive file selection widget
Email Markup: Interactive email using schema.org
G Suite Add-ons: Extend G Suite apps
G Suite Marketplace: Storefront for integrated applications
Gmail API: Enhance Gmail
Hangouts Chat Bots: Conversational bots in chat
People API: Manage user’s Contacts
Sheets API: Read and write spreadsheets
Slides API: Create and edit presentations
Task API: Search, read & update Tasks
Vault API: Manage your organization’s eDiscovery
MIGRATION TO GCP BigQuery Data Transfer: Service Bulk import analytics data Cloud Data Transfer: Data migration tools/CLI Google Transfer Appliance: Rentable data transport box Migrate for Anthos: Migrate VMs to GKE containers Migrate for Compute Engine: Compute Engine migration tools Migrate from Amazon Redshift: Migrate from Redshift to BigQuery Migrate from Teradata: Migrate from Teradata to BigQuery Storage Transfer Service: Online/on-premises data transfer VM Migration: VM migration tools Cloud Foundation Toolkit: Infrastructure as Code templates
Answer these questions to validate your basic knowledge of GCP:
As a prerequisite, here are the top questions that will help you familiarize yourself with the Google Cloud Platform:
1) What is GCP?
2) What are the benefits of using GCP?
3) How can GCP help my business?
4) What are some of the features of GCP?
5) How is GCP different from other clouds?
6) Why should I use GCP?
7) What are some of GCP’s strengths?
8) How is GCP priced?
9) Is GCP easy to use?
10) Can I use GCP for my personal projects?
11) What services does GCP offer?
12) What can I do with GCP?
13) What languages does GCP support?
14) What platforms does GCP support?
15) Does GCP support hybrid deployments?
16) Does GCP support on-premises deployments?
17) Is there a free tier on GCP?
18) How do I get started with GCP?
What are the corresponding Azure and Google Cloud services for each of the AWS services?
What are the distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google service? How do AWS, Azure, and Google Cloud services compare side by side?
Category: Marketplace Description: Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions. References: [AWS]:AWS Marketplace [Azure]:Azure Marketplace [Google]:Google Cloud Marketplace Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace Differences: They are all digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant Differences: One major advantage Google has over Alexa is that Google Assistant is available on almost all Android devices.
Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow Differences: api.ai provides a platform that is easy to learn and comprehensive enough to develop conversational actions. It is a good example of a simple approach to the complex problem of human-to-machine communication, using natural language processing combined with machine learning. api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, this still has to be handled in the session. Also, api.ai can be used for both voice- and text-based conversations (assistant actions can be created easily with api.ai).
Category: Big data and analytics: Data warehouse Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, analytics and visualization References: [AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch [Azure]:Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning [Google]:Cloud Dataproc, Machine Learning, Cloud Datalab Tags:#EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataproc, #MachineLearning, #CloudDatalab Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open-source big data solutions alongside newer serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services. With Dataproc and Dataflow, Google has a strong core to its proposition. TensorFlow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.
Category: Serverless Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. References: [AWS]:AWS Lambda [Azure]:Azure Functions [Google]:Google Cloud Functions Tags:#AWSLambda, #AzureFunctions, #GoogleCloudFunctions Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort – simply modify some interfaces and handle any input/output transforms, and an AWS Lambda Node.js function is nearly indistinguishable from a Microsoft Azure Node.js function. AWS Lambda provides further support for Python and Java, while Azure Functions provides support for F# and PHP. AWS Lambda is built from an AMI, which runs on Linux, while Microsoft Azure Functions run in a Windows environment. Lambda lets you spin up and tear down individual pieces of functionality in your application at will.
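To make the portability point concrete, here is a minimal sketch (not taken from either vendor's docs) of one piece of business logic wrapped by two thin, provider-specific entry points; the handler names and payload shape are illustrative assumptions.

```python
# Minimal sketch: one provider-agnostic function, two serverless adapters.
import json

def process(payload: dict) -> dict:
    """Business logic that knows nothing about the hosting platform."""
    name = payload.get("name", "world")
    return {"message": f"Hello, {name}!"}

# AWS Lambda entry point: receives an event dict and a context object.
def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    return {"statusCode": 200, "body": json.dumps(process(body))}

# Google Cloud Functions HTTP entry point: receives a Flask-style request.
def gcf_handler(request):
    body = request.get_json(silent=True) or {}
    return json.dumps(process(body)), 200, {"Content-Type": "application/json"}
```

Only the adapters change between providers; the `process` function moves untouched.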
Category: Caching Description: An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database. References: [AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.) [Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.) [Google]:Memcache (In-memory key-value store, originally intended for caching) Tags:#Redis, #Memcached Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by allowing you to retrieve information from fast, in-memory caches instead of relying on slower disk-based databases. ElastiCache supports Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcache does not. This can be very helpful if you cache lots of data, since you remove the slowness of starting with a fully cold cache. Redis also offers several extra data structures that Memcache doesn’t (Lists, Sets, Sorted Sets, etc.), while Memcache only has key/value pairs. Memcache is multi-threaded; Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded. At high scale, you can squeeze more connections and transactions out of Memcache, and Memcache tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
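The pattern all three services enable is the same cache-aside lookup. The sketch below shows it with the redis-py client; the connection details and the database helper are assumptions for illustration, not part of any provider's API.

```python
# Minimal cache-aside sketch: try the in-memory cache, fall back to the database.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: str) -> dict:
    # Placeholder for a slow relational-database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)                  # 1) try the cache first
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(user_id)   # 2) cache miss: hit the database
    r.setex(key, 300, json.dumps(user))  # 3) store the result for 5 minutes
    return user
```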
Category: Enterprise application services Description: Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices. References: [AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index) [Azure]:Office 365 [Google]:G Suite Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite Differences: G Suite document-processing applications like Google Docs are far behind Office 365’s popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, whereas Office 365 can feel clunky. Get 20% off G-Suite Business Plan with Promo Code: PCQ49CJYK7EATNC
Category: Management Description: A unified management console that simplifies building, deploying, and operating your cloud resources. References: [AWS]:AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (Identify optimal AWS Compute resources) [Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health [Google]:Google Cloud Platform, Cost Management, Security Command Center, Stackdriver Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the hybrid cloud space, allowing companies to integrate on-site servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high-compute offerings like big data, analytics, and machine learning. It also offers considerable scale and load balancing – Google knows data centers and fast response times.
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
Enables both speech-to-text and text-to-speech capabilities. The Speech Services are the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It’s easy to speech-enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs. Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
Computer Vision: Extract information from images to categorize and process visual data. Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.
Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Allows you to automatically change the number of VM instances. You define metrics and thresholds that determine whether the platform adds or removes instances.
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.
Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code.
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform. Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non transactional work from a database. Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
Migration of database schema and data from one database format to a specific database technology in the cloud. AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
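For a sense of how the telemetry side works in practice, here is a minimal sketch of publishing a custom metric to CloudWatch with boto3; the namespace, metric name, and dimension values are hypothetical choices for this example only.

```python
# Minimal sketch: push one custom application metric to CloudWatch.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_data(
    Namespace="MyApp",                      # hypothetical application namespace
    MetricData=[{
        "MetricName": "CheckoutLatencyMs",  # hypothetical metric name
        "Value": 123.0,
        "Unit": "Milliseconds",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
    }],
)
```

Once published, the metric can be graphed and alarmed on alongside the built-in AWS metrics.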
A cloud service for collaborating on code development. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services. The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions. The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
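As a small illustration of the SDK-wrapper idea, the sketch below uses boto3, the AWS SDK for Python, to list S3 buckets and page through the objects in one of them; the bucket name is a placeholder.

```python
# Minimal sketch: the SDK hides the raw REST calls behind simple methods.
import boto3

s3 = boto3.client("s3")

# List every bucket in the account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Page through objects in a hypothetical bucket, ten keys per page.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-example-bucket", MaxKeys=10):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```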
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
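A minimal sketch of that "simple text file" approach: a tiny CloudFormation template describing one S3 bucket, deployed with boto3. The stack and bucket names are made up for the example.

```python
# Minimal sketch: infrastructure as code with a CloudFormation template.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-iac-bucket"},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-iac-stack",
    TemplateBody=json.dumps(template),
)
```

Re-running the same template in another account or region recreates the same resources, which is the point of describing infrastructure as code.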
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).
A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.
Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
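As a rough sketch of what "create and manage users and groups, and use permissions" looks like in practice, the snippet below uses boto3 with AWS IAM; the group name, user name, and the choice of managed policy are illustrative assumptions.

```python
# Minimal sketch: group-based access control with AWS IAM.
import boto3

iam = boto3.client("iam")

# Create a group and grant it read-only access via an AWS managed policy.
iam.create_group(GroupName="analysts")
iam.attach_group_policy(
    GroupName="analysts",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Create a user and place them in the group so they inherit its permissions.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="analysts", UserName="alice")
```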
Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
Key Management Service: AWS KMS, CloudHSM (AWS) | Key Vault (Azure)
Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).
Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.
Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.
Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
Basically, it all comes down to what your organizational needs are and whether there’s a particular area that’s especially important to your business (e.g., serverless, or integration with Microsoft applications).
Some of the main things it comes down to are compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post
Cloud computing is the new big thing in Information Technology. Everyone, every business will sooner or later adopt it, because of hosting cost benefits, scalability and more.
This blog outlines the Pros and Cons of Cloud Computing, Pros and Cons of Cloud Technology, Faqs, Facts, Questions and Answers Dump about cloud computing.
Cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility. Simply put, cloud computing is the delivery of computing services including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.
Cost effective & Time saving: Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters; the racks of servers, the round-the-clock electricity for power and cooling, and the IT experts for managing the infrastructure.
The ability to pay only for cloud services you use, helping you lower your operating costs.
Powerful server capabilities and Performance: The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
Powerful and scalable server capabilities: The ability to scale elastically; That means delivering the right amount of IT resources—for example, more or less computing power, storage, bandwidth—right when they’re needed, and from the right geographic location.
SaaS ( Software as a service). Software as a service is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet, or PC.
PaaS ( Platform as a service). Platform as a service refers to cloud computing services that supply an on-demand environment for developing, testing, delivering, and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network, and databases needed for development.
IaaS ( Infrastructure as a service). The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a cloud provider on a pay-as-you-go basis
Serverless: Running complex Applications without a single server. Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning, and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs.
Infrastructure provisioning as code helps you recreate the same infrastructure by re-running the same code in a few clicks.
Automatic and Reliable Data backup and storage of data: Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.
Increase Productivity: On-site datacenters typically require a lot of “racking and stacking”—hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.
Security: Many cloud providers offer a broad set of policies, technologies, and controls that strengthen your security posture overall, helping protect your data, apps, and infrastructure from potential threats.
Speed: Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning. In a cloud computing environment, new IT resources are only a click away. This means that the time those resources are available to your developers is reduced from weeks to minutes. As a result, the organization experiences a dramatic increase in agility, because the cost and time it takes to experiment and develop are lower.
Go global in minutes Easily deploy your application in multiple regions around the world with just a few clicks. This means that you can provide a lower latency and better experience for your customers simply and at minimal cost.
Privacy: Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services.
Security: According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and API’s, Data Loss & Leakage, and Hardware Failure—which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities.
Ownership of Data: There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.
Limited Customization Options: Cloud computing is cheaper because of economies of scale, and—like any outsourced task—you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want.
Downtime: Technical outages are inevitable and sometimes occur when cloud service providers (CSPs) become overwhelmed in the process of serving their clients. This may result in temporary business suspension.
Security of stored data and data in transit may be a concern when storing sensitive data at a cloud storage provider.
Users with specific records-keeping requirements, such as public agencies that must retain electronic records according to statute, may encounter complications with using cloud computing and storage. For instance, the U.S. Department of Defense designated the Defense Information Systems Agency (DISA) to maintain a list of records management products that meet all of the records retention, personally identifiable information (PII), and security (Information Assurance; IA) requirements.
Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target.
Piracy and copyright infringement may be enabled by sites that permit filesharing. For example, the CodexCloud ebook storage site has faced litigation from the owners of the intellectual property uploaded and shared there, as have the GrooveShark and YouTube sites it has been compared to.
Public clouds: A cloud is called a “public cloud” when the services are rendered over a network that is open for public use. They are owned and operated by a third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser. For infrastructure as a service (IaaS) and platform as a service (PaaS), Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) hold a commanding position among the many cloud companies.
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. A private cloud refers to cloud computing resources used exclusively by a single business or organization. A private cloud can be physically located on the company’s on-site datacenter. Some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network.
Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premise resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility, more deployment options, and helps optimize your existing infrastructure, security, and compliance.
Community Cloud: A community cloud in computing is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns, whether managed internally or by a third-party and hosted internally or externally. This is controlled and used by a group of organizations that have shared interest. The costs are spread over fewer users than a public cloud, so only some of the cost savings potential of cloud computing are realized.
What do the top 3 public cloud providers like AWS, Azure, and Google Cloud do to insure against customer data loss?
As a cloud user, a cloud customer, or a company storing customer data in the cloud, you probably have a lot of personal or private data hosted on various infrastructure in the cloud. Losing that data, or having it accessed by hackers or unauthorized third parties, can be very harmful both financially and emotionally to you or your customers. Cloud user or customer insurance can protect you against lost or stolen data. Practically, cloud computing insurance is a cyber liability policy that covers web-based services. Before looking for customer insurance in the cloud, you need to clarify “What data should the insurance cover, and under which governing laws?” and “What data can be considered a loss?”. The good news is that as cloud adoption increases in the insurance industry, insurers have the opportunity to better understand their operating models and to implement tailored insurance solutions for the cloud.
Cloud Data loss can happen in the following forms:
First Party Losses: losses where the cloud provider incurs damages. Those types of losses include:
Destruction of Data
Denial of Service Attack (DOS)
Virus, Malware and Spyware
Human Error
Electrical Malfunctions and Power Surges in data centers
Natural Disasters
Network Failures
Cyber Extortion
Each of the above exposures to loss would result in direct damages to the insured, or first-party loss.
Third-Party Losses – damages that would occur to customers outside of the cloud provider. These types of losses include:
The above exposures could result in a company being held liable for the damages caused to others (liability).
Cyber insurance is a form of insurance for businesses and individuals against internet-based risks. The most common risk that is insured against is data breaches. … It also covers losses from network security breaches, theft of intellectual property and loss of privacy.
Data Compromise coverage insures a commercial entity when there is a data breach, theft or unauthorized disclosure of personal information. … Thus Cyber Liability covers both the expenses to notify affected individuals of data breaches and the expenses to make the insured whole for their own damages incurred.
Contact an Independent Insurance Agent near you that writes Cyber Insurance and ask them to get multiple quotes for your business.
However, a more effective risk management solution might be loss control rather than financing. If you encrypt your data at rest, adopt a process of automatic regular backups, and geographically distribute those backups, then you have effectively minimized the potential costs of loss.
Cyber Insurance is not yet standardized as many other forms of commercial insurance. Therefore, breadth of coverage and pricing can vary widely.
Access: As a customer, you maintain full control of your content and responsibility for configuring access to AWS services and resources. We provide an advanced set of access, encryption, and logging features to help you do this effectively (e.g., AWS Identity and Access Management, AWS Organizations and AWS CloudTrail). We provide APIs for you to configure access control permissions for any of the services you develop or deploy in an AWS environment. We do not access or use your content for any purpose without your consent. We never use your content or derive information from it for marketing or advertising.
Storage: You choose the AWS Region(s) in which your content is stored and the type of storage. You can replicate and back up your content in more than one AWS Region. We will not move or replicate your content outside of your chosen AWS Region(s) without your consent, except as legally required and as necessary to maintain the AWS services.
Security: You choose how your content is secured. We offer you strong encryption for your content in transit and at rest, and we provide you with the option to manage your own encryption keys. These features include:
Data encryption capabilities available in AWS storage and database services, such as Amazon Elastic Block Store, Amazon Simple Storage Service, Amazon Relational Database Service, and Amazon Redshift.
Flexible key management options, including AWS Key Management Service (KMS), allow customers to choose whether to have AWS manage the encryption keys or enable customers to keep complete control over their keys.
AWS customers can employ Server-Side Encryption (SSE) with Amazon S3-Managed Keys (SSE-S3), SSE with AWS KMS-Managed Keys (SSE-KMS), or SSE with Customer-Provided Encryption Keys (SSE-C).
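To show what choosing between these options looks like in code, here is a minimal sketch of uploading an object with SSE-S3 and with SSE-KMS using boto3; the bucket name, object keys, and KMS key alias are placeholders, not values from this article.

```python
# Minimal sketch: request server-side encryption when writing objects to S3.
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon manages the encryption keys (AES-256).
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/q1.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a customer-managed KMS key (hypothetical alias).
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/q1-kms.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-example-key",
)
```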
Disclosure of customer content: We do not disclose customer information unless we’re required to do so to comply with a legally valid and binding order. Unless prohibited from doing so or there is clear indication of illegal conduct in connection with the use of Amazon products or services, Amazon notifies customers before disclosing content information.
Security Assurance: We have developed a security assurance program that uses best practices for global privacy and data protection to help you operate securely within AWS, and to make the best use of our security control environment. These security protections and control processes are independently validated by multiple third-party independent assessments.
Property and Casualty Insurance: Property insurance covers the physical location of the business and its contents from things like fire, theft, flood, and earthquakes—although read the terms carefully to make sure they include everything you need. Casualty insurance, on the other hand, covers the operation of the business, but the two are usually grouped together in policies.
Auto Insurance:Auto insurance protects you against financial loss if you have an accident. It is a contract between you and the insurance company.
Liability Insurance: Liability insurance is insurance that provides protection against claims resulting from injuries and damage property.
Business Insurance: Business interruption insurance can make up for lost cash flow and profits incurred because of an event that has interrupted your normal business operations.
Health and Disability Insurance: Health insurance provides health coverage for you and your employees. This insurance covers your employees for the expenses and loss of income caused by non work-related injuries, illnesses, and disabilities and death from any cause.
Life Insurance: Life and disability insurance covers your business in the event of the death or disability of key owners.
Cyber Insurance: Cover Data loss, destruction of data, privacy breach, Denial of Service Attack (DOS), Network failure, Transmission of Malicious Content, Misuse of personal or private information, etc.
Crime & Employee Dishonesty Insurance: To cover your business for fraudulent acts committed by your employees, e.g. theft or embezzlement of money, securities, and other business-owned property and for burglary, theft, and robbery of cash and other representations of money, e.g. money orders, postage stamps, travelers checks, and readily convertible securities, e.g. bearer bonds;
Mandatory Workers Compensation Insurance: To cover your employees for injuries and illnesses sustained during the course of employment. This would include medical expenses and loss of income due to a work-related disability;
Transportation/Inland & Ocean Marine Insurance: To pay for loss of damage to property you own or are responsible for while it is being transported or shipped to or from customers, manufacturers, processors, assemblers, warehouses, etc. by air, ship, or land vehicles either domestically or internationally.
Umbrella Liability Insurance: To provide an additional layer of liability insurance over your primary automobile liability, general liability, employers liability, and, if applicable, watercraft or aircraft liability policies;
Directors & Officers Liability Insurance: To defend your business and its directors or officers against allegations that they mismanaged the business in some way which caused financial loss to your clients (and/or others) and pay money damages in a court trial or settlement;
Condos Unit Owners Personal Insurance & Landlord / Rental Property Insurance: Cover expenses that come from having a loss within your property. Whether the unit owner is living in their unit or not, it is your responsibility to ensure that your personal assets and liabilities are adequately protected by your own personal insurance policy. This coverage includes all the content items that are brought into a unit or stored in a storage locker or premises, such as furnishings, electronics, clothing, etc. Most policies out there will also cover personal property while it is temporarily off premises, while on vacation for example.
Landlord property coverage is to protect the property that you own within your rental unit, which includes but is not limited to, appliances, window coverings, or if you rent out your unit fully furnished, then all of that property that is yours.
Rental Property insurance coverage allows you to protect your revenue source. Your property is your responsibility, and if your property gets damaged by an insured peril and your tenant can’t live there for a month or two (or more), you can purchase insurance to replace that rental income for the period of time your property is uninhabitable.
Do online businesses need insurance?
All businesses need insurance. Here are some suggestions:
Property Insurance: To cover your owned, non-owned, and leased business property (contents, buildings if applicable, computers, office supplies, and any other property that you need to operate your business) for such perils as fire, windstorm, smoke damage, water damage, and theft.
EDP Insurance: To cover your computer hardware and software for such perils as mechanical breakdown and electrical injury;
Cyber Property and Liability Insurance: To cover your business for its activities on the Internet. Cyber Property coverages apply to losses sustained by your company directly. An example is damage to your company’s electronic data files caused by a hacker/security breach. Cyber Liability coverages apply to claims against your company by people who have been injured as a result of your actions or failure to act. For instance, a client sues you for negligence after his personal data, e.g. credit card numbers or confidential information, is stolen from your computer system and released online.
Loss of Income (Business Interruption) Insurance: To cover your business for the loss of income you would sustain because it was damaged by a covered peril under your property insurance, e.g. fire, windstorm, smoke damage, and theft;
Thinking of purchasing cyber insurance? Make sure the policy you choose covers more than paying ransomware. Paying cyber criminals should be a last resort. Your policy should include cleaning & rebuilding current systems, hiring experts, & purchasing new protections.
The purpose of cyber security is to protect all forms of digital data: protecting personal information (SSN, credit card information, etc.), protecting proprietary information (Facebook algorithms, Tesla vehicle designs, etc.), and other forms of digital data.