Google Workspace – Docs – Drive – Sheets – Slides – Forms – How To

Top 10 Google Workspace Tips and Tricks
  1. Use keyboard shortcuts: Google Workspace has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo the last action in Google Docs, or “Ctrl + Shift + V” to paste text without formatting.
  2. Collaborate in real-time: With Google Workspace, you can work on documents and spreadsheets with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
  3. Create and edit documents offline: With the Google Docs offline extension, you can create and edit documents even when you don’t have an internet connection. Once you’re back online, your changes will be automatically saved.
  4. Use Google Keep for notes and to-do lists: Google Keep is a simple note-taking app that integrates seamlessly with Google Workspace. You can use it to take notes, create to-do lists, and set reminders.
  5. Use the Explore feature in Google Docs: The Explore feature in Google Docs can help you research and write documents more quickly by suggesting relevant information, images, and citations.
  6. Automate tasks with Google Apps Script: Apps Script is a powerful scripting tool that you can use to automate tasks across Google Workspace. For example, you can use a script to automatically send an email when a new form is submitted, or to create a calendar event from a Google Sheets spreadsheet (see the sketch after this list).
  7. Use Google Forms for surveys and quizzes: Google Forms is a great tool for creating surveys, quizzes, and other forms. You can use it to collect information from people and analyze the results in Google Sheets.
  8. Take advantage of the Google Workspace Marketplace: The Google Workspace Marketplace is a collection of apps and add-ons that can help you customize and enhance your Google Workspace experience. You can find apps for a wide range of tasks, such as creating diagrams, signing documents electronically, and more.
  9. Use Google Slides for presentations: Google Slides is an online presentation tool that can be used to create professional-looking slideshows. You can collaborate with others in real-time, add animations and transitions, and even insert videos.
  10. Use Google Drive for file storage and sharing: Google Drive is the main storage service for all your files, including documents, images, videos, and more. You can share files and folders with others, collaborate in real-time, and access your files from anywhere.
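
For tip 6, here is a minimal Apps Script sketch of the form-to-email automation. It is a sketch, not the only way to do it: it assumes a script bound to a Google Form with an installable “On form submit” trigger, and the recipient address is a placeholder.

    // Emails a summary of each new form submission.
    // Bind this script to a Form (Extensions > Apps Script), then add an
    // installable "On form submit" trigger pointing at notifyOnSubmit.
    function notifyOnSubmit(e) {
      const summary = e.response
        .getItemResponses()
        .map((r) => r.getItem().getTitle() + ': ' + r.getResponse())
        .join('\n');
      // Placeholder recipient; replace with your own address.
      MailApp.sendEmail('you@example.com', 'New form submission', summary);
    }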

These are some of the most useful tips and tricks for getting the most out of Google Workspace. The apps are updated constantly, with new features added all the time.

Google Workspace Business Starter (20% Discount)
Promotion code for the Americas   |  Expires 07/2023
M9HNXHX3WC9H7YE
Google Workspace Business Standard (20% Discount)
Promotion code for the Americas   |  Expires 07/2023
96DRHDRA9J7GTN6

Top 10 Google Drive Tips and Tricks

  1. Use the “Quick Access” feature: Google Drive’s Quick Access feature uses machine learning to predict which files you might need next, and it surfaces them at the top of your Google Drive for easy access.
  2. Take advantage of the offline feature: With the Google Drive Offline extension, you can access and edit your files even when you don’t have an internet connection.
  3. Create shortcuts to frequently used files: You can create a shortcut to a file or folder by right-clicking on it and selecting “Add to My Drive.” This way, you can quickly access it from your Google Drive home screen.
  4. Use the “Take a Snapshot” feature: Google Drive has a built-in “Take a Snapshot” feature that allows you to take a screenshot of any webpage and save it directly to your Google Drive.
  5. Use the “Suggested Sharing” feature: Google Drive’s “Suggested Sharing” feature uses machine learning to predict which people you might want to share a file with, and it automatically suggests their email addresses to you.
  6. Search for files using specific keywords: You can use advanced search operators to find files that contain specific keywords or were created by a certain person (a few examples follow this list).
  7. Use the “File Stream” feature: With the Google File Stream feature, you can access all of your Google Drive files directly from your computer’s file explorer, without having to download them first.
  8. Use the “Add-ons” feature: You can use Google Drive’s Add-ons feature to add extra functionality to your Google Drive, such as the ability to sign PDFs, send emails directly from Google Drive, and more.
  9. Use the “Activity” feature: Google Drive’s “Activity” feature allows you to see who has accessed a file, when they accessed it, and what changes they made.
  10. Use the “Google Backup and Sync” app: Google Backup and Sync is a handy app that allows you to automatically back up specific folders from your computer to your Google Drive. This way, you can be sure that your important files are always safe and accessible.
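
For tip 6, here are a few operators you can type straight into the Drive search box. Google adjusts the supported set over time, so treat these as illustrative:

    type:pdf                     files of a given type
    owner:me                     files you own
    title:"quarterly report"     words in the file name
    after:2023-01-01 before:2023-07-01    files modified in a date range
    to:colleague@example.com     files shared with a particular person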

These are some of the most useful tips and tricks for getting the most out of Google Drive. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your files.

How do you insert an image into a slide on Google Drive?

To insert an image into a slide on Google Drive, you can use the Google Slides app. Here are the steps:

  1. Open Google Slides in your browser and navigate to the presentation you want to add the image to.
  2. Select the slide you want to add the image to.
  3. Click on the “Insert” menu at the top of the screen.
  4. Select “Image” from the drop-down menu.
  5. Choose the option to “Upload” an image, then select the image you want to insert from your computer.
  6. You can also select “By URL” if you have a link to the image; just paste the link in.
  7. Drag the image around the slide to reposition it or use the handles to resize it.
  8. Once you have the image positioned and sized the way you want, you can add text or other elements to the slide as needed.

Alternatively, you can drag and drop an image directly from your computer onto the slide.

Note that the presentation must be in edit mode; otherwise you will not be able to insert an image.
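
If you need to insert images programmatically rather than through the menu, a small Apps Script sketch looks like the following. It assumes the script is bound to the presentation, and the image URL and coordinates are placeholders.

    // Inserts an image from a public URL into the first slide
    // of the presentation this script is bound to.
    function insertImageIntoSlide() {
      const slide = SlidesApp.getActivePresentation().getSlides()[0];
      // Placeholder URL; any publicly fetchable image works.
      const image = slide.insertImage('https://www.example.com/logo.png');
      image.setLeft(50); // distance from the left edge, in points
      image.setTop(50);  // distance from the top edge, in points
    }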

How can you rotate an image in Google Drive without having to download it first?

You can rotate an image in Google Drive without downloading it by using the “Preview” feature. To do this, follow these steps:

  1. Open Google Drive and navigate to the folder containing the image you want to rotate.
  2. Click on the image to open it in the “Preview” mode.
  3. Click on the “Tools” button in the top-right corner of the screen.
  4. Click on “Rotate” from the menu that appears.
  5. Select the desired rotation angle.
  6. Click on the “Save” button to save the changes to the image.

Alternatively, if you want to rotate multiple images at once, you can select the files, then right-click the selection and choose the rotate option.

The preview mode also offers several other basic tools for editing images.




Can you create documents directly from Google Drive?

Yes, it is possible to create documents directly from Google Drive. Google Drive is a cloud-based storage service provided by Google that allows users to store, share, and access files from any device. It also includes a suite of productivity tools, including Google Docs, Google Sheets, and Google Slides, that allow users to create, edit, and collaborate on documents, spreadsheets, and presentations, respectively.

To create a new document in Google Drive, you can follow these steps:

  1. Open Google Drive by going to drive.google.com or by opening the Google Drive app on your device.
  2. Click on the “+ New” button in the top-left corner of the screen.
  3. Select “Google Docs”, “Google Sheets” or “Google Slides” from the drop-down menu.
  4. A new document will be created and will open in a new tab.

You can also create a new document by right-clicking on the Google Drive window and selecting “New” from the context menu. The new document will be saved to your Google Drive and can be accessed, edited, and shared with others. You can also upload existing documents to Google Drive and convert them to Google Docs, Sheets or Slides format to edit them collaboratively.
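
The same actions can be scripted. Here is a minimal Apps Script sketch that creates one file of each type in your Drive; the file names are placeholders.

    // Creates a new Doc, Sheet, and Slides deck in My Drive.
    function createDocuments() {
      const doc = DocumentApp.create('Meeting notes');     // Google Doc
      const sheet = SpreadsheetApp.create('Budget 2023');  // Google Sheet
      const deck = SlidesApp.create('Project kickoff');    // Google Slides
      Logger.log(doc.getUrl()); // link to the newly created Doc
    }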

Do you need a Google Drive account to view files that are shared with you?

No, you do not need a Google Drive account to view files that are shared with you. If someone shares a file with you on Google Drive, they can give you access to it by sending you a link to the file, or by adding you as a collaborator. When you click on the link, you can view the file in your browser without having to sign in to a Google account.

However, if the file is shared with you as view-only and the owner has restricted downloading, you will only be able to view the file, not download, print, or copy it. If you want full access to the file, or want to collaborate on it, you will need to sign in to a Google account or create a new one.

It is important to note that the shared link may be password protected, or the link may expire after a certain period of time. Additionally, if the person sharing the file has enabled access restrictions, such as only allowing certain people or certain domains to access the file, you may not be able to view the file if you do not meet those requirements.

How can you edit documents with sensitive information on Google Drive?

There are several ways to edit documents with sensitive information on Google Drive:

  1. Use Google’s built-in security features: Google Drive offers several security features that can help protect sensitive information, such as two-factor authentication, password-protected sharing, and remote wipe. These features can help keep your documents secure while you’re editing them.
  2. Use a password-protected file format: Many file formats, such as Microsoft Office or PDF, allow you to set a password to protect the document from unauthorized access. This means that even if someone gains access to your Google Drive account, they won’t be able to view or edit the document without the password.
  3. Use a third-party encryption tool: You can encrypt documents with a third-party tool before uploading them to Google Drive. This ensures the documents stay secure even if someone gains access to your Google Drive account.
  4. Limit access to specific people: You can use Google Drive’s sharing settings to limit access to specific people, such as people within your organization or specific email addresses, so that only authorized individuals can view or edit the document (see the sketch after this list).
  5. Use Google Workspace’s additional security features: Google Workspace (previously G Suite) offers additional security features such as data loss prevention, advanced threat protection, and compliance with industry standards like HIPAA and SOC2.

It’s important to note that while these methods can help protect sensitive information, they are not foolproof; it’s always a good idea to review your security and privacy settings, and to use strong, unique passwords together with two-factor authentication.
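
As a companion to tip 4, here is a short Apps Script sketch that locks a file down to a single named viewer. The file ID and email address are placeholders.

    // Makes a file private, then grants view-only access to one person.
    function restrictFileAccess() {
      const file = DriveApp.getFileById('FILE_ID'); // placeholder ID
      file.setSharing(DriveApp.Access.PRIVATE, DriveApp.Permission.NONE);
      file.addViewer('colleague@example.com'); // placeholder address
    }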

Top 10 Google Docs Tips and Tricks

  1. Use keyboard shortcuts: Google Docs has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo the last action, or “Ctrl + Shift + V” to paste text without formatting.
  2. Collaborate in real-time: With Google Docs, you can work on documents with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
  3. Use the “Explore” feature: The Explore feature in Google Docs can help you research and write documents more quickly by suggesting relevant information, images, and citations.
  4. Use the “Research” feature (since folded into Explore): it lets you find and insert quotes or information from external sources directly into your document.
  5. Use templates: Google Docs has a wide variety of templates available for different types of documents, such as resumes, letters, and more. These templates can help you get started quickly and ensure a professional look for your document.
  6. Use the “Voice Typing” feature: Google Docs has a built-in “Voice Typing” feature that allows you to dictate text into your document using your voice. This can be a great way to write more quickly, or to transcribe an audio recording.
  7. Use the “Add-ons” feature: You can use Google Docs’ Add-ons feature to add extra functionality to your documents, such as the ability to sign PDFs, create diagrams, and more.
  8. Use the “Commenting” feature: The commenting feature in Google Docs allows you to leave feedback or suggestions directly on a document, making it easy for others to see and respond to your comments.
  9. Use the “Suggesting” mode: Suggesting mode (Google Docs’ equivalent of track changes) allows multiple people to collaborate on a document and see each other’s proposed changes while keeping the original document intact.
  10. Use the “Headings” feature: Using headings in Google Docs can help structure and organize your documents, making them more readable and easier to navigate. You can format text as headings, then use the “Table of Contents” feature to create a table of contents for the document based on the headings.

These are some of the most useful tips and tricks for getting the most out of Google Docs. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your writing and editing process.

How do we upload a large file on Google Docs? I am trying to upload a 316 page file but only 63 pages are uploading.

There are a few things you can try to upload a large file on Google Docs:

  • Use the Google Drive app: The Google Drive app allows you to upload files up to 5 TB in size. You can download it from Google Play or the App Store and then use it to upload your large file.
  • Zip the file: Compress your file into a .zip or .rar file and then upload it to Google Drive. Once the file is uploaded, you can unzip it and open it in Google Docs.
  • Convert the file: If the file is in a format that is not compatible with Google Docs, convert it to a compatible format (such as .docx or .pdf) and then upload it.
  • Split the file: If you are unable to upload the file in one go, you can split it into smaller parts and upload them separately. Once all the parts are uploaded, you can merge them in Google Docs.
  • Check your internet connection: A weak internet connection can cause issues with uploading large files. Ensure that you are connected to a stable and fast internet connection.
  • Try using Google Chrome browser: Some users have reported that using Chrome browser instead of other browsers such as Firefox or Safari can help with uploading large files.

It’s worth noting that Google Drive accepts uploads up to 5 TB, but files converted to Google Docs format face much smaller limits, which may explain a partial upload. In addition, make sure you have enough storage available in your Google Drive account.

Top 10 Google Slides Tips and Tricks

  1. Use keyboard shortcuts: Google Slides has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo the last action, or “Ctrl + Shift + V” to paste text without formatting.
  2. Collaborate in real-time: With Google Slides, you can work on presentations with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
  3. Use the “Explore” feature: The Explore feature in Google Slides can help you research and write your presentation more quickly by suggesting relevant information, images, and citations.
  4. Use templates: Google Slides has a wide variety of templates available for different types of presentations, such as business, education, and more. These templates can help you get started quickly and ensure a professional look for your presentation.
  5. Use the “Add-ons” feature: You can use Google Slides’ Add-ons feature to add extra functionality to your presentations, such as the ability to create charts, diagrams, and more.
  6. Use the “Master” feature: The Master feature in Google Slides allows you to create a template slide, with a specific layout and design, that can be reused across multiple slides in the same presentation, making it easy to maintain consistency.
  7. Use the “Speaker Notes” feature: The “Speaker Notes” feature in Google Slides allows you to write notes for yourself about what you want to say for each slide, which can be helpful when giving a presentation.
  8. Use the “Animations” feature: Google Slides allows you to add animations to elements on your slide, to make your presentation more dynamic and engaging.
  9. Use the “Transitions” feature: The Transitions feature in Google Slides allows you to add effects between slides, such as fade, dissolve, and more, giving your presentation a polished look.
  10. Use the “Presenter View” feature: The “Presenter View” feature in Google Slides allows you to see the current slide, the next slide, your speaker notes, and a timer while presenting, so you can stay on track and keep your audience engaged.

These are some of the most useful tips and tricks for getting the most out of Google Slides. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your presentation-making process.

Top 10 Google Forms Tips and Tricks

Here are ten tips and tricks for using Google Forms:

  1. Use “Go to section based on answer” to create a branching form, where the questions a respondent sees are based on their previous answers.
  2. Use the “Required” option to ensure that respondents complete certain questions before submitting the form.
  3. Use the “Data validation” option to ensure that respondents enter certain types of information, such as a valid email address or a number within a certain range.
  4. Use the “Randomize order of questions” option to randomize the order of questions for each respondent, which can help prevent bias in your data.
  5. Use the “Limit to one response” option to ensure that each respondent can only submit the form once.
  6. Use the “Add collaborators” option to share the form with others and work on it together in real time.
  7. Use an add-on such as formLimiter to automatically close your form at a specific date and time, or after a certain number of responses have been received.
  8. Use the “Autocomplete” option to make it easier for respondents to enter frequently used or personal information.
  9. Use the “File upload” option to collect files and documents from respondents, such as images or PDFs.
  10. Use the “Create a quiz” option to create a multiple-choice or checkbox quiz, and then use the “Grade” option to automatically grade the quiz and provide feedback to respondents.

Is there a way to find out the number of respondents in Google Forms without opening each respondent’s response?

Yes, there is a way to find out the number of respondents in Google Forms without opening each response. The “Responses” tab of the form shows a summary, including the total number of responses received, and offers the option to view the responses in a spreadsheet. You can also filter the responses by various criteria and download them to your computer. Additionally, add-ons such as “Form Publisher” or “formMule” can send the responses to Google Sheets or Excel, where you can use spreadsheet functions to analyze the data.
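
If you would rather check the count from a script, a one-function Apps Script sketch does it; the form ID is a placeholder.

    // Logs the total number of responses a form has received.
    function countResponses() {
      const form = FormApp.openById('FORM_ID'); // placeholder ID
      Logger.log(form.getResponses().length);
    }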



Top 10 Google Sheets Tips and Tricks

Here are ten tips and tricks for using Google Sheets:

  1. Use keyboard shortcuts to quickly navigate and perform common actions, such as “Ctrl + C” to copy, “Ctrl + V” to paste, and “Ctrl + Z” to undo.
  2. Use the “=QUERY” function to quickly filter and sort large data sets, similar to a SQL query.
  3. Use the “=IMPORTXML” function to import structured data from websites, such as stock prices or weather data.
  4. Use the “=IMPORTRANGE” function to import data from other sheets, such as data from a master sheet that is shared with multiple team members.
  5. Use the “=IF” function to apply conditional logic, such as charging sales tax only above a certain order size or paying commission only when a target is met.
  6. Use the “=SUMIF” and “=COUNTIF” functions to perform mathematical operations based on a certain condition, such as summing all numbers in a range that are greater than a certain value.
  7. Use the “=VLOOKUP” function to look up and retrieve data from another part of your spreadsheet (pair it with “=IMPORTRANGE” to pull from other documents).
  8. Use the “=HLOOKUP” function to do a horizontal lookup.
  9. Use the “Data validation” option to ensure that data entered in a certain range of cells meets certain conditions, such as being a whole number or a date within a certain range.
  10. Use the “Conditional formatting” option to format cells based on their contents, such as making all negative numbers red, or highlighting cells that contain a certain keyword.
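
Here are a few of these functions in action; the ranges, URLs, and criteria are placeholders to adapt to your own sheet:

    =QUERY(A1:D100, "select A, D where D > 100 order by D desc", 1)
    =IMPORTRANGE("https://docs.google.com/spreadsheets/d/SHEET_ID", "Sheet1!A1:C10")
    =SUMIF(B2:B100, ">500")
    =VLOOKUP("ACME Corp", A2:C100, 3, FALSE)
    =IF(A2 > 100, "Over budget", "OK")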

How can I import LinkedIn searches into Google Sheets?

There are a few different ways to import LinkedIn searches into Google Sheets:

  1. Use a LinkedIn scraper tool: There are a number of LinkedIn scraper tools available online that can be used to scrape data from LinkedIn searches and export it to Google Sheets. Some popular options include Hunter.io, Skrapp.io, and LeadLeaper.
  2. Use the LinkedIn API: LinkedIn offers an API (access is restricted to approved developers) that can return search-related data. You can use it to extract data and import it into Google Sheets using a script or a tool like Import.io.
  3. Use Google Sheets Add-ons: There are several add-ons available for Google Sheets that allow you to import data from LinkedIn searches. Some popular options include Hunter, LinkedIn Sales Navigator, and LinkedIn Lead Gen Forms.
  4. Use a manual copy-paste method: You can also use a manual copy-paste method to import LinkedIn searches into Google Sheets. You can perform a search on LinkedIn, go through the results, and copy-paste the data you want into a Google Sheet.

Please note that some of these methods may require a LinkedIn premium account or may limit the amount of data that can be scraped. Also, some scraping methods may violate LinkedIn’s terms of service.

Top 10 Google Search Tips and Tricks

What are top 10 Google Search Tips and Tricks that very few people know about?

  1. Use quotation marks to search for an exact phrase: If you want to search for a specific phrase, enclose it in quotation marks. Example: “Google Search Tips and Tricks”
  2. Use the minus sign to exclude specific words: If you want to exclude specific words from your search, use the minus sign (-) immediately before the word you want to exclude. Example: Google Search Tips and Tricks -few
  3. Use the site: operator to search within a specific website: If you want to search for something within a specific website, use the site: operator followed by the website’s URL. Example: Google Search Tips site:www.example.com
  4. Use the filetype: operator to find specific file types: If you want to find a specific file type, use the filetype: operator followed by the file extension. Example: Google Search Tips filetype:pdf
  5. Use the related: operator to find related websites: If you want to find websites related to a specific website, use the related: operator followed by the website’s URL. Example: related:www.example.com
  6. Use the define: operator to find definitions: If you want to find the definition of a word, use the define: operator followed by the word. Example: define:Google
  7. Use the link: operator to find websites that link to a specific website: If you want to find websites that link to a specific website, use the link: operator followed by the website’s URL. Example: link:www.example.com
  8. Use the cache: operator to view a website’s cached version: If you want to view a website’s cached version, use the cache: operator followed by the website’s URL. Example: cache:www.example.com
  9. Use the intext: operator to search for specific words within a webpage: If you want to search for specific words within a webpage, use the intext: operator followed by the word. Example: intext:Google Search Tips
  10. Use the inurl: operator to search for specific words within a URL: If you want to search for specific words within a URL, use the inurl: operator followed by the word. Example: inurl:Google Search Tips

These are just a few of the many advanced search techniques that can be used on Google, and can help you find more specific and relevant results. Keep in mind that Google’s search algorithm is constantly evolving so some of the tips may not work as expected, but they’re still worth trying.

What challenges remain in advancing the safety and privacy features of Google Images?

There are several challenges that remain in advancing the safety and privacy features of Google Images:

  1. Identifying and removing inappropriate content: Identifying and removing inappropriate content, such as child sexual abuse material, remains a major challenge for Google Images. Despite the use of machine learning algorithms and human moderators, it can be difficult to accurately identify and remove all inappropriate content.
  2. Protecting personal privacy: Protecting the privacy of individuals whose images appear on Google Images is also a challenge. Google has implemented features such as “SafeSearch” to help users filter out explicit content, but there remains a risk that sensitive personal information could be exposed through reverse image searches.
  3. Dealing with misinformation: Google Images is also facing challenges in dealing with misinformation, as false or misleading information can be spread through images.
  4. Balancing user’s rights with copyright infringement: Balancing the rights of users to access and share information with the rights of copyright holders to protect their work is a challenging issue for Google Images. Google has implemented a copyright removal process, but it can be difficult to effectively enforce copyright infringement on a large scale.
  5. Addressing the issue of deepfakes: With the advent of deepfakes, images can be manipulated and deepfake images can be difficult to detect, this is a new challenge for Google Images to address.
  6. Addressing the needs of visually impaired users: Making sure that the images in Google Images are accessible to visually impaired users is another important challenge for Google.

Google continues to invest in technology and policies to address these challenges and to ensure the safety and privacy of users on its platform. However, given the scale and complexity of these issues, new challenges will likely continue to arise.

AWS Azure Google Cloud Certifications Testimonials and Dumps

Do you want to become a Professional DevOps Engineer, a Cloud Solutions Architect, a Cloud Engineer, a modern Developer or IT Professional, a versatile Product Manager, or a hip Project Manager? Cloud skills and certifications can be just the thing you need to move into cloud or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

In this blog, we share AWS, Azure, and GCP cloud certification testimonials along with frequently asked questions and answers.


PASSED AWS CCP (2022)

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Went through the entire CloudAcademy course. Most of the info went in one ear and out the other. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.

Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.

Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they covered way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well-written explanations, and their links to the white papers let me know what to read and where.



Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.

Just Passed My CCP exam today (within 2 weeks)

I would like to thank this community for recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD’s scenario-based practice questions; the real exam is a lot less wordy). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I recommend scheduling the exam in advance so that you don’t procrastinate.

Resources:

-Stephane’s course on Udemy (I have seen people say to skip the hands-on videos, but I found them extremely helpful for understanding most of the concepts, so try not to skip them)

-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)

Previous AWS knowledge:

-Very little to no experience (deployed my group’s app to the cloud via Elastic Beanstalk in college; had zero clue at the time about what I was doing, but had clear guidelines)

Preparation duration: ~2 weeks (honestly, watched videos for 12 days and then went over the summary and practice tests on the last two days)

Links to resources:

https://www.udemy.com/course/aws-certified-cloud-practitioner-new/

https://tutorialsdojo.com/courses/aws-certified-cloud-practitioner-practice-exams/

I used Stephane Maarek on Udemy. Purchased his course and the 6 practice exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3-4 per day) till I was consistently getting over 80% – passed my exam with an 882.

Passed – CCP CLF-C01

What an adventure. I’ve never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀

Passed with two weeks of prep (after work and weekends)



Resources Used:

  • https://www.exampro.co/

    • This was just a nice structured presentation that also gives you the PowerPoint slides plus cheat sheets and a nice overview of what is said in each video lecture.

  • Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo

    • These are some good prep exams, they ask the questions in a way that actually make you think about the related AWS Service. With only a few “Bullshit! That was asked in a confusing way” questions that popped up.

Passed AWS CCP. The score was beyond what I expected

I took the CCP 2 days ago and got the pass notification right after submitting the answers. Within about 3 hours I got an email from Credly with the badge, and this morning an official email from AWS congratulating me on passing; the score was much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. Across the demo exams I failed one and passed the rest with about 700-800, but on the real exam I got an 860. The questions on the real exam are kind of less verbose IMO, but I don’t fully agree with some people on this sub saying they are easier.
Just a little bit of sharing, now I’ll find something to continue ^^

Good luck with your own exams.

Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.

Background

– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).

– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.

Study stats

– Spent three weeks studying for the exam.

– Studied an hour to two every day.

– Solved 800-1000 practice questions.

– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly sift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or figured I’d easily forget.

– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.

– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.

– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.

– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.

Resources used

– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal

– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis

– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek**

– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*

– One or two free practice exams found by a quick Google search

*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.

**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.

Notes

– My study regimen (i.e., an hour to two every day for three weeks) was overkill.

– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.

– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.

What would I do differently?


– Focus on practice tests only. No video lectures.

– Focus on the technologies domain. You can intuit your way through questions in the other domains.

– Chill

AWS SAA-C02 SAA-C03 Exam Prep

Just passed SAA-C03, thoughts on it

  • Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.

  • The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.

  • It is by far harder than the Developer Associate exam, despite it having a broader scope. The DVA-C02 exam was like doing a speedrun but this felt like finishing off Sigrun on GoW. Ya gotta take your time.

I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.

Passed SAA-C03 – Feedback

Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.

I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar given my background). I started my preparation about a month ago with the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved on to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the practice exams (I passed 4 of the 6 with scores in the 70s); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).

Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.

My Study Guide for passing the SAA-C03 exam

Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.

First off, my background: I have 8 years of development experience and have used AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.

SAA-C03 Exam Prep

For my exam prep, I bought the Adrian Cantrill video course plus the Tutorials Dojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but as others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.

The TD video course is short and a good overall summary of the topics you’ve just learned; one TD lesson covers multiple topics, so the content is highly concise. After completing Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.


For the TD practice exams, I took the tests in chronological order and didn’t jump back and forth until I had completed them all. I first tried all 7 timed-mode tests, reviewing every wrong answer after each attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and it is by far one of the most helpful features of the site IMO: it generates a unique set from the whole TD question bank, so every attempt was challenging. I also noticed that course progress doesn’t advance if you fail a specific test, so I retook any test I failed.

The Actual SAA-C03 Exam

The actual AWS exam is almost the same as the ones in the TD tests, where:

  • All of the questions are scenario-based

  • There are two (or more) valid solutions in the question, e.g.:

    • Need SSL: options are ACM and self-signed URL

    • Need to store DB credentials: options are SSM Parameter Store and Secrets Manager

  • The scenarios are long-winded and ask for:

    • MOST Operationally efficient solution

    • MOST cost-effective

    • LEAST amount of overhead

Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps anyone who is struggling.

Another Passed SAA-C03?

Just another thread about passing the exam? I passed SAA-C03 yesterday and would like to share how I earned the certification.

Background:

– graduate with networking background

– working experience in on-premises infrastructure automation, mainly using Ansible, Python, Zabbix, etc.

– cloud experience: a short period, around 3-6 months, with practice

– provisioned cloud application using terraform in azure and aws

Course that I used fully:

– AWS Certified Solutions Architect – Associate (SAA-C03) | learn.cantri (cantrill.io)

– AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path (tutorialsdojo.com)

Course that I used partially or little:

– Ultimate AWS Certified Solutions Architect Associate (SAA) | Udemy

– Practice Exams | AWS Certified Solutions Architect Associate | Udemy

Lab that I used:

– Free tier account with cantrill instruction

– Acloudguru lab and sandbox

– Percepio lab

Comment on course:

Cantrill’s course is in-depth with a lot of practical knowledge, like email aliases and more; check it out to learn more.

The Tutorials Dojo practice exams helped me filter the answers and guided me to the correct ones; whenever I got a specific topic wrong, I rewatched the Cantrill video. A few topics aren’t covered by Cantrill, but the guidelines/reviews in the practice exams provide plenty of detail. I did all the other modes before the timed-based ones, then averaged 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real exam is harder than the practice exams, in my opinion.

As for the Udemy course and practice exams, I went through some of them, but I think those practice exams are quite hard compared to Tutorials Dojo’s.

Labs: just get your hands dirty and the knowledge will sink deep into your brain. My advice is not to just copy and paste through the labs, but to really read the description of each parameter in the AWS portal.

Advice:

you need to know some general exam topics like how to:

– S3 private access

– EC2 availability

– Kinesis products, including Firehose, Data Streams, and so on

– IAM

My next targets are AWS SAP and CKA. I’m still searching for suitable material for AWS SAP, but I plan to mainly use the A Cloud Guru sandbox and a homelab to learn the subject, and practice with Cantrill’s labs on GitHub.

Good luck anyone!

Passed SAA

I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams, and I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult; in my opinion, much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.

I only had a couple of questions on ML / AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these in and out.

My company is offering stipends for each certification, so I’m going straight to Developer next.

Recently passed SAA-C03

Just passed my SAA-C03 yesterday with 961 points; it was my first AWS certification. I used Cantrill’s course, went through the course materials twice, and took around 6 months to study, mostly due to my busy schedule. I found his materials very detailed; they probably go beyond what you need for the actual exam.

I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing: they get you used to the type of questions on the actual exam and reveal missing knowledge. I would not have passed otherwise.

Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:

* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.

* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.

* Pinpoint journey: question about customers replying to an outbound SMS and then storing their feedback.

Not sure if they were graded or if Amazon is testing out new material.

Cheers.

Another SAP-C01 Pass

Received my notification this morning that I passed with an 811.

Prep time: 10 weeks, 2 hrs a day

Materials: Neal Davis videos/practice exams, Jon Bonso practice exams, white papers, misc YouTube videos, some hands-on

Prof Experience: 4 years AWS using main services as architect

AWS Certs: CCP-SSA-DVA-SAP(now)

Thoughts: The exam was way more familiar to me than the Developer exam. I use very few AWS developer tools, mainly core AWS services. Neal’s videos were very straightforward, easy to digest, and on point; I was able to watch most of them on a plane flight to Vegas.

After the video series I started to hit his section-based exams, the main exam, and my notes, following up with some hands-on. I was getting destroyed on some of the exams early on and had to rewatch and research the topics, writing notes. There is a lot of nuance and fine detail in these topics; you’ll see this when you take the practice exams. The little details matter.

Bonso’s exams were nothing less than awesome, as per usual: same difficulty and quality as Neal Davis’. I followed the same routine, section-based exams followed by the final exam. I believe Neal said to aim for 80s on his final exams before sitting the real one; I’d agree, because that’s where I was hitting a week before the exam (mid 80s). Both Neal’s and Jon’s exams were on par with the real exam’s difficulty, if not a shade more difficult.

The exam itself was very straightforward. In my experience the questions were not overly verbose and were straight to the point compared to the practice exams I took. I was able to quickly narrow down the questions and make a selection. Flagged 8 questions along the way and had 30 minutes to review all my answers. Unlike some people, I didn’t feel like it was a brain melter and actually enjoyed the challenge. Maybe I’m a masochist, who knows.

Advice: Follow Neal’s plan, bone up on weak areas, and be confident. The questions have a pattern based on the domain; doing enough practice exams will allow you to see the pattern, and research will confirm your suspicions. You can pass this exam!

Good luck to those preparing now and god speed.

AWS Developer Associate DVA-C01 Exam Prep

I Passed AWS Developer Associate Certification DVA-C01: Testimonials

Passed DVA-C01

Passed the Certified Developer Associate this week.

Primary study was Stephane Maarek’s course on Udemy.

I also used the Practice Exams by Stephane Maarek and Abhishek Singh.

I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.

The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.

Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.

Cleared AWS Certified Developer – Associate (DVA-C01)

I cleared the Developer Associate exam yesterday, scoring 873.
Actual exam experience: questions focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the difference between user pools and identity pools).
I found 3 questions just on Redis vs. Memcached (so focus there too, and know the exact use cases and differences). Other topics were CloudFormation, Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me; some questions were one-liners and some were too long.

Resources: The main resources I used were on Udemy: Stéphane Maarek’s course plus practice exams from Neal Davis and Stéphane Maarek. These exams proved really good and even helped me focus on the areas I lacked. They are up to the level of the actual exam; I found 3-4 identical questions on the real exam (this might just be luck!), so I feel Stéphane’s course is more than sufficient and you can trust it. I had already earned the Solutions Architect Associate, so I knew the basics; I took around 2 weeks for preparation and revised Stéphane’s course as much as possible. In parallel, I took the practice exams mentioned above, which guided me on where to focus.

Thanks to all of you, and feel free to comment/DM me if you think I can help you in any way with achieving the same.

Another Passed Associate Developer Exam (DVA-C01)

Having already passed the Associate Architect exam (SAA-C03) 3 months ago, I was much more relaxed going into this one. I did the exam with Pearson VUE at home with no problems, using Adrian Cantrill for the course together with the TD exams.

Studied for 2 weeks at 1-2 hours a day, since there is a big overlap with the Associate Architect course, even though the exam has a different approach, more focused on the serverless side of AWS. Lots of DynamoDB, Lambda, API Gateway, KMS, CloudFormation, SAM, SSO, Cognito (user pools and identity pools), and IAM role/credential best practices.

In terms of difficulty, I do think it was a bit easier than the Associate Architect, though maybe that’s just in my mind, as it was my second exam and I went in a bit more relaxed.

Next step is the SysOps Associate. I will use the Adrian Cantrill and Stephane Maarek courses, as it’s said to be the most difficult associate exam.

Passed the SCS-C01 Security Specialty

A mixture of Tutorials Dojo practice exams, the A Cloud Guru course, and Neal Davis’ course and exams helped a lot. Some unexpected questions caught me off guard, but with educated guessing, thanks to the material I studied, I was able to overcome them. It’s important to understand:

  1. KMS Keys

    1. AWS Owned Keys

    2. AWS Managed KMS keys

    3. Customer Managed Keys

    4. Asymmetric keys

    5. Symmetric keys

    6. Imported key material

    7. What services can use AWS Managed Keys

  2. KMS Rotation Policies

    1. The rotation that can be applied depends on the type of key (when rotation is possible at all)

  3. Key Policies

    1. Grants (temporary access)

    2. Cross-account grants

    3. Permanent policies

    4. How permissions are distributed depending on the assigned principal

  4. IAM Policy format

    1. Principals (supported principals)

    2. Conditions

    3. Actions

    4. Allow to a service (ARN or public AWS URL)

    5. Roles

  5. Secrets Management

    1. Credential Rotation

    2. Secure String types

    3. Parameter Store

    4. AWS Secrets Manager

  6. Route 53

    1. DNSSEC

    2. DNS Logging

  7. Network

    1. AWS Network Firewall

    2. AWS WAF (some questions try to trick you into thinking AWS Shield is needed instead)

    3. AWS Shield

    4. Security Groups (Stateful)

    5. NACL (Stateless)

    6. Ephemeral Ports

    7. VPC FlowLogs

  8. AWS Config

    1. Rules

    2. Remediation (custom or AWS managed)

  9. AWS CloudTrail

    1. AWS Organization Trails

    2. Multi-Region Trails

    3. Centralized S3 Bucket for multi-account log aggregation

  10. AWS GuardDuty vs AWS Macie vs AWS Inspector vs AWS Detective vs AWS Security Hub

It gets more in-depth; I’m willing to help anyone out who has questions. If you don’t mind joining my Discord to discuss with others so we can help each other out, that would be great. A study group community. Thanks.

https://discord.gg/pZbEnhuEY9

Passed the Security Specialty

Passed Security Specialty yesterday.

Resources used were:

Adrian Cantrill (for the labs) and Jon Bonso (for the test bank).

Total time spent studying was about a week, due to the overlap with the SA Pro exam I passed a couple of weeks ago.

Now working on getting Networking Specialty before the year ends.

My longer term goal is to have all the certs by end of next year.

Advanced Networking – Specialty

Passed AWS Certified Advanced Networking – Specialty ANS-C01 2 days ago

This was a tough exam.

Here’s what I used to get prepped:

Exam guide book by Kam Agahian and a group of authors: this just got released and has all you need in a concise manual, including 3 practice exams. It is a must-buy for future reference and covers ALL current exam topics, including container networking, SD-WAN, etc.

Stephane Maarek’s Udemy course: it is mostly up-to-date with the main exam topics, including TGW, Network Firewall, etc. To-the-point lectures with lots of hands-on demos give you just what you need; highly recommended as well!

Tutorials Dojo practice tests to drive it home: these helped me get an idea of the question wording, so I could train myself to read fast, pick out key words, compare similar answers, and build confidence in my knowledge.

I crammed daily for 4 weeks (after work; I have a full-time job plus family) and went in and nailed it. I do have a networking background (15+ years) and currently work as a cloud security engineer, using AWS daily, especially EKS, TGW, GWLB, etc.

For those not from a networking background – it would definitely take longer to prep.

Good luck!

 
 
 
 
Azure Fundamentals AZ900 Certification Exam Prep
Azure Fundamentals AZ900 Certification Exam Prep
#Azure #AzureFundamentals #AZ900 #AzureTraining #LearnAzure #Djamgatech

 

Passed AZ-900, SC-900, AI-900, and DP-900 within 6 weeks!

 
Achievement Celebration

What an exciting journey. I think AZ-900 was the hardest, probably because it was my first Microsoft certification. After that, the others were fair enough. AI-900 was the easiest.

I generally used Microsoft Virtual Training Day, Cloud Ready Skills, MeasureUp, and John Savill's videos. Having built fundamental knowledge of the cloud, I am planning to do AWS CCP next. Wish me luck!

Passed Azure Fundamentals

 
Learning Material

Hi all,

I passed my Azure Fundamentals exam a couple of days ago with a score of 900/1000. I'd been meaning to take the exam for a few months but kept putting it off for various reasons. The exam was a lot easier than I thought, and easier than the official Microsoft practice exams.

Study materials:

  • A Cloud Guru AZ-900 fundamentals course with practice exams

  • Official Microsoft practice exams

  • MS learning path

  • John Savill’s AZ-900 study cram, started this a day or two before my exam. (Highly Recommended) https://www.youtube.com/watch?v=tQp1YkB2Tgs&t=4s

Will be taking my AZ-104 exam next.

Azure Administrator AZ104 Certification Exam Prep

Passed AZ-104 with about a 6 weeks prep

 
Learning Material

Resources =

John Savill's AZ-104 Exam Cram + Master Class, plus Tutorials Dojo Practice Exams

John’s content is the best out there right now for this exam IMHO. I watched the cram, then the entire master class, followed by the cram again.

The Tutorials Dojo practice exams are essential. Some questions on the actual exam were almost word-for-word what I saw in the practice exams.

Question:

What's everyone using for the AZ-305? I'm obviously already using John's content, and from what I've read the 305 isn't too bad.

Thoughts?

Passed the AZ-140 today!!

 
Achievement Celebration

I passed the (updated?) AZ-140, AVD specialty exam today with an 844. First MS certification in the bag!

Edited to add: This video series from Azure Academy was a TON of help.

https://youtube.com/playlist?list=PL-V4YVm6AmwW1DBM25pwWYd1Lxs84ILZT

Passed DP-900

 
Achievement Celebration

I am pretty proud of this one. Databases are an area of IT where I haven't spent a lot of time, and what time I have spent has been with SQL or MySQL and old-school relational databases. NoSQL was kinda breaking my brain for a while.

Study Materials:

  1. Microsoft Virtual Training Day, got the voucher for the free exam. I know several people on here said that was enough for them to pass the test, but that most certainly was not enough for me.

  2. Exampro.co DP-900 course and practice test. They include virtual flashcards which I really liked.

  3. Whizlabs.com practice tests. I also used their course to fill in the gaps I found through the practice tests.

Passed AI-900! Tips & Resources Included!!

Azure AI Fundamentals AI-900 Exam Prep
 
Achievement Celebration

Huge thanks to this subreddit for helping me kick start my Azure journey. I have over 2 decades of experience in IT and this is my 3rd Azure certification as I already have AZ-900 and DP-900.

Here’s the order in which I passed my AWS and Azure certifications:

SAA>DVA>SOA>DOP>SAP>CLF|AZ-900>DP-900>AI-900

I had no plans to take this certification now but had to, as the free voucher was expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can.

Here’s my study plan for AZ-900 and DP-900 exams:

  • finish a popular video course aimed at the cert

  • watch John Savill’s study/exam cram

  • take multiple practice exams, scoring in the 90s

This is what I used for AI-900:

  • Alan Rodrigues’ video course (includes 2 practice exams) 👌

  • John Savill’s study cram 💪

  • practice exams by Scott Duffy and in 28Minutes Official 👍

  • knowledge checks in AI modules from MS learn docs 🙌

I also found the notes below to be extremely useful as a refresher. The video can be played multiple times throughout your preparation, as the exam cram part is just around 20 minutes.

https://youtu.be/utknpvV40L0 👏

Just be clear on the topics explained by the above video and you'll pass AI-900. I advise you to watch it at the start, middle, and end of your preparation. All the best in your exam!

Just passed AZ-104

 
Achievement Celebration

I recommend studying networking, as almost all of the questions are related to this topic. Also, AAD is a big one. Lots of load balancers, VNETs, NSGs.

I received very little on these topics:

  • Containers

  • Storage

  • Monitoring

I passed with a 710 but a pass is a pass haha.

I used Tutorials Dojo, but the closest questions I found were in the Udemy practice exams.

Regards,

Passed GCP Professional Cloud Architect

Google Professional Cloud Architect Practice Exam 2022
 

First of all, I already have around 1 year of in-depth experience with GCP, where I was working on GKE, IAM, storage, and so on. I also obtained the GCP Associate Cloud Engineer certification back in June, which helped with the preparation.

I started with Dan Sullivan's Udemy course for Professional Cloud Architect and did some refreshers on the topics I was not familiar with, such as Bigtable, BigQuery, Dataflow, and all that. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture.

In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.

As for practice exams, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak in and grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear on the exam.

I used Tutorials Dojo (Jon Bonso) to prepare for Associate Cloud Engineer before, and by that standard Whizlabs is not as good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.

One thing to note is that there wasn't even a single question similar to the ones from the Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non-case-study questions. Many questions focused on App Engine, data analytics, and networking. There were some Kubernetes questions based on Anthos and cluster networking. I got a tough question regarding storage as well.

I initially thought I would fail, but I pushed on and started tackling the multiple choices by process of elimination, using the keywords in the questions. 50 questions in 2 hours is a tough one, especially due to the lengthy questions and answer choices. I do not know how this compares to the AWS Solutions Architect Professional exam in toughness, but some people do say GCP Professional is tougher than AWS.

All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.

GCP Associate Cloud Engineer Exam Prep

Passed GCP: Cloud Digital Leader

Hi everyone,

First, thanks for all the posts people share. It helps me prep for my own exam. I passed the GCP: Cloud Digital Leader exam today and wanted to share a few things about my experience.

Preparation

I have access to ACloudGuru (AGU) and Udemy through work. I started one of the Udemy courses first, but it was clear the course went beyond the scope of the Cloud Digital Leader certification. I switched over to AGU and enjoyed the content a lot more. The videos were short, and the instructor hit all the topics on the Google exam requirements sheet.

AGU also has three 50-question practice tests. The practice tests are harder than the actual exam (and the practice tests aren't that hard).

I don't know if someone could pass the test just by watching the videos on Google Cloud's certification site, especially with no prior GCP experience.

Overall, I would say I spent 20 hours preparing for the exam. I have my CISSP and I'm working on my CCSP. After taking the test, I realized I had way over-prepared.

Exam Center

It was my first time at this testing center and I wasn't happy with the experience. A few of the issues I had were:

– My personal items (phone, keys) were placed in an unlocked filing cabinet

– My desk area was dirty. There were eraser shreds (or something similar), and I had to move the keyboard and mouse and brush all the debris out of my workspace

– The laminated sheet they gave me looked like someone had spilled Kool-Aid on it

– They only offered earplugs instead of noise-cancelling headphones

Exam

My recommendation for the exam is to know the Digital Transformation piece as well as you know all the GCP services and what they do.

I wish you all luck on your future exams. Onto GCP: Associate Cloud Engineer.

Passed the Google Cloud: Associate Cloud Engineer

Hey all, I was able to pass the Google Cloud: Associate Cloud Engineer exam in 27 days.

I studied about 3-5 hours every single day.

I created this note to share the resources I used to pass the exam.

Happy studying!

GCP ACE Exam Aced

Hi folks,

I am glad to share that I cleared my GCP ACE exam today, and I would like to share my preparation with you:

1) I completed these courses from Coursera:

1.1 Google Cloud Platform Fundamentals – Core Infrastructure

1.2 Essential Cloud Infrastructure: Foundation

1.3 Essential Cloud Infrastructure: Core Services

1.4 Elastic Google Cloud Infrastructure: Scaling and Automation

After these courses, I did a couple of Qwiklabs quests, listed in order:

2 Getting Started: Create and Manage Cloud Resources (Qwiklabs Quest)

   2.1 A Tour of Qwiklabs and Google Cloud

   2.2 Creating a Virtual Machine

   2.3 Compute Engine: Qwik Start – Windows

   2.4 Getting Started with Cloud Shell and gcloud

   2.5 Kubernetes Engine: Qwik Start

   2.6 Set Up Network and HTTP Load Balancers

   2.7 Create and Manage Cloud Resources: Challenge Lab

 3 Set up and Configure a Cloud Environment in Google Cloud (Qwiklabs Quest)

   3.1 Cloud IAM: Qwik Start

   3.2 Introduction to SQL for BigQuery and Cloud SQL

   3.3 Multiple VPC Networks

   3.4 Cloud Monitoring: Qwik Start

   3.5 Deployment Manager – Full Production [ACE]

   3.6 Managing Deployments Using Kubernetes Engine

   3.7 Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab

 4 Kubernetes in Google Cloud (Qwiklabs Quest)

   4.1 Introduction to Docker

   4.2 Kubernetes Engine: Qwik Start

   4.3 Orchestrating the Cloud with Kubernetes

   4.4 Managing Deployments Using Kubernetes Engine

   4.5 Continuous Delivery with Jenkins in Kubernetes Engine

After these, I did the following for mock exam preparation:

  1. Jon Bonso's Tutorial Dojo GCP ACE practice exams

  2. Udemy course:

https://www.udemy.com/course/google-associate-cloud-engineer-practice-exams-2021-d/learn/quiz/5278722/results?expanded=591254338#overview

And yes, folks, this took me 3 months to prepare. So take your time and prepare well.

#djamgatech #aws #azure #gcp #ccp #az900 #saac02 #saac03 #az104 #azai #dasc01 #mlsc01 #scsc01 #azurefundamentals #awscloudpractitioner #solutionsarchitect #datascience #machinelearning #azuredevops #awsdevops #az305 #ai900 #DP900 #GCPACE

Comparison of AWS vs Azure vs Google

Cloud computing has revolutionized the way companies develop applications. Most modern applications are now cloud-native. Undoubtedly, the cloud offers immense benefits like reduced infrastructure maintenance, increased availability, cost reduction, and many others.

However, which cloud vendor to choose is a challenge in itself. If we look at the cloud computing landscape, the three main providers that come to mind are AWS, Azure, and Google Cloud. Today, we will compare the top three cloud giants and see how they differ. We will compare their services, specialties, and pros and cons. After reading this article, you will be able to decide which cloud vendor is best suited to your needs and why.

History and establishment

AWS

AWS is the oldest player in the market, operating since 2006, and its history tracks how cloud computing itself has changed. Being the first in the cloud industry, it has gained a particular advantage over its competitors. It offers more than 200 services to its users. Some of its notable clients include:

  • Netflix
  • Expedia
  • Airbnb
  • Coursera
  • FDA
  • Coca Cola

Azure

Azure by Microsoft started in 2010. Although it started four years later than AWS, it is catching up quite fast. Azure is Microsoft's public cloud platform, which is why many companies prefer Azure for their Microsoft-based applications. It also offers more than 200 services and products. Some of its prominent clients include:

  • HP
  • Asus
  • Mitsubishi
  • 3M
  • Starbucks
  • CDC (Centers for Disease Control and Prevention), USA
  • National Health Service (NHS), UK

Google

Google Cloud also started in 2010. Its arsenal of cloud services is relatively small compared to AWS or Azure; it offers around 100 services. However, its services are robust, and many companies embrace Google Cloud for its specialty services. Some of its noteworthy clients include:

  • PayPal
  • UPS
  • Toyota
  • Twitter
  • Spotify
  • Unilever

Market share & growth rate

If you look at the market share and growth chart below, you will notice that AWS has been leading for more than four years. Azure is also expanding fast, but it still has a long way to go to catch up with AWS.

However, in terms of revenue, Azure is ahead of AWS. In Q1 2022, AWS revenue was $18.44 billion and Azure earned $23.4 billion, while Google Cloud earned $5.8 billion.

Availability Zones (Data Centers)

When comparing cloud vendors, it is essential to see how many regions and availability zones are offered. Here is a quick comparison between all three cloud vendors in terms of regions and data centers:

AWS

AWS operates in 25 regions and 81 availability zones. It also offers 218+ edge locations and 12 regional edge caches. You can utilize the edge locations and regional edge caches in services like Amazon CloudFront and AWS Global Accelerator.

Azure

Azure has 66 regions worldwide and a minimum of three availability zones in each region. It also offers more than 116 edge locations.

Google

Google has a presence in 27 regions and 82 availability zones. It also offers 146 edge locations.

All three cloud giants are continuously expanding. Both AWS and Azure offer data centers in China specifically to cater to Chinese consumers. At the same time, Azure seems to have broader coverage than its competitors.

Comparison of common cloud services

Let’s look at the standard cloud services offered by these vendors.

Compute

Amazon's primary compute offering is EC2 instances, which are very easy to operate. Amazon also provides a low-cost option called Amazon Lightsail, which is a perfect fit for those who are new to cloud computing and have a limited budget. AWS charges for EC2 instances only while you are running them. Azure's compute offering is also based on virtual machines, and Google is no different, offering virtual machines in Google's data centers. Here's a brief comparison of the compute offerings of all three vendors:
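
As a concrete taste of the AWS side of that comparison: launching an instance takes only a few lines. A minimal boto3 sketch (the AMI ID is a placeholder; Azure and Google expose equivalent calls through their own SDKs):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one on-demand instance; billing stops when you terminate it.
# The AMI ID is a placeholder - look up a current one for your region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```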

Storage

All three vendors offer various forms of storage, including object-based storage, cold storage, file-based storage, and block-based storage. Here’s a brief comparison of all three:
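
To make the object-storage category concrete, here's a hedged boto3 sketch with placeholder bucket and key names; Azure Blob Storage and Google Cloud Storage expose the same put/get model through their own SDKs:

```python
import boto3

s3 = boto3.client("s3")

# Store a local file as an object, then fetch it back.
# Bucket and key names are placeholders.
s3.upload_file("report.pdf", "my-example-bucket", "reports/2022/report.pdf")
s3.download_file("my-example-bucket", "reports/2022/report.pdf", "copy.pdf")
```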

Database

All three vendors support managed database services. They also offer NoSQL as well as document-based databases. AWS also provides a proprietary RDBMS named "Aurora", a highly scalable and fast database offering compatible with both MySQL and PostgreSQL. Here's a brief comparison of all three vendors:
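
Because Aurora is wire-compatible with MySQL (and PostgreSQL), applications connect to it with an ordinary database client; only the endpoint is Aurora-specific. A minimal sketch using the PyMySQL library, with placeholder endpoint and credentials:

```python
import pymysql  # pip install pymysql

# Connect to an Aurora MySQL cluster endpoint exactly as you would to
# any MySQL server. Host, user, and password are placeholders.
conn = pymysql.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
    database="appdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()
```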

Comparison of Specialized services

All three major cloud providers are competing with each other in the latest technologies. Some notable areas of competition include ML/AI, robotics, DevOps, IoT, VR/Gaming, etc. Here are some of the key specialties of all three vendors.

AWS

Being first in the cloud market has many benefits, and Amazon has certainly taken advantage of that. Amazon has advanced specifically in AI and machine learning tools. AWS DeepLens is an AI-powered camera that you can use to develop and deploy machine learning algorithms; it helps you with OCR and image recognition. Similarly, Amazon has launched an open-source library called "Gluon", which helps with deep learning and neural networks. You can use this library to learn how neural networks work even if you lack a technical background. Another service that Amazon offers is SageMaker, which you can use to train and deploy your machine learning models. Amazon's AI lineup also includes Lex, the conversational interface that is the backbone of Alexa, along with Lambda and the Greengrass IoT messaging service.

Another unique (and recent) offering from AWS is IoT TwinMaker. This service can create digital twins of real-world systems like factories, buildings, production lines, etc.

AWS even provides a quantum computing service called Amazon Braket.

Azure

Azure excels where you are already using Microsoft products, especially on-premises. Organizations already using Microsoft products prefer Azure over other cloud vendors because it offers better, more robust integration with the Microsoft ecosystem.

Azure has excellent services related to ML/AI and cognitive services. Some notable services include the Bing Web Search API, Face API, Computer Vision API, Text Analytics API, etc.

Google

Google is the current leader among cloud providers in AI. This is largely because of TensorFlow, Google's open-source library and the most popular framework for developing machine learning applications. Vertex AI and BigQuery Omni are also valuable recent offerings. Similarly, Google offers rich services for NLP, translation, speech, etc.

Pros and Cons

Let’s summarize the pros and cons for all three cloud vendors:

AWS

Pros:

  • An extensive list of services
  • Huge market share
  • Support for large businesses
  • Global reach

Cons:

  • Pricing model. Many companies struggle to understand the cost structure. Although AWS has improved the UX of its cost-related reporting in the AWS console, many companies still hesitate to use AWS because of a perceived lack of cost transparency

Azure

Pros:

  • Excellent integration with Microsoft tools and software
  • Broader feature set
  • Support for open source

Cons:

  • Geared towards enterprise customers

Google

Pros:

  • Strong integration with open source tools
  • Flexible contracts
  • Good DevOps services
  • The most cost-efficient
  • The preferred choice for startups
  • Good ML/AI-based services

Cons:

  • A limited number of services as compared to AWS and Azure
  • Limited support for enterprise use cases

Career Prospects

Keen to learn which vendor's cloud certification you should go for? Here is a brief comparison of the top three cloud certifications and their related career prospects:

AWS

As mentioned earlier, AWS has the largest market share among cloud vendors. That means more companies are using AWS, and there are more vacancies in the market for AWS-certified professionals. Here are the main reasons why you would choose to learn AWS:

Azure

Azure is the second largest cloud service provider. It is ideal for companies that are already using Microsoft products. Here are the top reasons why you would choose to learn Azure:

  • Ideal for experienced users of Microsoft services
  • Azure certifications rank among the top paying IT certifications
  • If you’re applying for a company that primarily uses Microsoft Services

Google

Although Google is considered an underdog in the cloud market, it is slowly catching up. Here's why you may choose to learn GCP:

  • While there are fewer job postings, there is also less competition in the market
  • GCP certifications rank among the top paying IT certifications

Most valuable IT Certifications

Keen to learn about the top-paying cloud certifications and jobs? If you look at the annual salary figures below, you can see the average salary for different cloud vendors and IT companies; no wonder AWS is on top. A GCP Cloud Architect is also among the top five, and the Azure architect comes in at #9.

Which cloud certification to choose depends mainly on your career goals and what type of organization you want to work for. No cloud certification path is better than the other. What matters most is getting started and making progress towards your career goals. Even if you decide at a later point in time to switch to a different cloud provider, you’ll still benefit from what you previously learned.

Over time, you may decide to get certified in all three – so you can provide solutions that vary from one cloud service provider to the next.

Don’t get stuck in analysis-paralysis! If in doubt, simply get started with AWS certifications that are the most sought-after in the market – especially if you are at the very beginning of your cloud journey. The good news is that you can become an AWS expert when enrolling in our value-packed training.

Further Reading

You may also be interested in the following articles:

https://digitalcloud.training/entry-level-cloud-computing-jobs-roles-and-responsibilities/
https://digitalcloud.training/aws-vs-azure-vs-google-cloud-certifications-which-is-better/
https://digitalcloud.training/10-tips-on-how-to-enter-the-cloud-computing-industry/
https://digitalcloud.training/top-paying-cloud-certifications-and-jobs/
https://digitalcloud.training/are-aws-certifications-worth-it/

Source:

https://digitalcloud.training/comparison-of-aws-vs-azure-vs-google/



  • Empowering women with cloud and AI skills: Register for the Google Launchpad for Women series
    by (Training & Certifications) on February 6, 2025 at 2:00 pm

    Last year, we offered our first ever “Google Launchpad for Women” series to empower women within our customer ecosystem to grow their cloud and AI skills. The response from our customers has been tremendous: more than 11,000 women across a breadth of roles - sales, leadership, marketing, finance, and more have completed previous editions of the program. As a result, they are building critical skills that help them put AI to work in their jobs, grow their careers, and help transform their businesses. This year, in honor of International Women's Day, we are opening “Google Launchpad for Women,” to thousands of more customer participants, providing them with no-cost training, exam prep, and access to Google experts. Registration is now open to Google Cloud customers in the Americas, EMEA, and Japan, with the three-week program beginning on March 4th in Japan and March 6th in the Americas and EMEA. Program benefits include: Expert-led training: Two days of in-depth, instructor-led training covering key cloud concepts and best practices. Industry insights: Engage with Google Cloud experts through panel discussions on topics such as Generative AI. Exam preparation: Dedicated sessions to prepare for the Cloud Digital Leader certification exam. Complimentary exam voucher: Participants will receive a voucher for the $99 exam fee. aside_block <ListValue: [StructValue([('title', 'Get hands-on experience for free'), ('body', <wagtail.rich_text.RichText object at 0x3e9487116e50>), ('btn_text', 'Start building for free'), ('href', 'http://console.cloud.google.com/freetrial?redirectPath=/welcome/'), ('image', None)])]> Why these trainings are critical Harnessing the power of cloud computing and AI is essential for all job roles, not just IT. As more businesses adopt AI, people across business roles utilize this technology every day and often make purchasing decisions about new AI platforms and tools. However, a talent gap remains, and is particularly pronounced for women, who represent about 14% of the global cloud workforce according to recent data from the World Economic Forum. We aim to help our customers reduce this gap, ensure they have access to the skilled experts they need to advance their digital and AI transformations, and give more people opportunities to grow their careers and lead these transformations. Ultimately, those who complete the Google Launchpad for Women program will be well-equipped to achieve the Cloud Digital Leader certification, putting them at the forefront of the cloud and AI era. Google Launchpad for Women is open to all Google Cloud customers, regardless of prior technical experience or role. We welcome women from all professional backgrounds who are eager to develop their cloud skills and advance their careers. While this initiative is specifically focused on women, we invite everyone to participate. Sign up today Visit the links below to learn more about each regional session and contact your sales rep to sign up today. [Americas Session] Google Launchpad for Women [EMEA Session] Google Launchpad for Women [Japan Session] Google Launchpad for Women

  • New Free Digital Course: AWS Well-Architected for Enterprises
    by Ebrahim (EB) Khiyami (AWS Training and Certification Blog) on February 1, 2025 at 1:00 am

    To help organizations implement Well-Architected best practices consistently while they expand, we’ve launched Well-Architected for Enterprises, a new free digital course. Designed for technical professionals who architect, build, and operate AWS solutions at scale, this intermediate level course will help you optimize your cloud architecture while aligning to your business goals.

  • New courses and certification updates from AWS Training and Certification in January 2025
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on January 28, 2025 at 5:00 pm

    In January 2025, we launched 17 new digital training products on AWS Skill Builder including two new AWS Jam Journeys, additional language availability for Exam Prep materials in support of AWS Certified AI Practitioner and AWS Certified Machine Learning Engineer certification exams, as well as a new AWS Builder Lab, Sustainability Strategies with AWS Compute Workload, designed to help identify sustainability strategies using AWS services and tools to optimize usage and costs in an AWS environment.

  • Meet the first graduates of AWS Cloud Institute
    by Carlie Marvel (AWS Training and Certification Blog) on January 17, 2025 at 8:55 pm

    In January 2024, we welcomed the first cohort of learners of AWS Cloud Institute, a comprehensive program designed to equip aspiring cloud builders with the skills needed for high-demand roles in cloud technology. Today, we’re thrilled to celebrate the graduation of this inaugural cohort of AWS Cloud Institute learners, marking a significant milestone in their journey toward launching successful careers in cloud development. Read how the program changed the lives of so many new cloud professionals.

  • Mapping your AI/ML career journey
    by Jim Sinkleris (AWS Training and Certification Blog) on January 6, 2025 at 8:42 pm

    The journey from AWS Certified AI Practitioner to AWS Certified Machine Learning - Specialty offers a structured approach that helps you grow—from understanding fundamental AI concepts to handling complex ML projects in a cloud environment. By using resources like AWS Skill Builder, AWS Educate and the Udemy Business Leadership Academy cohort programs, you can accelerate your learning and stay ahead of the competition in the fast-moving AI/ML landscape.

  • Maximizing your cloud journey: Engaging an AWS Solutions Architect
    by Paige Broderick (AWS Training and Certification Blog) on December 19, 2024 at 10:14 pm

    AWS Solutions Architects are a free resource to customers and partners and wear three crucial hats: technical advisor, customer advocate, and educator. Learn how you can engage AWS Solutions Architects and benefit from their time-tested expertise and best practices across AWS services, industries, and company sizes.

  • Boost your Looker Studio Pro skills with new on-demand course from Google Cloud
    by (Training & Certifications) on December 12, 2024 at 5:00 pm

    Your business data sets you apart from the competition. It fuels your innovations, your culture, and provides all your employees a foundation from which to build and explore. Since 2022, enterprises in all industries have turned to Looker Studio Pro to empower their businesses with self-service dashboards and AI-driven visualizations and insights, complete with advanced enterprise capabilities and Google Cloud technical support. As the Looker community has grown, we’ve gotten more requests for guidance on how users can make their Looker Studio Pro environments even stronger, and tap into more sophisticated features. Those requests have only increased, accelerated by the debut of Studio in Looker, which brings Looker Studio Pro to the broader Looker platform. To help, today we are debuting a new on-demand training course: Looker Studio Pro Essentials. aside_block <ListValue: [StructValue([('title', 'Try Google Cloud for free'), ('body', <wagtail.rich_text.RichText object at 0x3e94836dc430>), ('btn_text', 'Get started for free'), ('href', 'https://console.cloud.google.com/freetrial?redirectPath=/welcome'), ('image', None)])]> Looker Studio Pro connects businesses’ need to govern data access with individual employees’ needs to explore, build and ask questions. This Google Cloud Skills Boost course helps users go beyond the basics of setting up reports and visualizations, and provides a deep dive into Looker Studio Pro’s more powerful features and capabilities. Here's what you can expect to get from this course: Gain a comprehensive understanding of Looker Studio Pro: Explore its key features and functionality, and discover how it elevates your data analysis capabilities. Enhance collaboration: Learn how to create and manage collaborative workspaces, streamline report sharing, and automate report delivery. Schedule and share reports: Learn how to customize scheduling options to your business, including delivery of reports to multiple recipients via Google Chat and email, based on your sharing preferences. Ensure data security and control: Become an expert in user management, audit log monitoring, and other essential administrative tasks that can help you maintain data integrity. Leverage Google Cloud customer care: Learn how to use Google Cloud Customer Care resources to find solutions, report issues, and provide feedback. From your focus, to your employees, to your customers, your business is unique. That’s why we designed this course to bring value to everyone — from sales and marketing professionals, to data analysts, to product innovators — providing them with the knowledge and skills they need to fully leverage Looker Studio Pro in their own environments. Because in the gen AI era, how you leverage your data and invigorate your employees to do more is the true opportunity. Accelerate that opportunity with the new Looker Studio Pro Essentials course today.

  • Unwrap 12 days of training to learn generative AI this December
    by (Training & Certifications) on December 11, 2024 at 4:00 pm

    Tis the season for learning new skills! Get ready for 12 Days of Learning, a festive digital advent calendar packed with courses, hands-on labs, videos, and community opportunities—all designed to boost your generative AI expertise. Discover a new learning resource on Google Cloud’s social channels every day for twelve days this December.  Before you start: Get no-cost access to generative AI courses and labs Join the Innovators community to activate 35 monthly learning credits in Google Cloud Skills Boost at no cost. Use these credits to access courses and labs throughout the month of December—and beyond!  Ready to get started? Review all of the resources below. Get festive with generative AI foundations Learn how to use gen AI in your day-to-day work. These resources are designed for developers looking to gain foundational knowledge in gen AI. A Developer’s Guide to LLMs: In this 10-minute video, explore the exciting world of large language models (LLMs). Discover different AI model options, analyze pricing structures, and delve into essential features. Responsible AI: Fairness & Bias: This course introduces the concepts of responsible AI and shares practical methods to help you implement best practices using Google Cloud products and open source tools. Gemini for end-to-end SDLC: This course explores how Google Cloud's Gemini AI can assist in all stages of the software development lifecycle, from building and debugging web applications to testing and data querying. The course ends with a hands-on lab where you can build practical experience with Gemini. Responsible AI for Developers: Interpretability & Transparency: This course introduces AI interpretability and transparency concepts. Learn how to train a classification model on image data and deploy it to Vertex AI to serve predictions with explanations. Introduction to Security in the World of AI: This course equips security and data protection leaders with strategies to securely manage AI within their organizations. Bring these concepts to life with real-world scenarios from four different industries. aside_block <ListValue: [StructValue([('title', 'Get hands-on experience for free'), ('body', <wagtail.rich_text.RichText object at 0x3e9468abf760>), ('btn_text', 'Start building for free'), ('href', 'http://console.cloud.google.com/freetrial?redirectPath=/welcome/'), ('image', None)])]> Cozy up with gen AI use cases Launch these courses and labs to get more in-depth, hands-on experience with generative AI, from working with Gemini models to building agents and applications.  Build Generative AI Agents with Vertex AI and Flutter: Learn how to develop an app using Flutter and then integrate the app with Gemini. Then use Vertex AI Agent Builder to build and manage AI agents and applications. Machine Learning Ops (MLOps) for Generative AI: In this course, learn how to overcome MLOps challenges and use generative AI to streamline processes. Boost productivity with Gemini in BigQuery: This course provides an overview of features to assist in the data-to-AI workflow, including data exploration and preparation, code generation, workflow discovery, and visualization.  Build Generative AI Apps with Firebase Genkit: Learn how to integrate gen AI features into your applications using Firebase Genkit—from prototyping to production.  Website Modernization with Generative AI on Google Cloud: Transform your website experiences with gen AI with this hands-on course. Learn how to build a generative search experience in an interactive lab. 
Work with Gemini models in BigQuery: Through a practical use case involving customer relationship management, learn how to solve a real-world business problem with Gemini models. Plus, receive step-by-step guidance through coding solutions using SQL queries and Python notebooks. Get a jump-start on your New Year’s resolutions with AI Skills Quest Get an early start on your learning goals by signing up for AI Skills Quest, a monthly learning challenge that puts gen AI on your resume with verifiable skill badges. When you sign up, choose your path based on your level of knowledge: Beginner/Intermediate cohort: Learn fundamental AI concepts, prompt design, and Gemini app development techniques in Vertex AI, plus other Gemini integrations in various technical workflows. Advanced cohort: Already know the basics, but want to add breadth and depth to your AI skills? Sign up for the Advanced path to learn advanced AI concepts like RAG and MLOps. Ready to ring in the new year with new skills? Find more generative AI learning content on Google Cloud Skills Boost.

  • New courses and certification updates from AWS Training and Certification in December 2024
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on December 10, 2024 at 4:53 pm

    In December 2024, we launched nine new digital training products on AWS Skill Builder including five new AWS Builder Labs, a new AWS Jam focused on troubleshooting AWS Web Development issues in a gamified learning environment, and one new AWS Digital Classroom course. We also launched AWS Learning Assistant for AWS Builder Labs, a new AI-powered, chat-based guide that enhances self-paced learning by providing real-time responses and insights to learners.

  • Announcing AWS AI Skills Champions
    by Izabela Milewska (AWS Training and Certification Blog) on December 9, 2024 at 2:58 pm

    This past week at AWS re:Invent, we celebrated the organizations that went above and beyond to certify staff in AI/ML skills. AWS Certification hosted a reception at the AWS Certification Lounge to award AWS AI Skills Champion Trophies to these organizations as AWS AI Certification Early Adopters.

  • Navigate your AWS Certification journey like an AWS pro
    by Vimal Vyas (AWS Training and Certification Blog) on November 26, 2024 at 1:11 am

    We’re both experienced AWS professionals and have witnessed firsthand how cloud technologies can accelerate mission-critical initiatives and solve complex challenges in regulated industries. AWS Certifications have been an important part of our respective career progression and we’re passionate about sharing our certification experiences to help others. This blog outlines our AWS Certification learning journeys, the impact on our careers, and our best practices to prepare for an AWS Certification exam.

  • Accelerate your VMware journey with AWS Training
    by Nidhi Arora (AWS Training and Certification Blog) on November 18, 2024 at 11:46 pm

    AWS is committed to supporting customers in their transition to the cloud by offering a comprehensive training and resources for migrating VMware workloads to the AWS Cloud. Our training portfolio covers every stage of the migration process - from initial planning/migration, to modernization, and finally managing VMware workloads on AWS.

  • Upskill your team for generative AI projects with AWS Training
    by Kumar Kumaraguruparan (AWS Training and Certification Blog) on November 8, 2024 at 6:12 pm

    As organizations move from dabbling in generative artificial intelligence (AI) to building customer-facing applications, having a skilled team is crucial for project success. You can leverage AWS Training courses to upskill your team, ensuring they are prepared for your next generative AI project.

  • Beyond the basics: Build real-world gen AI skills with the latest learning paths from Google Cloud
    by (Training & Certifications) on October 16, 2024 at 1:00 pm

    November 15, 2024: The Cloud Innovators Plus program has evolved and is now the premium tier of the Google Developer Program. The majority of organizations don’t feel ready for the AI era. In fact, 62% say they don’t have the expertise they need to unlock AI’s full potential.1 As the leader of learning for Google Cloud, the only thing that surprises me about that number is how low it is. I meet with customers every day, and 100% of them flag some kind of AI skills gap.  Here’s the good news: that makes you — the developers, machine learning engineers, and data experts — invaluable. You are exactly the talent these organizations are looking for — but with the rate of change in AI, you have to stay on the cutting edge. A 2024 survey estimated that about 70% of AI talent needs to update their skills.2 And yet, many technical professionals don’t have the training they need to move from theory to practice and integrate AI into their everyday work.  That’s why, today, I’m proud to share with you the latest learning offerings on generative AI from Google Cloud Skills Boost. Say hello to four new learning paths designed to equip developers with real-world generative AI skills to build applications, manage and secure machine learning models, generate impactful content, and analyze data like a pro. We’re talking in-depth courses that first guide you through building proficiency and then, ultimately, test your skills in a real-life challenge lab. Get practical experience with gen AI use cases Generative AI is powerful, but to actually see value from this technology in real-world use cases takes practical experience and technical knowledge. These new learning paths from Google Cloud, listed below, give you the generative AI skills you need to complete innovative work in your current roles — like improving customer experience or team productivity. This also opens up new career (and promotion) opportunities. Once you complete the hands-on training, you will receive a skill badge to showcase your expertise on your resume or social media channels:  Learning Path: Build and modernize applications with generative AI: Learn how to enhance your projects and build end-to-end applications on Google Cloud with the power of generative AI. This path will guide you through essential techniques and tools to integrate gen AI capabilities seamlessly into your development workflow. Learning Path: Integrate generative AI into your data workflow: Learn how to use BigQuery Machine Learning for inference, work directly with Gemini models in BigQuery, and enhance your data team’s efficiency with Gemini's assistance. This path features a brand new course on boosting productivity with Gemini in BigQuery to aid in the data-to-AI pipeline. Learning Path: Deploy and manage generative AI models: Learn how to manage the entire lifecycle of generative AI models, from development and deployment to monitoring — including introductions to responsible AI for developers. This path features a brand new course on security for AI models. Learning Path: Generate smarter generative AI outputs: Learn how to build applications that generate text and visual content using generative AI. Develop an AI project on Google Cloud, use diffusion models for image generation, and build search applications with Vector Search and embeddings, then dive deeper into multimodal prompts and Multimodal RAG with Gemini. 
aside_block <ListValue: [StructValue([('title', 'What is a learning path and how do I get started?'), ('body', <wagtail.rich_text.RichText object at 0x3e94858efa00>), ('btn_text', ''), ('href', ''), ('image', None)])]> Innovators get full, no-cost access to gen AI learning paths Earlier this year, we announced that every member of the Google Cloud Innovators community, our no-cost developer program, receives 35 unrestricted learning credits every month to use on courses and labs in Google Cloud Skills Boost. That’s enough to complete one of these new learning paths every month. Join Innovators today to activate your credits and start learning! Innovators also get access to exclusive learning opportunities — like our latest challenge, AI Skills Quest, where you can immerse yourself in hands-on labs and earn skill AI badges alongside a cohort of like-minded peers. The latest gen AI learning content — from security to productivity  While you’re sharing your AI skills in the Innovators community, why not check out the very latest in learning? These new learning paths feature three brand new courses in security, data analytics and agent building — all hot off the presses in Google Cloud Skills Boost:  Course: Introduction to security in the world of AI. Whether you’re a security engineer, an IT leader, an AI developer, or a less technical leader, this course will help you build a foundational understanding of Google’s approach to navigating the intersection of AI and security.  Course: Boost productivity with Gemini in BigQuery. Accelerate your data-to-AI pipeline, write better code, and visualize workflows easily with Gemini in BigQuery. This course equips you or your team with practical skills to boost productivity and unlock the full potential of your data. Course: Build generative AI agents with Vertex AI and Flutter. Whether you're a seasoned app developer or just starting your journey with Flutter and Python, this course will help you build intelligent chat agents and fun, interactive experiences through generative AI.  Go from gen AI theory to practice Google has been at the forefront of AI innovation for over a decade. These learning paths are your direct line to that expertise, crafted by the very people who shaped the field. My team is committed to empowering you with the skills to lead in this exciting era. Because ultimately, we all have a shared goal: to build AI-driven solutions that are responsible, fun to use, and — above all — genuinely improve people’s lives.  Join the Google Cloud Innovators program, get your 35 free credits every month, and dive head first into our new generative AI learning paths today. 1.  Help Net Security, “The cloud skills gap is digital transformation's Achilles' heel,” Nov 14, 20232. Pluralsight, “Pluralsight AI skills report,” 2024

  • Launch your cloud career: A no-cost training and certification program for veterans
    by (Training & Certifications) on September 23, 2024 at 1:00 pm

    My father dedicated over 40 years to active duty in the Navy, and with my mother, instilled a strong sense of purpose in me and my two sisters. So joining the Navy felt like a natural choice, taking my oath alongside other young recruits who also valued the importance of having a purpose. As a woman, the leadership skills I gained in the Navy proved invaluable in navigating industries with underrepresented groups. Fast forward to today, and I'm proud to be part of Google Public Sector. My journey has shown me that veterans have so much to offer, yet the transition back to civilian life can be challenging. Research confirms that veterans, despite their qualifications and strong leadership abilities, are often undervalued in the civilian workforce. Google Cloud: A commitment to veterans At Google Cloud, we're determined to change this narrative. We believe veterans deserve a clear path to high-paying careers in cloud and AI. The demand for skilled professionals to lead digital transformations is high, and veterans have the dedication and leadership qualities to excel in these roles. My Navy experience, coupled with my technical background, showed me the impact of helping others and serving my country. I keep this in mind both at Google Public Sector and in my interactions with customers. That's why I'm proud to announce the launch of an important new program. Introducing Google Cloud Launchpad for Veterans  Google Cloud Launchpad for Veterans is a no-cost training and certification journey. It is designed to equip veterans in all roles and at all levels with the cloud knowledge and skills they need to drive innovation, and contribute to their current or future employer’s digital transformation strategy.  The three-week journey kicks off with a two-day virtual ‘Cloud Digital Leader’ training event on November 7th and 8th, delivered by ROI Training instructor and U.S. Marine Corp veteran Patrick Haggerty. You’ll enjoy interactive training sessions and a panel discussion with veterans from Google. After the virtual training event, you’ll receive a complimentary voucher for the Cloud Digital Leader exam. Attendees are encouraged to take the exam between November 15th - December 31st, 2024. (The first 500 to pass the exam will receive a voucher for their very own Google socks!) If you need extra practice, we're also offering optional exam prep sessions on November 15th and 22nd. This program goes beyond just certification. You'll gain the confidence to explain cloud fundamentals, identify the right Google Cloud solutions, and leverage cloud technology to drive innovation. You'll understand how to modernize infrastructure and applications, and you'll learn the essentials of cloud operations and security. Register today You served us, now let us serve you with a path to rewarding cloud and AI careers. Register today and translate your military experience to a powerful career in cloud.

  • The top AI courses for a summer of learning with Google Cloud
    by (Training & Certifications) on August 14, 2024 at 4:00 pm

    aside_block <ListValue: [StructValue([('title', 'Get hands-on generative AI experience for free'), ('body', <wagtail.rich_text.RichText object at 0x3e94837ce160>), ('btn_text', 'Get started for free'), ('href', 'https://console.cloud.google.com/freetrial?redirectPath=/vertex-ai/generative'), ('image', None)])]> November 15, 2024: The Cloud Innovators Plus program has evolved and is now the premium tier of the Google Developer Program. Summer's well on its way, and it feels like it’s time for a road trip! But instead of just cruising down the highway, why not embark on a journey that supercharges your AI skills? Generative AI isn't just a buzzword; it's transforming industries. With Vertex AI, you can build applications that tailor experiences for users, automate processes and order flows, and enrich data alongside BigQuery and Cloud Run. That being said, I'm always on the lookout for helpful resources on building Gen AI products to share with my community. To help you make the most of this summer, I've crafted a learning roadmap using Google Cloud Skills Boost. It's designed to guide you from AI curiosity to capability, equipping you with the skills needed to excel in this dynamic field. So, are you ready for a summer learning journey?  Phone, check.  Keys, check.  Learning credits?   It costs nothing to join the no-cost Google Cloud Innovators program, where you receive 35 learning credits each month to use on courses, labs, and skill badges in Skills Boost. This means all the stops on our summer learning road trip are accessible to you at no cost. All right, let’s hit the road!   First stop is a low-code approach  These initial training courses lay the groundwork for understanding generative AI, from its core concepts to the responsible development of large language models (LLMs). You'll explore Google's tools for building your own AI applications and master the art of crafting effective prompts in Vertex AI. Training 1: Introduction to Generative AI: Get acquainted with the fundamental concepts of generative AI, and how to use it as a developer. Training 2: Introduction to Large Language Models (LLMs): Delve deeper into the world of LLMs, their applications, and the Google tools you can use to develop your own Generative AI apps. Training 3: Introduction to Responsible AI: It's not just about the tech itself; it's about responsible innovation. Learn to create AI systems that are fair, unbiased, and socially conscious. aside_block <ListValue: [StructValue([('title', "Debi's Pro Tip:"), ('body', <wagtail.rich_text.RichText object at 0x3e94837ce430>), ('btn_text', ''), ('href', ''), ('image', None)])]> Training 4: Prompt Design in Vertex AI: Learn prompt engineering, image analysis, and multimodal generative techniques, within Vertex AI. aside_block <ListValue: [StructValue([('title', "Debi's Pro Tip:"), ('body', <wagtail.rich_text.RichText object at 0x3e94837ce4c0>), ('btn_text', ''), ('href', ''), ('image', None)])]> Shift into high gear with AI engineering This section takes you beyond the basics, diving into the powerful tools and techniques that drive AI engineering. You'll gain hands-on experience building applications with Gemini and Streamlit, explore the fascinating world of image generation, and unlock the full potential of multimodal AI with Gemini. Training 5: Introduction to Vertex AI Studio: Familiarize yourself with Vertex AI Studio, your control center for building Gemini multimodal applications, designing prompts, and fine-tuning models. 
Training 6: Develop GenAI Apps with Gemini and Streamlit: Build interactive, user-friendly apps powered by Gemini with the Vertex AI Gemini API and Python SDK, and learn how to deploy a Steamlit app integrated with Gemini on Cloud Run aside_block <ListValue: [StructValue([('title', "Debi's Pro Tip:"), ('body', <wagtail.rich_text.RichText object at 0x3e94837ceb20>), ('btn_text', ''), ('href', ''), ('image', None)])]> Training 7: Introduction to Image Generation: Discover how to generate images with AI using diffusion models, and how to train and deploy them on Vertex AI. Training 8: Explore Generative AI with the Vertex AI Gemini API: Learn text generation, image and video analysis for content creation, and function calling techniques within the Gemini API for Vertex AI. Training 9: Multimodality with Gemini: Harness the power of multimodal prompts to extract insights from text and visual data. Generate video descriptions and uncover hidden details in videos. Navigate machine learning and hit the gas At this stop you'll learn how to harness the power of Vertex AI and BigQuery to build, deploy, and leverage machine learning models, extracting valuable insights from vast datasets. Training 10: Build and Deploy Machine Learning Solutions on Vertex AI: Turn your ideas into reality. Learn how to take your models from concept to deployment using Vertex AI and AutoML. Training 11: BigQuery for Machine Learning: BigQuery is more than just a data warehouse. Leverage its vast datasets to build, train, evaluate, and predict with your own machine learning models. aside_block <ListValue: [StructValue([('title', "Debi's Pro Tip:"), ('body', <wagtail.rich_text.RichText object at 0x3e94837ce790>), ('btn_text', ''), ('href', ''), ('image', None)])]> Your no-cost AI summer road trip starts now Embarking on your AI summer learning road trip has never been easier. Remember, the Google Cloud Innovators program is a no-cost way to receive 35 learning credits each month to use on these courses, labs, and skill badges in Skills Boost.  As you level up your skills, be sure to share your progress with the world! Earn skill badges by completing courses and labs and validate them through Credly to proudly display your progress on your preferred professional social network.  Don’t forget to take a pit stop at the Google Cloud Skills Boost Arcade, where you can translate your progress into exciting badges and exclusive prizes.  This summer, let Google Cloud Skills Boost be your compass as you skill up in AI. Happy learning!

  • Modern SecOps Masterclass: Now Available on Coursera
    by (Training & Certifications) on July 18, 2024 at 4:00 pm

    Security practitioners constantly need to rethink and refine their approaches to defending their organization. Staying ahead requires innovation, continuous improvement, and a mindset shift away from siloed operations into building end-to-end solutions against threats.  Today, Google Cloud is excited to announce the launch of the Modern SecOps (MSO) course, a six-week, platform-agnostic education program designed to equip security professionals with the latest skills and knowledge to help modernize their security operations, based on our Autonomic Security Operations framework and Continuous Detection, Continuous Response (CD/CR) methodology.  Introducing Modern Security Operations Course The Modern Security Operations course provides a comprehensive curriculum that addresses the core challenges faced by today’s security operations teams, predominantly focused on improving people and processes. Developed in collaboration with ROI Training, Netenrich, and other leading industry experts, this course offers practical insights and hands-on experience to help organizations transform their Security Operations Centers (SOCs). To learn more about ROI Training and our Google Cloud courses, see their catalog here. To learn more about Netenrich and their approach towards Autonomic Security Operations, see their case studies here. "Autonomic Security is the guiding star for transforming Security Operations Centers, and we're thrilled to partner with Google Cloud to develop this course. Netenrich Adaptive MDR, built on the ASO framework, exemplifies our commitment to pioneering autonomic security solutions,” said Raju Chekuri, CEO, Netenrich. “By implementing ASO both internally and for our customers, we're turning the vision of autonomic security into reality." Course highlights The MSO course’s six-week curriculum focuses on:  Modernizing Cyber Threat management: Gain an understanding of the evolving cybersecurity landscape and the future of security operations. SecOps 101: Learn the fundamental concepts and components of Security Operations, including detection, triage, and incident response. Principles of Autonomic Security Operations: Discover how to apply lessons from DevOps and Site Reliability Engineering to SecOps. Continuous Detection and Continuous Response (CD/CR): Implement agile methodologies to reduce toil, improve threat management and response capabilities. Modern SecOps Maturity Discovery Tool: Use our MSO Discovery tool to benchmark your organization's maturity against the CD/CR methodology. This course is tailored for: Security Operations Analysts who want to enhance their threat detection and response skills. SOC managers who are eager to learn how to modernize and streamline their Security Operations Center. CISOs who are looking to gain strategic insights to transform their organization’s security operations. Participants in the course will gain access to a wealth of knowledge and practical tools that can help streamline security operations through automation; address and overcome technology and process challenges;and achieve significant improvements in operational efficiency and effectiveness. Complimenting your training with Google SecOps In the generative AI era, security teams require fully operational, high-performing solutions that drive productivity and empower defenders. Google Security Operations is a unified, intelligence-driven and AI-powered platform designed to simplify threat detection, investigation, and response.  
Our platform can help reduce the complexity of SecOps and enhance the productivity of Security Operations Centers, and it features innovations such as frontline threat intelligence, Gemini, Investigation Assistant, Playbook Assistant, and autonomous parsers. These advanced capabilities can enable security teams to uncover threats with less effort, streamline workflows, and accelerate their journey toward modern SecOps. You can explore how our platform can help you realize these benefits faster here.

Enroll today

Take the first step towards transforming your security operations: learn more and register for the Modern Security Operations course.

  • 5 more myths about platform engineering: how it’s built, what it does, and what it doesn’t
    by (Training & Certifications) on June 6, 2024 at 4:00 pm

    In an earlier post, we discussed some persistent myths about platform engineering — what it is, what it isn't, and ways in which you're already performing core platform engineering tasks. Here, we will cover five more myths, this time about how platforms are built, what they do, and what they don't.

6. MYTH: Platform engineering eliminates the need for infrastructure teams

Even if you have the best developer platform on the planet, it still runs on top of complex infrastructure that will always require ongoing maintenance by specialists who understand it. After all, someone needs to architect, manage, scale, troubleshoot, and optimize that infrastructure. And try as you might, that infrastructure will continue to fail just as it did before you introduced platform engineering. A common mistake is to eliminate the infrastructure team and expect a totally new team to make up for that loss. Infrastructure teams already have the expertise to handle these responsibilities and, as such, are good candidates to become platform engineers. By using the team with institutional knowledge of the underlying infrastructure, you're more likely to adapt your current system into a viable, engineered platform. What platform engineering does change, however, is how infrastructure specialists prepare for and respond to failures: the platform engineering role is more focused on platform development and less on manual, repetitive tasks.

So while platform engineering changes the nature of infrastructure work, it doesn't eliminate it altogether. You still need to build a self-service catalog of golden paths that developers can select to deploy their applications. That catalog needs to be documented and refined, advocated for within the organization, and introduced to new engineers. Improvements to the platform also need to be rolled out to existing tenants. Scale and security are always a source of new issues. Infrastructure experts are extremely valuable members of any IT staff; allowing them to codify their knowledge into a platform is essential for an organization looking to succeed at software delivery. Finally, even the most mature platforms have components that fall outside the scope of automation, and infrastructure experts will still be responsible for them. And that's OK, because they understand this work firsthand, so they are better able to prioritize which features to add to the platform engineering product backlog. New systems come online and evolve, cloud providers expand their offerings, but the platform is never done.

7. MYTH: Introducing platform engineering will dramatically impact staffing costs

Part of building a platform engineering team is taking the people with the most DevOps skills within an organization and evolving them into the new structure. This allows them and the organization to better apply DevOps principles with fewer people, using self-service automation and golden paths. A common concern is that a platform engineering team will require a lot of additional personnel. A platform engineering team indeed needs to be staffed, but that staff can come from existing operations and software engineering teams. Over time, the resulting platform should more than pay for itself through gains from shared services. In other words, the platform engineering team is an investment that you can fund from existing in-house teams.
One model to consider is Google SRE's history of sublinear scaling, where the teams responsible for ensuring availability set objectives to grow their headcount at a lower rate than the system they run. When introducing platform engineering, an antipattern would be to expect a reduction of operations staff or developers out of the gate. Retraining existing teams works well because they're already familiar with your business needs and have a lot of experience with the underlying infrastructure, whether it's exposed directly or via a platform. In fact, we observe that teams that adopt platform engineering end up finding that more work can be done by the same individuals because there's a platform that they can leverage. When implemented correctly, platform engineering increases efficiency and reduces operational overhead:

  • Automating deployment pipelines, infrastructure, and configuration reduces manual work.
  • The self-service model reduces bottlenecks, as there's minimal intervention required from operations teams.
  • Workflows become streamlined, allowing teams to do more with the same (or potentially even fewer) resources.

And while you may need to do some initial upskilling or hiring, over time the transition to platform engineering unlocks long-term efficiency by applying platform expertise across the organization.

8. MYTH: Adopting platform engineering today will quickly solve all my biggest problems

In any complex environment, hoping for a quick fix is almost always wishful thinking. Change takes time, and the timeline for that change needs to account for identifying your organization's constraints and how quickly it can curate relevant solutions. Nor is there a one-size-fits-all approach to platform engineering; it needs to be tailored to meet the specific needs of your organization. However, you *can* achieve faster results by building out a minimal viable platform (MVP), starting from a subset of your user base and creating a fast feedback loop. Starting with some pre-made MVPs can help to bootstrap a team, but it is important not to make the mistake of thinking that you can "buy your way out of this" by adopting a fully built platform and improving immediately. Investment, research, and introspection are the right path forward here. By starting with an MVP and adding capabilities based on early adopters' feedback, you can iteratively build a platform that starts delivering value quickly. Don't try to design the perfect platform with a five-year plan.

In short, platform engineering is a journey that requires a change in mindset across development and operations, a cultural shift to embrace the platform, golden path engineering, and tooling to address friction in the development process. All of this takes time to get right.

9. MYTH: You should apply platform engineering practices to every application

Platform engineers actively analyze and identify tasks or processes that create a high cognitive load on development and operations teams, taking targeted actions to alleviate the burden. That does not describe all tasks and processes within a software delivery organization. As such, consider applying platform engineering to applications where developers are overwhelmed by infrastructure complexities, or where the operations team faces constant friction. In these situations, a "golden path" approach can streamline development and management. This typically involves selecting suitable cloud services, automating deployments, and establishing standardized configurations.
First, focus on abstracting the things that have the highest usage and toil, i.e., services that both carry a high cognitive load and are frequently used. Prioritizing these systems allows the benefits of the platform to be realized sooner. Make sure your abstractions provide value and sensible defaults, along with guidance and explanations for why you made certain choices. Having "break-glass" methods for stepping outside the platform if needed is highly encouraged. At least initially, think in terms of building a platform for depth rather than breadth. Satisfy and automate common use cases as completely as you can before moving on to new ones.

Similarly, don't start with the biggest, most important service in your organization. An antipattern is to adopt the "biggest bang" application first to maximize gain over time. This is likely to fail, as teams haven't had time to develop confidence in your nascent platform, or the platform doesn't yet have the requisite capabilities. Instead, start with smaller, less-demanding services. A team doesn't need to deploy every service when adopting the platform. You can aim to adopt some large percentage of them, but there will always be "strays" that might require a separate approach. As long as the discussion happens and is documented for future re-evaluation, don't worry too much about that.

10. MYTH: All cloud services map to platform engineering

When people begin their platform engineering journey, they often ask us, "Does this cloud service map to platform engineering?" Don't mistake adopting a cloud service for practicing platform engineering. This misunderstanding hinders effective implementation and suggests an unclear understanding of what platform engineering actually is. While you can use any cloud service with platform engineering, what matters is how you integrate that cloud service into your developer experience through the platform. Let's briefly revisit core platform engineering practices and processes, so that you can decide for yourself whether a cloud service or product is a fit for your platform.

DevOps practices used for platform engineering, with example processes:

  1. Developer-centric approach: measuring developer experience (DX), golden paths, self-service capabilities
  2. Automation and Infrastructure as Code (IaC): automate everything, Infrastructure as Code tooling
  3. Security and compliance: security by design, guardrails, Compliance as Code
  4. Observability: centralized monitoring, alerting, troubleshooting tools
  5. Continuous improvement: metrics-driven approach, feedback loops, learning from incidents

Your next steps with platform engineering

Over the course of this blog, you've learned that platform engineering is a new approach to managing IT infrastructure and software development. It aims to streamline the software development process by providing developers with self-service tools and platforms, abstracting away complex infrastructure details, and automating repetitive tasks. While it builds on existing practices like DevOps and automation, it is worth considering on its own to ensure the most benefit for teams.

Key takeaways:

  • Platform engineering is a natural evolution of DevOps, aiming to address the challenges of modern software development at scale.
  • It's not a one-size-fits-all solution and requires a tailored approach to meet your organization's specific needs.
  • Start small with a minimal viable platform, prioritize high-value tasks, and iterate based on feedback to build a platform that truly delivers value to your developers and organization.

Keep reading about golden paths and laying the foundation for a career in platform engineering. Also check out recorded talks from PlatformCon. Last but not least, be sure to contribute to the annual DORA survey!

  • All Google Cloud courses and labs are now available at no cost through Innovators
    by (Training & Certifications) on June 4, 2024 at 4:00 pm

    November 15, 2024: The Cloud Innovators Plus program has evolved and is now the premium tier of the Google Developer Program.

As the managing director of Google Cloud Learning, I see firsthand how cloud developers with a drive for continued education come out on top. Google Cloud Certified professionals are the highest paid in the industry; Google offers 7 of the 10 top-paying IT certifications globally.1 We want to make it easier than ever for you to tap into that potential to earn more. That's why, this year at I/O, Google announced that every member of the Google Cloud Innovators community, our no-cost developer program, is now granted 35 unrestricted learning credits every month to use on courses and hands-on labs through Google Cloud Skills Boost. If you haven't already, you can join Innovators today to get started. The best part: these credits will continue to renew every month, so you can keep learning and earning skill badges.

Learn your way, at your pace

With credits that renew each month, you can dive deep into specific areas of interest or explore a variety of on-demand topics to expand your knowledge. Whether you want to become proficient in generative AI or gain a broader understanding of cloud technologies, the choice is yours. Courses on Google Cloud Skills Boost are on-demand and feature hands-on labs, so you can gain the skills you need to tackle real-world challenges and make an impact in your career.

Showcase your skills with shareable skill badges

It's never been easier to get credentialed in Google Cloud tech. Google Cloud skill badges are designed for developers of all levels and cover a wide range of topics, from generative AI and data engineering to security. When you join the no-cost Innovators community, you'll gain enough learning credits to earn a Google-verified skill badge every month. As you build your cloud skills, stand out by sharing your skill badges with your professional network through Credly, where your credentials are officially verified by Google and collected in your mobile wallet.

Become an Innovator today

If you're looking for a community where you can learn and grow your cloud skills, join Google Cloud Innovators. As an Innovator, you can network with peers and stay up to date in the evolving world of cloud technology, at no cost. Become a member and start your learning journey today. You'll be automatically granted 35 learning credits for Google Cloud Skills Boost as soon as you join.

1. Skillsoft "IT Skills & Salary Report," 18th Edition, 2023

  • Google Cloud offers new AI, cybersecurity, and data analytics training to unlock job opportunities
    by (Training & Certifications) on April 15, 2024 at 2:30 pm

    Google Cloud is on a mission to help everyone build the skills they need for in-demand cloud jobs. Today, we're excited to announce new learning opportunities that will help you gain these in-demand skills through new courses and certificates in AI, data analytics, and cybersecurity. Even better, we're hearing from Google Cloud customers that they are eager to consider certificate completers for roles they're actively hiring for, so don't delay and start your learning today.

Google Cloud offers new generative AI courses

Introduction to Generative AI

Demand for AI skills is exploding in the market. There has been a staggering 21x increase in job postings that include AI technologies in 2023.1 To help prepare you for these roles, we're announcing new generative AI courses on YouTube and Google Cloud Skills Boost, from introductory level to advanced. Once you complete the hands-on training, you can show off your new skill badges to employers.

  • Introductory (no cost!): This training will get you started with the basics of generative AI and responsible AI.
  • Intermediate: For application developers; you will learn how to use Gemini for Google Cloud to work faster across networking, security, and infrastructure.
  • Advanced: For AI/ML engineers; you will learn how to integrate multimodal prompts in Gemini into your workflow.

New AI-powered, employer-recognized Google Cloud Certificates

Gen AI has triggered massive demand for skilling, especially in the areas of cybersecurity and data analytics,2 where there are significant employment opportunities. In the U.S. alone:

  • There are over 505,000 open entry-level roles3 related to a Cloud Cybersecurity Analyst, with a median annual salary of $135,000.4
  • There are more than 725,000 open entry-level roles5 related to a Cloud Data Analyst, with a median annual salary of $85,700.6

Building on the success of the Grow with Google Career Certificates, our new Google Cloud Certificates in Data Analytics and Cybersecurity can help prepare you for these high-growth, entry-level cloud jobs.

A gen AI-powered learning journey

What better way to understand just how much AI can do for you than integrating it into your learning journey? You'll get no-cost access to generative AI tools throughout your learning experience. For example, you can put your skills to use and rock your interviews with Interview Warmup, Google's gen AI-powered interview prep tool.

Talent acquisition, reimagined

And while we're at it, we'll help connect you to jobs. Our new Google Cloud Affiliate Employer program unlocks access for certificate completers to apply for jobs with some top cloud employers, like the U.S. Department of the Treasury, Rackspace, and Jack Henry. We're also taking it one step further: together with the employers in the affiliate program, we're helping reimagine talent acquisition through a new skills-based hiring effort. This new initiative uses Google Cloud technology to help move certificate completers through the hiring process. Here's how it works: certificate completers in select hiring locations will have the chance to take custom labs that represent on-the-job scenarios, specific to each employer partner.
These labs will be considered the first stage in their hiring process. By matching candidates with the right skills to the right jobs, this initiative marks a major step forward in creating more access to job opportunities for cloud employers. The U.S. Department of the Treasury will start using these new Google Cloud Certificates and labs for cyber and data analytics talent identification across the federal agency, per President Biden's Executive Order on AI.

"In an age of rapid innovation and adoption of new technology offering the promise of improved productivity, it is imperative that we equip every worker with accessible training and development opportunities to understand and apply this new technology. We are partnering with Google to provide the new Cloud Certificates training for our current and future employees to accelerate their careers in cybersecurity and data analytics." - Todd Conklin, Chief AI Officer and Deputy Assistant Secretary of Cybersecurity and Critical Infrastructure Protection, U.S. Department of the Treasury

No-cost access for higher education institutions worldwide

To expand access to these programs, educational institutions, as well as government and nonprofit workforce development programs across the globe, can offer these new certificates and gen AI courses at no cost. Learn more and apply today. And in the U.S., learners who successfully complete a Google Cloud Certificate can apply for college credit,7 for a faster and more affordable pathway to a degree.

"Purdue Global's students have benefited greatly from the strong working relationship between Purdue Global and Google. Together, they were the pioneers in stacking Grow with Google certificates into four types of degree-earning credit certificates over the past two years. We believe these new Google Cloud Cybersecurity and Data Analytics Certificates will equip our working adult learners with the essential skills to move forward and succeed in today's cloud-driven market." - Frank Dooley, Chancellor of Purdue Global

Take the next steps to upskill and identify cloud-skilled talent

We're helping to prepare new-to-industry talent for the most in-demand cloud jobs, expanding access to these opportunities globally, and pioneering a skills-based hiring effort with employers eager to hire them. Here's how you can get started:

  • Learners: Preview the courses and certificates on Google Cloud YouTube and earn the full credential on Google Cloud Skills Boost to give yourself a head start in the race to hire AI talent.
  • Higher education institutions and government/nonprofit workforce programs: Apply today to skill up your workforce at no cost.
  • Employers: Express interest in becoming a Google Cloud Affiliate Employer and be considered for our skills-based hiring pilot to connect with cloud-skilled talent.

1. LinkedIn, Future of Work Report (2023)
2. CompTIA Survey (Feb 2024)
3. U.S. Bureau of Labor Statistics (2024)
4. (ISC)2 Cybersecurity Workforce Study (2022)
5. U.S. Bureau of Labor Statistics (2024)
6. U.S. Bureau of Labor Statistics (2024)
7. The Google Cloud Certificates offer a recommendation from the American Council on Education® of up to 10 college credits.


Top-paying Cloud certifications:

Google Certified Professional Cloud Architect – $175,761/year
AWS Certified Solutions Architect – Associate – $149,446/year
Google Cloud Associate Cloud Engineer – $145,769/year
Microsoft Azure Cloud Solution Architect – $141,748/year
AWS Certified Cloud Practitioner – $131,465/year
Microsoft Certified: Azure Fundamentals – $126,653/year
Microsoft Certified: Azure Administrator Associate – $125,993/year

Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03


Djamgatech: Multilingual and Platform Independent Cloud Certification and Education App for AWS, Azure, Google Cloud

Djamgatech: AI Driven Continuing Education and Certification Preparation Platform


Djamgatech is the ultimate Cloud Education Certification App. It is an EduFlix App for AWS, Azure, Google Cloud Certification Prep, School Subjects, Python, Math, SAT, etc. [Android, iOS]

Technology is changing and moving towards the cloud. The cloud will power most businesses in the coming years, yet it is not taught in schools. How do we ensure that our children, our youth, and ourselves are best prepared for this challenge?

Building mobile educational apps that work offline and on any device can help greatly in that sense.

The ability to tap a button and learn cloud fundamentals and take quizzes is a great opportunity to help our children and youth boost their job prospects and be more productive at work.

The App covers the following certifications :
AWS Cloud Practitioner Exam Prep CCP CLF-C01, Azure Fundamentals AZ 900 Exam Prep, AWS Certified Solution Architect Associate SAA-C02 Exam Prep, AWS Certified Developer Associate DVA-C01 Exam Prep, Azure Administrator AZ 104 Exam Prep, Google Associate Cloud Engineer Exam Prep, Data Analytics for AWS DAS-C01, Machine Learning for AWS and Google, AWS Certified Security – Specialty (SCS-C01), AWS Certified Machine Learning – Specialty (MLS-C01), Google Cloud Professional Machine Learning Engineer and more… [Android, iOS]

Djamgatech: Multilingual and Platform Independent Cloud Certification and Education App for AWS, Azure, Google Cloud

The App covers the following cloud categories:

AWS Technology, AWS Security and Compliance, AWS Cloud Concepts, AWS Billing and Pricing, AWS Design High Performing Architectures, AWS Design Cost Optimized Architectures, AWS Specify Secure Applications And Architectures, AWS Design Resilient Architecture, Development With AWS, AWS Deployment, AWS Security, AWS Monitoring, AWS Troubleshooting, AWS Refactoring, Azure Pricing and Support, Azure Cloud Concepts, Azure Identity, governance, and compliance, Azure Services, Implement and Manage Azure Storage, Deploy and Manage Azure Compute Resources, Configure and Manage Azure Networking Services, Monitor and Backup Azure Resources, GCP Plan and configure a cloud solution, GCP Deploy and implement a cloud solution, GCP Ensure successful operation of a cloud solution, GCP Configure access and security, GCP Setting up a cloud solution environment, AWS Incident Response, AWS Logging and Monitoring, AWS Infrastructure Security, AWS Identity and Access Management, AWS Data Protection, AWS Data Engineering, AWS Exploratory Data Analysis, AWS Modeling, AWS Machine Learning Implementation and Operations, GCP Frame ML problems, GCP Architect ML solutions, GCP Prepare and process data, GCP Develop ML models, GCP Automate & orchestrate ML pipelines, GCP Monitor, optimize, and maintain ML solutions, etc. [Android, iOS]

Cloud Education and Certification


Custom AI Chatbot

Imagine a 24/7 virtual assistant that never sleeps, always ready to serve customers with instant, accurate responses.

Contact us here to book a demo and receive a personalized value proposition



GeoVision AI

We combine the power of GIS and AI to deliver instant, actionable intelligence for organizations that rely on real-time data gathering. Our unique solution leverages 🍇 GIS best practices and 🍉 Power Automate for GIS integration to collect field data—texts, photos, and geolocation—seamlessly. Then, through 🍊 Generative AI for image analysis, we deliver immediate insights and recommendations right to your team’s inbox and chat tools.

Contact us here to book a demo and receive a personalized value proposition


The App covers the following cloud services, frameworks, and technologies:

AWS: VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, CodePipeline, CodeDeploy, TCO Calculator, SES, EBS, ELB, AWS Auto Scaling, RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Braket, AWS Billing and Pricing, Simple Monthly Calculator, cost calculator, EC2 on-demand pricing, IAM, AWS Pricing, Pay As You Go, No Upfront Cost, Cost Explorer, AWS Organizations, consolidated billing, Instance Scheduler, on-demand instances, Reserved Instances, Spot Instances, CloudFront, WorkSpaces, S3 storage classes, Regions, Availability Zones, Placement Groups, Amazon Lightsail, Redshift, EC2 G4ad instances, DaaS, PaaS, IaaS, SaaS, NaaS, Machine Learning, key pairs, AWS CloudFormation, Amazon Macie, Amazon Textract, Glacier Deep Archive, 99.999999999% durability, AWS CodeStar, Amazon Neptune, S3 buckets, EMR, SNS, Desktop as a Service, Amazon EC2 for Mac, Aurora PostgreSQL, Kubernetes, containers, clusters.

Azure: Virtual Machines, Azure App Service, Azure Container Instances (ACI), Azure Kubernetes Service (AKS), and Windows Virtual Desktop; Virtual Networks, VPN Gateway, virtual network peering, and ExpressRoute; Container (Blob) Storage, Disk Storage, File Storage, and storage tiers; Cosmos DB, Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, and SQL Managed Instance; Azure Marketplace; the Azure consumption-based model; management groups, resources, and resource groups; geographic distribution concepts such as Azure regions, region pairs, and Availability Zones; Internet of Things (IoT) Hub, IoT Central, and Azure Sphere; Azure Synapse Analytics, HDInsight, and Azure Databricks; Azure Machine Learning, Cognitive Services, and Azure Bot Service; serverless computing solutions including Azure Functions and Logic Apps; Azure DevOps, GitHub, GitHub Actions, and Azure DevTest Labs; Azure Mobile; Azure Advisor; Azure Resource Manager (ARM) templates; Azure security, privacy, and workloads; general security and network security; Azure security features; Azure Security Center, policy compliance, security alerts, secure score, and resource hygiene; Key Vault; Azure Sentinel; Azure Dedicated Hosts; the concept of defense in depth; NSGs, Azure Firewall, and Azure DDoS Protection; identity and governance; Conditional Access, Multi-Factor Authentication (MFA), and Single Sign-On (SSO); core Azure architectural components.

Google Cloud Platform: Compute Engine, App Engine, BigQuery, Bigtable, Pub/Sub, flow logs, CORS, CLI, pods, Firebase, Cloud Run, Cloud Firestore, Cloud CDN, Cloud Storage, Persistent Disk, Google Kubernetes Engine, Container Registry, Cloud Load Balancing, Cloud Dataflow, gsutil, Cloud SQL.


Cloud Education Certification: Eduflix App for Cloud Education and Certification (AWS, Azure, Google Cloud) [Android, iOS]

Features:
– Practice exams
– 1000+ questions and answers, updated frequently
– 3+ practice exams per certification
– Scorecard/scoreboard to track your progress
– Quizzes with score tracking, progress bar, and countdown timer
– Scoreboard shown only after completing the quiz
– FAQs for the most popular cloud services
– Cheat sheets
– Flashcards
– Works offline

Note and disclaimer: We are not affiliated with AWS, Microsoft Azure, or Google. The questions are put together based on the certification study guides and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed. We are not responsible for any exam you do not pass.

Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.


Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps

GCP (Google Cloud Platform) has been a game changer in the tech industry. It allows organizations to build and run applications on Google's infrastructure. The platform is trusted by many companies because it is reliable, secure, and scalable. To become a GCP certified professional, you must pass the GCP Professional Architect exam. The exam is not easy, but with the right practice questions and answers dumps, you can pass it with flying colors.

Google Certified Cloud Professional Architect is the top-paying certification in the world: Google Certified Professional Cloud Architect average salary – $175,761.

The Google Certified Cloud Professional Architect Exam assesses your ability to:

  • Design and plan a cloud solution architecture
  • Manage and provision the cloud solution infrastructure
  • Design for security and compliance
  • Analyze and optimize technical and business processes
  • Manage implementations of cloud architecture
  • Ensure solution and operations reliability

The Google Certified Cloud Professional Architect covers the following topics:

Designing and planning a cloud solution architecture: 36%

This domain tests your ability to design a solution infrastructure that meets business and technical requirements and considers network, storage, and compute resources. It also tests your ability to create a migration plan and to envision future solution improvements.

Managing and provisioning a solution infrastructure: 20%



This domain will test your ability to configure network topologies and individual storage systems, and to design solutions using Google Cloud networking, storage, and compute services.

Designing for security and compliance: 12%

This domain assesses your ability to design for security and compliance by considering IAM policies, separation of duties, and encryption of data, and by designing solutions that meet compliance requirements such as those for healthcare and financial information.

Managing implementation: 10%

This domain tests your ability to advise development and operations teams to ensure successful deployment of your solution. It also tests your ability to interact with Google Cloud using the GCP SDK (gcloud, gsutil, and bq).

Ensuring solution and operations reliability: 6%

This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, quality control measures and by creating release management processes.

Analyzing and optimizing technical and business processes: 16%

This domain tests how you analyze and define technical and business processes and develop procedures to ensure the resilience of your solutions in production.

Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps that will help you ace the GCP Professional Architect exam:

You will need to have the three case studies referred to in the exam open in separate tabs in order to complete the exam: Company A, Company B, and Company C.

Question 1:  Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?

 A. Have the vehicles in the field stream the data directly into BigQuery.

B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.


C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.

D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.

ANSWER1:

D

Notes/References1:

D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.

References: Streaming inserts; Apache Hadoop and Spark; 10 tips for building long-running clusters using Cloud Dataproc
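
As a rough sketch of the approach in option D (the bucket and local paths here are hypothetical), the existing Linux machines could push each FTP dump to Cloud Storage with gsutil:

    # Copy the day's raw dumps to Cloud Storage in parallel (-m)
    gsutil -m cp -r /var/ftp/vehicle-dumps/* gs://acme-raw-telemetry/ingest/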

Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?

A. Execute queries against data stored in a Cloud SQL.

B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.

C. Execute queries against data stored on daily partitioned BigQuery tables.

D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.

ANSWER2:

B

Notes/References2:

B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.

References: Bigtable schema design for time series; BigQuery
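
For illustration only (the project, instance, table, and row-key names are hypothetical), the cbt CLI can set up and query a time-series table keyed by vehicle_id#timestamp:

    # Create the table and a column family for the metrics
    cbt -project=my-project -instance=telemetry createtable vehicle_events
    cbt -project=my-project -instance=telemetry createfamily vehicle_events metrics
    # Fetch recent rows for one vehicle by row-key prefix
    cbt -project=my-project -instance=telemetry read vehicle_events prefix=vehicle42#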

Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?

A. Use multiple connectivity subsystems for redundancy. 

B. Require IPv6 for connectivity to ensure a secure address space. 

C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.

D. Use a functional programming language to isolate code execution cycles.

E. Treat every microservice call between modules on the vehicle as untrusted.

F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.

ANSWER3:

E and F

Notes/References3:

E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.

F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.

Reference 3: Trusted Platform Module

Question 4: For this question, refer to the Company A case study.

Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A. OpEx/CapEx allocation, LAN change management, capacity planning

B. Capacity planning, TCO calculations, OpEx/CapEx allocation 

C. Capacity planning, utilization measurement, data center expansion

D. Data center expansion, TCO calculations, utilization measurement

ANSWER4:

B

Notes/References4:

B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.

Reference: Cloud Economics


Question 5: For this question, refer to the Company A case study.

You analyzed Company A’s business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers’ wait time for parts. You decided to focus on reduction of the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?

A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

ANSWER5:

C

Notes/References5:

C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.

Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?

A. RPM/DEB

B. Containers 

C. Chef/Puppet

D. Virtual machines

ANSWER6:

B

Notes/References6:

B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.
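
A minimal sketch of that workflow (the image names and tags are hypothetical): each microservice pins its own library versions in its image, and developers run the exact same image locally to stay in sync with production:

    # Build and publish one image per microservice, with its own dependency versions baked in
    docker build -t gcr.io/my-project/payments-svc:1.4.2 .
    docker push gcr.io/my-project/payments-svc:1.4.2
    # Developers run the identical image locally
    docker run --rm -p 8080:8080 gcr.io/my-project/payments-svc:1.4.2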

Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?

A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device specific table. 

C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.

ANSWER7:

C

Notes/References7:

C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
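
A minimal sketch of the Pub/Sub side (the topic name, message payload, and attribute are hypothetical); each sensor publishes to the shared topic whenever it has connectivity:

    # Create the shared topic once
    gcloud pubsub topics create room-motion-status
    # A sensor publishes its latest reading, tagging the message with its device ID
    gcloud pubsub topics publish room-motion-status \
        --message='{"room":"nyc-4a","motion":false}' \
        --attribute=device_id=sensor-nyc-4a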

Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?

A. Load logs into BigQuery. 

B. Load logs into Cloud SQL.

C. Import logs into Stackdriver. 

D. Insert logs into Cloud Bigtable.

E. Upload log files into Cloud Storage.

ANSWER8:

A and E

Notes/References8:

A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.

E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.

References: BigQuery; Stackdriver; Bigtable; Cloud Storage Coldline storage class
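
As a hedged sketch of those two steps (the bucket, dataset, and table names are hypothetical, and the logs are assumed to be CSV):

    # Step E: upload the archived logs to Cloud Storage for long-term retention
    gsutil -m cp -r /archive/logs gs://acme-log-archive/
    # Step A: load them into BigQuery for analytics
    bq load --autodetect --source_format=CSV logs_ds.app_logs "gs://acme-log-archive/logs/*.csv"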

Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. 

B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.

C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

ANSWER9:

C

Notes/References9:

C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.

References: Load balancing; Load balancing health checks
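
A minimal sketch of such a rule (the network and port are assumptions; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges):

    gcloud compute firewall-rules create allow-lb-health-checks \
        --network=default --direction=INGRESS --action=ALLOW --rules=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16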

Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

A. Add each tier to a different subnetwork.

B. Set up software-based firewalls on individual VMs. 

C. Add tags to each tier and set up routes to allow the desired traffic flow.

D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

ANSWER10:

D

Notes/References10:

D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.

References: Using VPC networks; Routes; Adding and removing network tags
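
For example (the network name, tags, and ports are hypothetical), two rules allow web-to-API and API-to-database traffic, while the absence of a web-to-database rule blocks that path by default:

    gcloud compute firewall-rules create web-to-api \
        --network=prod-net --allow=tcp:8080 --source-tags=web --target-tags=api
    gcloud compute firewall-rules create api-to-db \
        --network=prod-net --allow=tcp:3306 --source-tags=api --target-tags=db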

Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?

A. Use gsutil.

B. Use gcloud.

C. Use GCS REST API. 

D. Use Storage Transfer Service.

ANSWER11:

A

Notes/References11:

A is correct because gsutil lets you write data to Cloud Storage directly from your on-premises machines, and its -m flag and parallel composite uploads help maximize transfer throughput. Storage Transfer Service, by contrast, is aimed at moving data from online sources such as Amazon S3 or other Cloud Storage buckets, not from on-premises storage.

References: gsutil; gcloud SDK; Cloud Storage JSON API; Uploading objects; Storage Transfer Service
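
A sketch of a high-throughput transfer (the paths and bucket are hypothetical): -m parallelizes across files, and the threshold option enables parallel composite uploads for large files:

    gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M \
        cp -r /data/private-dataset gs://acme-migration-bucket/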


Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

A. Encrypt the message client-side using block-based encryption with a shared key.

B. Tag messages client-side with the originating user identifier and the destination user.

C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server. 

D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.

ANSWER12:

D

Notes/References12:

D is correct because with PKI, the message is encrypted (signed) client-side with the originating user's private key; anyone holding the user's public certificate can verify the signature, proving that the message could only have been produced by that user and preventing spoofing.
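
To illustrate the underlying idea with standard OpenSSL (the file names are hypothetical), the sender signs with their private key and anyone can verify with the matching public key:

    # Sender: sign the message with the private key
    openssl dgst -sha256 -sign sender_private.pem -out message.sig message.txt
    # Recipient: verify the signature with the sender's public key
    openssl dgst -sha256 -verify sender_public.pem -signature message.sig message.txt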

Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?

A. In the source code

B. In an environment variable 

C. In a key management system

D. In a config file that has restricted access through ACLs

ANSWER13:

C

Notes/References13:

C is correct because a key management system provides centralized, access-controlled, and auditable storage for secrets, so each microservice can retrieve its database credentials at runtime without baking them into source code, environment variables, or config files.

Question 14: For this question, refer to the Company B case study.

Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL

B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery 

C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

ANSWER14:

B

Notes/References14:

B is correct because:
Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
Cloud Pub/Sub can ingest the streaming data from the mobile users.
BigQuery can query more than 10 TB of historical data.

References: GCP quotas; Apache Beam windowing; Apache Beam triggers; BigQuery external data sources; Apache Hive on Cloud Dataproc

Question 15: For this question, refer to the Company B case study.

Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load.

B. Use the existing infrastructure to test the GCP-based backend at scale.

C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.

D. Create a set of static environments in GCP to test different levels of load, for example high, medium, and low.

ANSWER15:

A

Notes/References15:

A is correct because simulating production load in GCP can scale in an economical way.

References: Load testing IoT using GCP and Locust; Distributed load testing using Kubernetes
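
One hedged way to get such an environment (the cluster name, zone, and node counts are assumptions) is an autoscaling GKE cluster that is created for a test run and deleted afterwards, so you only pay while testing:

    gcloud container clusters create load-test --zone=us-central1-a \
        --num-nodes=3 --enable-autoscaling --min-nodes=1 --max-nodes=20
    # ...run the distributed load test, then tear the cluster down
    gcloud container clusters delete load-test --zone=us-central1-a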

Question 16: For this question, refer to the Company B case study.

Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:

  • Services are deployed redundantly across multiple regions in the US and Europe.
  • Only frontend services are exposed on the public internet.
  • They can reserve a single frontend IP for their fleet of services.
  • Deployment artifacts are immutable.

Which set of products should they use?

A. Cloud Storage, Cloud Dataflow, Compute Engine

B. Cloud Storage, App Engine, Cloud Load Balancing

C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing

D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager

ANSWER16:

C

Notes/References16:

C is correct because:
Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Company B wants to expose.
Container Registry is a single place for a team to manage Docker images for the services.

References: HTTPS load balancing overview; Global forwarding rules; Reserving a static external IP address; Best practices for operating containers; Container Registry; Dataflow; Calling HTTPS
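
As a sketch of the quick update/rollback property this combination gives you (the deployment and image names are hypothetical):

    # Roll out a new immutable image for one service
    kubectl set image deployment/frontend frontend=gcr.io/my-project/frontend:v2
    kubectl rollout status deployment/frontend
    # Roll back immediately if the new version misbehaves
    kubectl rollout undo deployment/frontend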

Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

A. Org viewer, Project owner

B. Org viewer, Project viewer 

C. Org admin, Project browser

D. Project owner, Network admin

ANSWER17:

B

Notes/References17:

B is correct because:
Org viewer grants the security team permissions to view the organization’s display name.
Project viewer grants the security team permissions to see the resources within projects.

Reference: GCP Resource Manager – User Roles
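
A minimal sketch of granting those roles (the organization ID, project ID, and group address are hypothetical):

    # Org viewer at the organization level
    gcloud organizations add-iam-policy-binding 123456789012 \
        --member=group:security-team@example.com \
        --role=roles/resourcemanager.organizationViewer
    # Project viewer, repeated for each project
    gcloud projects add-iam-policy-binding my-project \
        --member=group:security-team@example.com --role=roles/viewer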

Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?

A. Use persistent disks to store the state. Start and stop the VM as needed. 

B. Use the –auto-delete flag on all persistent disks before stopping the VM. 

C. Apply VM CPU utilization label and include it in the BigQuery billing export.

D. Use BigQuery billing export and labels to relate cost to groups. 

E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.

F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.

ANSWER18:

A and D

Notes/References18:

A is correct because persistent disks will not be deleted when an instance is stopped.

D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.

References: Instance life cycle; Setting disk auto-delete; Local data persistence; Exporting billing data to BigQuery; Creating and managing labels
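
For illustration (the instance name, zone, and labels are hypothetical), a labeled dev VM can be stopped and restarted while its persistent disk keeps the state, and its labels flow into the BigQuery billing export for cost grouping:

    gcloud compute instances create dev-box --zone=us-central1-a \
        --labels=team=data-eng,env=dev
    gcloud compute instances stop dev-box --zone=us-central1-a   # state persists on the disk
    gcloud compute instances start dev-box --zone=us-central1-a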

Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?

A. Configure a new load balancer for the new version of the API.

B. Reconfigure old clients to use a new endpoint for the new API. 

C. Have the old API forward traffic to the new API based on the path.

D. Use separate backend services for each API path behind the load balancer.

ANSWER19:

D

Notes/References19:

D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.

References: HTTPS load balancing; Backend services; Global forwarding rules

Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?

A. Increase the virtual machine’s memory to 64 GB.

B. Create a new virtual machine running PostgreSQL. 

C. Dynamically resize the SSD persistent disk to 500 GB.

D. Migrate their performance metrics warehouse to BigQuery.

ANSWER20:

C

Notes/References20:

C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity increases its throughput and IOPS, which in turn improves the performance of MySQL.

References: Persistent disk specifications; Persistent disk performance
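
A rough sketch of that change (the disk name and zone are hypothetical); note that the filesystem must also be grown inside the guest:

    gcloud compute disks resize mysql-data-disk --size=500GB --zone=us-central1-a
    # Then, inside the VM, extend the filesystem, e.g. for ext4:
    sudo resize2fs /dev/sdb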

Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Google’s Cloud CDN.

B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

C. Do nothing.

D. Use global BigTable storage.

E. Use a global Cloud Spanner instance.

F. Migrate the data to a new multi-regional GCS bucket.

G. Change the storage class to multi-regional.

ANSWER21:

A

Notes/References21:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. Bigtable does not have any "global" mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket's location after it has been created, not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google's Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are accessed frequently enough, both of which are true in this scenario.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?

A. Python, Flask, App Engine Standard

B. Ruby, Nginx, GKE

C. HTML, CSS, Cloud Storage

D. Node.js, Express, Cloud Functions

E. Rust, Rocket, App Engine Flex

F. Perl, CGI, GCE

ANSWER22:

A

Notes/References22:

The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs to (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too. 
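
To give a sense of how little setup this takes, here is a minimal sketch (assuming main.py holds the Flask app and requirements.txt lists flask; the runtime version is illustrative):

# app.yaml is all App Engine Standard needs to know about the app
cat > app.yaml <<'EOF'
runtime: python39
EOF
# Deploy from the directory containing main.py, requirements.txt, and app.yaml
gcloud app deploy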

Reference: Building a Python 3.7 App on App Engine

Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?

A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.

C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER23:

C

Notes/References23:

Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP side of this in sync. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

References: Cloud Solutions Architecture Reference

Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?

A. Pause all Cloud Functions via the UI and unpause them when work starts back up.

B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.

C. Delete all Cloud Functions and recreate them when work starts back up.

D. Convert all Cloud Functions to run as App Engine Standard applications during the break.

E. None of these options is a good suggestion.

ANSWER24:

E

Notes/References24:

Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.

Question 25: You need a place to store images before updating them by file-based render farm software running on a cluster of machines. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Cloud Filestore

D. Persistent Disk

ANSWER25:

C

Notes/References25:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to visual images, thus eliminating CI/CD products like Container Registry. The term “file-based” software means that it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only supports multiple readers. Cloud Filestore, however, is made to provide shared, file-based storage for a cluster of machines as described in the question. 
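
As a sketch (the instance, share, and mount names are hypothetical), the Filestore instance is created once and then NFS-mounted on every node of the render cluster:

# Create a shared NFS file share for the render farm
gcloud filestore instances create render-share \
    --zone=us-central1-a --tier=BASIC_HDD \
    --file-share=name=images,capacity=1TB \
    --network=name=default
# On each render node (the IP shown is hypothetical; requires the NFS client package):
sudo mount 10.0.0.2:/images /mnt/images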

Reference: Cloud Filestore | Google Cloud

Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?

A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER26:

A

Notes/References26:

The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn’t also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 
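
One hedged sketch of the batch leg uses the Google-provided Dataflow template that imports Cloud Storage text files into Cloud Spanner (all resource names are hypothetical, and the template's parameter names should be double-checked against its documentation):

# Run the provided GCS-to-Spanner Dataflow template for the initial bulk load
gcloud dataflow jobs run dynamo-initial-import \
    --gcs-location=gs://dataflow-templates/latest/GCS_Text_to_Cloud_Spanner \
    --region=us-central1 \
    --parameters=instanceId=prod-spanner,databaseId=app-db,importManifest=gs://my-bucket/manifest.json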

Reference: Cloud Solutions Architecture Reference

Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?

A. Cloud Storage

B. Logstash

C. Cloud Monitoring

D. Cloud Logging

E. BigQuery

F. BigTable

ANSWER27:

D

Notes/References27:

Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on. 
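
For example, once the GKE and App Engine logs are flowing in, they can be queried from the CLI (the filter shown is just an illustration):

# Pull the ten most recent error-level entries from GKE containers
gcloud logging read 'resource.type="k8s_container" AND severity>=ERROR' --limit=10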

Reference: Cloud Logging | Google Cloud

Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?

A. Compute Engine

B. Cloud Filestore

C. Cloud Storage

D. Persistent Disk

E. Container Registry

F. Cloud Source Repositories

G. Cloud Build

H. Nearline

ANSWER28:

C

Notes/References28:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t for storage at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Nearline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served frequently by AppEngine Standard, we would prefer the Standard storage class over Nearline–so there’s our answer. 
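
A minimal sketch of the serving setup (the bucket and object names are hypothetical); making objects publicly readable is only appropriate if the images really are public:

# Create a bucket, upload an image, and allow public reads so App Engine pages can link to it
gsutil mb -l us-central1 gs://my-app-images
gsutil cp banner.png gs://my-app-images/
gsutil iam ch allUsers:objectViewer gs://my-app-images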

Reference: The App Engine Standard Environment Cloud Storage: Object Storage | Google Cloud Storage classes | Cloud Storage | Google Cloud

Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?

A. Use a global Cloud Spanner instance.

B. Change the storage class to multi-regional.

C. Use Google’s Cloud CDN.

D. Migrate the data to a new regional GCS bucket.

E. Do nothing.

F. Use global BigTable storage.

ANSWER29:

E

Notes/References29:

BigTable does not have any “global” mode, so that option is wrong. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket, and migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests; because the access per object is so low here, Cloud CDN won’t really help. This brings us back to the question: it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency–and we are already using what would be the best fit for this situation: a multi-regional GCS bucket. 

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

B. Use Google’s Cloud CDN.

C. Use global BigTable storage.

D. Do nothing.

E. Migrate the data to a new multi-regional GCS bucket.

F. Use a global Cloud Spanner instance.

ANSWER30:

E

Notes/References30:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit–and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. So even if you wanted to use Cloud CDN, you would have to migrate the data into a GCS bucket first–which is why the migration is the best option. 

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?

A. Six

B. One

C. Three

D. Four

E. Two

F. Nine

ANSWER31:

A

Notes/References31:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 
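
As a sketch of what that looks like (the VPC name, region, and ranges are hypothetical), you would create one frontend and one backend subnet per region, six in total:

# One subnet per tier, per region (repeat this pair for the other two regions)
gcloud compute networks subnets create frontend-us \
    --network=prod-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create backend-us \
    --network=prod-vpc --region=us-central1 --range=10.0.2.0/24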

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Persistent Disk

D. Nearline

E. Cloud Source Repositories

F. Cloud Build

G. Cloud Filestore

H. Compute Engine

ANSWER32:

F

Notes/References32:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images. Although they would likely be stored in Container Registry after being built, this question asks where that building might happen, and that is Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images–though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse. 
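
A minimal sketch (the project and image names are hypothetical): Cloud Build produces the image and pushes it to Container Registry, and App Engine Flex can then deploy it:

# Build the container image from the local Dockerfile and push it to Container Registry
gcloud builds submit --tag gcr.io/my-project/my-app:v1 .
# Deploy the pre-built image to App Engine Flex
gcloud app deploy --image-url=gcr.io/my-project/my-app:v1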

Reference: Google App Engine flexible environment docs | Google Cloud Container Registry | Google Cloud

Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?

A. Two

B. One

C. Three

D. Nine

E. Four

F. Six

ANSWER33:

D

Notes/References33:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?

A. Cloud Filestore

B. Coldline

C. Nearline

D. Persistent Disk

E. Cloud Source Repositories

F. Cloud Storage

G. Container Registry

ANSWER34:

B

Notes/References34:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. Even so, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than a 100% chance of being audited), the right storage class is Coldline. 
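
For illustration (the bucket name is hypothetical), the bucket can be created directly in the Coldline class, and a retention policy can even enforce the seven-year hold:

# Create a Coldline bucket and prevent deletion of its objects for seven years
gsutil mb -c coldline -l us gs://tax-evidence-images
gsutil retention set 7y gs://tax-evidence-images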

Reference: Cloud Storage: Object Storage | Google Cloud Storage classes | Cloud Storage | Google Cloud

Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Filestore

C. Cloud Source Repositories

D. Persistent Disk

E. Cloud Storage

F. Cloud Build

G. Nearline

ANSWER35:

A

Notes/References35:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would most likely be stored in the Container Registry. 

Reference: Google App Engine flexible environment docs | Google Cloud Container Registry | Google Cloud

Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?

A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.

B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.

C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.

D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.

E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.

ANSWER36:

C

Notes/References36:

You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor’s own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you’ll need to grant securityAdmin, not networkViewer. 
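
A hedged sketch of that setup (the project and account names are hypothetical):

# Create a dedicated service account for the SaaS app and grant it security-admin rights
gcloud iam service-accounts create saas-firewall-app --display-name="SaaS security app"
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:saas-firewall-app@my-project.iam.gserviceaccount.com" \
    --role="roles/compute.securityAdmin"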

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions Understanding roles | Cloud IAM Documentation | Google Cloud

Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?

A. One

B. Six

C. Four

D. Three

E. Nine

F. Two

ANSWER37:

F

Notes/References37:

A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 38: You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?

A. Binary Authorization

B. Cloud IAM

C. Security Key Enforcement

D. Cloud SCC

E. Cloud KMS

ANSWER38:

A

Notes/References38:

Cloud KMS is Google’s product for managing encryption keys. Security Key Enforcement is about making sure that people’s accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments, which is exactly what corporate standards for container image approvals call for. 
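
As a quick sketch of where you would start (the cluster name and zone are hypothetical; depending on your gcloud version these commands may live under the beta component):

# Export the current Binary Authorization policy for editing
gcloud container binauthz policy export > policy.yaml
# Create a GKE cluster that enforces the policy
gcloud container clusters create secure-cluster --zone=us-central1-a --enable-binauthz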

Reference: Cloud Key Management Service | Google Cloud Cloud IAM | Google Cloud Cloud Data Loss Prevention | Google Cloud Security Command Center | Google Cloud Binary Authorization | Google Cloud Security Key Enforcement – 2FA

Question 39: For this question, refer to the Company B‘s case study. Which of the following are most likely to impact the operations of Company B’s game backend and analytics systems?

A. PCI

B. PII

C. SOX

D. GDPR

E. HIPAA

ANSWER39:

B and D

Notes/References39:

There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally Identifiable Information (PII) about users who may reside in the European Union, and therefore the EU’s General Data Protection Regulation (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”. 

Reference: Sarbanes–Oxley Act – Wikipedia | Payment Card Industry Data Security Standard – Wikipedia | Personal data – Wikipedia

Question 40: Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?

A. Their customers located in the EU may require them to delete their user data and provide evidence of such.

B. They will also need to pass a SOX audit.

C. They handle money-linked information.

D. Their system deals with medical information.

ANSWER40:

D

Notes/References40:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 41: Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?

A. They handle money-linked information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. Their system deals with medical information.

D. They will also need to pass a SOX audit.

ANSWER41:

A

Notes/References41:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others)–so the PCI requirement tells us they handle money-linked information. ISO is the International Organization for Standardization, and since it offers so many completely different certifications, that part does not tell you much. 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 43: Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?

A. Their system deals with medical information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. They will also need to pass a SOX audit.

D. They handle money-linked information.

ANSWER43:

B

Notes/References43:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). GDPR–the EU’s General Data Protection Regulation–governs the personal data of people in the EU and includes the “Right to be Forgotten/Erased”, so customers located in the EU may require user data deletion and evidence of it. 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 44: For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?

A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud

B. Lift and shift all servers at one time

C. Lift and shift one application at a time

D. Lift and shift one server at a time

E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system

F. Enact their disaster recovery plan and fail over

ANSWER44:

F

Notes/References44:

The proposed Lift and Shift options are all talking about different situations than Company C would find themselves in at that time: they’d then have automation to build a complete prod system in the cloud, but they’d just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely wins. Setting up an additional load balancer and migrating slowly/carefully would take more time. 

Reference: Strangler pattern – Cloud Design Patterns | Microsoft Docs StranglerFigApplication Monolith to Microservices Using the Strangler Pattern – DZone Microservices Understanding Lift and Shift and If It’s Right For You

Question 45: Which of the following commands is most likely to appear in an environment setup script?

A. gsutil mb -l asia gs://${project_id}-logs

B. gcloud compute instances create --zone=${zone} --machine-type=n1-highmem-16 newvm

C. gcloud compute instances create --zone=${zone} --machine-type=f1-micro newvm

D. gcloud compute ssh ${instance_id}

E. gsutil cp -r gs://${project_id}-setup ./install

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

ANSWER45:

A

Notes/References45:

The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You’re not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances. 

Reference: mb – Make buckets | Cloud Storage | Google Cloud cp – Copy files and objects | Cloud Storage | Google Cloud gcloud compute instances | Cloud SDK Documentation | Google Cloud

Question 46: Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?

A. /dealerLocations/get

B. /dealerLocations

C. /dealerLocations/list

D. Source and destination

E. /getDealerLocations

ANSWER46:

B

Notes/References46:

It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it’s important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint. 
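
To make the convention concrete (the domain is hypothetical), the same /dealerLocations noun serves every operation, with the HTTP verb carrying the intent:

# List all dealer locations
curl -X GET https://api.example.com/dealerLocations
# Fetch a single dealer location by id
curl -X GET https://api.example.com/dealerLocations/42
# Create a new dealer location
curl -X POST https://api.example.com/dealerLocations \
    -H "Content-Type: application/json" -d '{"city": "Austin"}'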

Reference: RESTful API Design — Step By Step Guide

Question 47: Which of the following commands is most likely to appear in an instance shutdown script?

A. gsutil cp -r gs://${project_id}-setup ./install

B. gcloud compute instances create --zone=${zone} --machine-type=n1-highmem-16 newvm

C. gcloud compute ssh ${instance_id}

D. gsutil mb -l asia gs://${project_id}-logs

E. gcloud compute instances delete ${instance_id}

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

G. gcloud compute instances create --zone=${zone} --machine-type=f1-micro newvm

ANSWER47:

F

Notes/References47:

The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (sshed) to. Also, those would be a very unusual time to make a Cloud Storage bucket, since buckets are the overall and highly-scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown time may be a time when you’d want to copy some final logs from the instance into some project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously and not just at shutdown time, in case the instance shuts down unexpectedly and not in an orderly fashion that runs your shutdown script.)
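
As a sketch of wiring that up (the instance name, bucket, and log path are hypothetical), the shutdown script is attached as instance metadata:

# Attach a shutdown script that copies final logs to a project-wide bucket
gcloud compute instances create worker-1 --zone=us-central1-a \
    --metadata=shutdown-script='#! /bin/bash
gsutil cp -r /var/log/myapp/* gs://my-project-logs/worker-1/'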

Reference:  Running startup scripts | Compute Engine Documentation | Google Cloud Running shutdown scripts | Compute Engine Documentation | Google Cloud cp – Copy files and objects | Cloud Storage | Google Cloud gcloud compute instances | Cloud SDK Documentation | Google Cloud

Question 48: It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?

A. Escalate the decision to the business manager responsible for the SLA

B. Take the system offline

C. Revert the system to the state it was in on Friday morning

D. Investigate the cause of the issue

ANSWER48:

B

Notes/References48:

The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that started happening.) It might seem crazy, but you should as quickly as possible stop the system so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should then do that next, during the disaster response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn’t make any sense. And neither does it make sense to knee-jerk revert the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we’d better assume that “revert the system” refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO. 

Reference: Disaster recovery – Wikipedia

Question 49: Which of the following are not processes or practices that you would associate with DevOps?

A. Raven-test the candidate

B. Obfuscate the code

C. Only one of the other options is made up

D. Run the code in your cardinal environment

E. Do a canary deploy

ANSWER49:

A and D

Notes/References49:

Testing your understanding of development and operations in DevOps. In particular, you need to know that a canary deploy is a real thing and it can be very useful to identify problems with a new change you’re making before it is fully rolled out to and therefore impacts everyone. You should also understand that “obfuscating” code is a real part of a release process that seeks to protect an organization’s source code from theft (by making it unreadable by humans) and usually happens in combination with “minification” (which improves the speed of downloading and interpreting/running the code). On the other hand, “raven-testing” isn’t a thing, and neither is a “cardinal environment”. Those bird references are just homages to canary deployments.

Reference: Intro to deployment strategies: blue-green, canary, and more – DEV Community

Question 50: Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal?

A. Object lifecycle management

B. BigQuery Slots

C. Committed use discounts

D. Sustained use discounts

E. Managed instance group autoscaling

F. Pub/Sub topic centralization

ANSWER50:

B and C

Notes/References50:

Pub/Sub usage is based on how much data you send through it, not any sort of “topic centralization” (which isn’t really a thing). Sustained use discounts can reduce costs, but that’s not really something you structure your system around. Now, most organizations prefer to turn Capital Expenditures into Operational Expenses, but since this question is instead asking you to prioritize CapEx, we need to consider the remaining options from the perspective of “spending” (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: You may still have some trouble classifying some cloud expenses as “capital” expenditures). With that in mind, GCE’s Committed Use Discounts do fit: you “buy” (reserve/prepay) some instances ahead of time and then don’t have to pay (again) for them as you use them (or don’t use them; you’ve already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity and your queries use that instead of the on-demand capacity. That means you won’t pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about capex. 

Reference: CapEx vs OpEx: Capital Expenses and Operating Expenses Explained – BMC Blogs Sustained use discounts | Compute Engine Documentation | Google Cloud Committed use discounts | Compute Engine Documentation | Google Cloud Slots | BigQuery | Google Cloud Autoscaling groups of instances | Compute Engine Documentation Object Lifecycle Management | Cloud Storage | Google Cloud

Question 51: In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns, on Monday?

A. The scrum master is the one who decides

B. The lead architect should get the final say

C. The product owner should get the final say

D. You should put it to a vote of key stakeholders

E. You should put it to a vote of all stakeholders

ANSWER51:

C

Notes/References51:

In Scrum, it is the Product Owner’s role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven’t ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context. 

Reference: Scrum Guide | Scrum Guides

Question 52: Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?

A. Split testing

B. Red-Black

C. A/B

D. Canary

E. Rolling

F. Blue-Green

G. Flex downtime

ANSWER52:

D and E

Notes/References52:

A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application. 
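
For a managed instance group, both phases can be driven with one command family (the group and template names are hypothetical): first a small canary, then, once it looks healthy, a rolling update of the rest:

# Canary: send 10% of the group to the new template, leave the rest on the old one
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --zone=us-central1-a \
    --version=template=app-template-v1 \
    --canary-version=template=app-template-v2,target-size=10%
# Roll out: once the canary looks good, move the whole group to the new template
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --zone=us-central1-a --version=template=app-template-v2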

Reference: BlueGreenDeployment | design patterns – What’s the difference between Red/Black deployment and Blue/Green Deployment? – Stack Overflow | What is rolling deployment? – Definition from WhatIs.com | A/B testing – Wikipedia

Question 53: You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?

A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on

B. Hiring and retaining 10X developers is critical to project success

C. A key goal of a proper post-mortem is to identify what processes need to be changed

D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project

E. A key goal of a proper post-mortem is to determine who needs additional training

ANSWER53:

A and C

Notes/References53:

There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future. 

Reference: Google – Site Reliability Engineering | The Cone of Uncertainty

Question 54: Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. The next month’s SLO will be reduced.

C. Your client(s) will have to pay you extra.

D. You will have to pay your client(s).

E. There is no impact on payments.

F. There is not enough information to make a determination.

ANSWER54:

D

Notes/References54:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90. In this case, a p95 latency of 250ms means the p99 latency must be at least 250ms, which misses the 200ms p99 target, so the SLA was violated and you will have to pay your client(s). 

Reference: What’s the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube Percentile rank – Wikipedia

Question 55: Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. There is no impact on payments.

C. There is not enough information to make a determination.

D. Your client(s) will have to pay you extra.

E. The next month’s SLO will be reduced.

F. You will have to pay your client(s).

ANSWER55:

B

Notes/References55:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. In this case, the target is an SLO rather than an SLA, so no payments are involved either way. 

Reference: What’s the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube Percentile rank – Wikipedia

Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?

A. Configure the autoscaling thresholds to follow changing load

B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand

C. Run cron jobs on their application servers to scale down at night and up in the morning

D. Use Cloud Load Balancing to balance the traffic highs and lows

E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning

F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace

ANSWER56:

A

Notes/References56:

The case study notes, “Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.” Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn’t a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE–in practice, you’d just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group’s autoscaling to follow demand curves–both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over. 
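
A hedged sketch of that configuration (the group name, region, and thresholds are hypothetical):

# Let the managed instance group follow demand instead of sitting 80% idle
gcloud compute instance-groups managed set-autoscaling app-mig \
    --region=us-central1 \
    --min-num-replicas=3 --max-num-replicas=30 \
    --target-cpu-utilization=0.60 --cool-down-period=90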

Reference: Load Balancing | Google Cloud Google Cloud Platform Marketplace Solutions Cloud Functions | Google Cloud Cloud Scheduler | Google Cloud

Google Cloud Latest News, Questions and Answers online:

Cloud Run vs App Engine: In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint. All the scaling is automatically done for you by Google. Cloud Run requires your application to be stateless, because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application, this means that you should divide it up into a stateless API and a frontend app.

With Google’s App Engine, you tell Google how your app should be run. App Engine will create and run a container from these instructions. Deploying with App Engine is super easy: you simply fill out an app.yaml file and Google handles everything for you.

With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers; App Engine is made for developers. Read more here…
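
For a concrete feel of the Cloud Run workflow (the project, image, and region are hypothetical), deploying a container is a single command:

# Deploy a stateless container image to fully managed Cloud Run
gcloud run deploy my-web-api \
    --image=gcr.io/my-project/my-web-api:latest \
    --platform=managed --region=us-central1 --allow-unauthenticated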

Cloud Run VS Cloud Functions: What to consider?

The best choice depends on what you want to optimize, your use cases, and your specific needs.

If your objective is the lowest latency, choose Cloud Run.

Indeed, Cloud Run always uses 1 vCPU (at least 2.4 GHz), and you can choose the memory size from 128 MB to 2 GB.

With Cloud Functions, if you want the best processing performance (2.4 GHz of CPU), you have to pay for 2 GB of memory. If your memory footprint is low, a Cloud Function with 2 GB of memory is overkill and needlessly expensive.

Cutting cost is not always the best strategy for customer satisfaction, but business reality may require it. In any case, it depends heavily on your use case.

Both Cloud Run and Cloud Functions round billing up to the nearest 100ms. As you can see by playing with the accompanying GSheet, Cloud Functions are cheaper when the processing time of one request stays under the first 100ms. Indeed, you can choose a slower vCPU for a Cloud Function, which increases the processing duration, but if you tune it well the request still finishes under 100ms; fewer GHz-seconds are consumed, and thereby you pay less.

The cost comparison between Cloud Functions and Cloud Run goes further than simply comparing a price list. Moreover, on real projects you will often use both solutions, taking advantage of each one’s strengths and capabilities.

My first choice for development is Cloud Run. Its portability, its testability, and its openness regarding libraries, languages, and binaries give it too many advantages to ignore for, at worst, similar pricing, and often a real advantage in cost as well as in performance, in particular for concurrent requests. Even if you need the same level of isolation as Cloud Functions (one instance per request), simply set the concurrency parameter to 1 (see the command sketch after this section).

In addition, Cloud Run’s general availability applies to all containers, whatever languages and binaries they use. Read more here…
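
As noted in the comparison above, matching Cloud Functions' one-request-per-instance isolation on Cloud Run is just a flag (the service name and region are hypothetical):

# Limit each Cloud Run instance to a single concurrent request
gcloud run services update my-web-api --concurrency=1 --region=us-central1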


Google Cloud Storage: What bucket class for the best performance?: Multi-regional buckets perform significantly better for cross-the-ocean fetches; however, the details are a bit more nuanced than that. The performance is dominated by the latency of the physical distance between the client and the cloud storage bucket.

  • If caching is on, and your access volume is high enough to take advantage of caching, there’s not a huge difference between the two offerings (that I can see with the tests). This shows off the power of Google’s Awesome CDN environment.
  • If caching is off, or the access volume is low enough that you can’t take advantage of caching, then the performance overhead is dominated directly by physics. You should be trying to get the assets as close to the clients as possible, while also considering cost, and the types of redundancy and consistency you’ll need for your data needs.

Conclusion:

GCP, or the Google Cloud Platform, is a cloud-computing platform that provides users with access to a variety of GCP services. The GCP Professional Cloud Architect exam is designed to test a candidate’s ability to design, implement, and manage GCP solutions. The GCP questions cover a wide range of topics, from basic GCP concepts to advanced GCP features. To become a GCP Certified Professional, you must pass the exam. Below are some basic GCP questions to get yourself familiar with the Google Cloud Platform:

1) What is GCP?
2) What are the benefits of using GCP?
3) How can GCP help my business?
4) What are some of the features of GCP?
5) How is GCP different from other clouds?
6) Why should I use GCP?
7) What are some of GCP’s strengths?
8) How is GCP priced?
9) Is GCP easy to use?
10) Can I use GCP for my personal projects?
11) What services does GCP offer?
12) What can I do with GCP?
13) What languages does GCP support?
14) What platforms does GCP support?
15) Does GCP support hybrid deployments?
16) Does GCP support on-premises deployments?
17) Is there a free tier on GCP?
18) How do I get started with using GCP?

Top high-paying certifications:

  1. Google Certified Professional Cloud Architect – $139,529
  2. PMP® – Project Management Professional – $135,798
  3. Certified ScrumMaster® – $135,441
  4. AWS Certified Solutions Architect – Associate – $132,840
  5. AWS Certified Developer – Associate – $130,369
  6. Microsoft Certified Solutions Expert (MCSE): Server Infrastructure – $121,288
  7. ITIL® Foundation – $120,566
  8. CISM – Certified Information Security Manager – $118,412
  9. CRISC – Certified in Risk and Information Systems Control – $117,395
  10. CISSP – Certified Information Systems Security Professional – $116,900
  11. CEH – Certified Ethical Hacker – $116,306
  12. Citrix Certified Associate – Virtualization (CCA-V) – $113,442
  13. CompTIA Security+ – $110,321
  14. CompTIA Network+ – $107,143
  15. Cisco Certified Networking Professional (CCNP) Routing and Switching – $106,957

According to the 2020 Global Knowledge report, the top-paying cloud certifications for the year are (drumroll, please):

1- Google Certified Professional Cloud Architect — $175,761

2- AWS Certified Solutions Architect – Associate — $149,446

3- AWS Certified Cloud Practitioner — $131,465

4- Microsoft Certified: Azure Fundamentals — $126,653

5- Microsoft Certified: Azure Administrator Associate — $125,993

Sources:

1- Google Cloud

2- Linux Academy

3- WhizLabs

4- GCP Space on Quora

5- Udemy

6- Acloud Guru

7- Questions and Answers sent to us by good people all over the world.

First of all, I would like to start with the fact that I already have around 1 year of experience with GCP in depth, where I was working on GKE, IAM, storage and so on. I also obtained GCP Associate Cloud Engineer certification back in June as well, which helps with the preparation.

I started with Dan Sullivan’s Udemy course for the Professional Cloud Architect exam and did some refreshers on the topics I was not familiar with, such as BigTable, BigQuery, Dataflow and all that. His videos on the case studies help a lot to understand what each case study scenario requires for designing the best cost-effective architecture.

In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.

As for practice exams, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak in and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear on the exam.

I used TutorialsDojo (Jon Bonso) for preparation for Associate Cloud Engineer before and I can attest that Whizlabs is not that good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.

One thing to note is that there wasn’t even a single question that was similar to the ones from the Whizlabs practice tests. I am saying this from the perspective of the content of the questions: I got totally different scenarios for both case study and non-case-study questions. Many questions focused on App Engine, data analytics, and networking. There were some Kubernetes questions based on Anthos and cluster networking. I got a tough question regarding storage as well.

I initially thought I would fail, but I pushed on and started tackling the multiple choices based on a process of elimination using the keywords in the questions. 50 questions in 2 hours is tough, especially due to the lengthy questions and multiple choices. I do not know how this compares to the AWS Solutions Architect Professional exam in toughness, but some people do say the GCP Professional exam is tougher than AWS’s.

All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.

What are the corresponding Azure and Google Cloud services for each of the AWS services?

What are the unique distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? Below is a side-by-side comparison of AWS, Google Cloud, and Azure services.

For a better experience, use the mobile app here.


1

Category: Marketplace
Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions.
References:
[AWS]:AWS Marketplace
[Azure]:Azure Marketplace
[Google]:Google Cloud Marketplace
Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace
Differences: All three are digital catalogs with thousands of software listings from independent software vendors, making it easy to find, test, buy, and deploy software that runs on the respective cloud platform.

3

Category: AI and machine learning
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
References:
[AWS]:Alexa Skills Kit (enables a developer to build skills, also called conversational applications, on the Amazon Alexa artificial intelligence assistant.)
[Azure]:Microsoft Bot Framework (building enterprise-grade conversational AI experiences.)
[Google]:Google Assistant Actions (developer platform that lets you create software to extend the functionality of the Google Assistant, Google's virtual personal assistant)




Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant
Differences: One major advantage Google has over Alexa is that Google Assistant is available on almost all Android devices.

4

Category: AI and machine learning
Description:API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
References:
[AWS]:Amazon Lex (building conversational interfaces into any application using voice and text.)
[Azure]:Azure Speech Services(unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription)
[Google]:Google Dialogflow (formerly Api.ai; Google-owned developer of human–computer interaction technologies based on natural language conversations), AI Hub (hosted repository of plug-and-play AI components), AI building blocks (for developers to add sight, language, conversation, and structured data to their applications), AI Platform (code-based data science development environment that lets ML developers and data scientists quickly take projects from ideation to deployment), TensorFlow (open source machine learning platform)

Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow
Differences: Api.ai provides a platform that is easy to learn and comprehensive for developing conversational actions. It takes a simple approach to the complex problem of human-to-machine communication, combining natural language processing with machine learning. Api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, context still has to be handled in the session. Api.ai can also be used for both voice- and text-based conversations, and Assistant actions can be created with it easily.

5

Category: AI and machine learning
Description:Computer Vision: Extract information from images to categorize and process visual data.
References:
[AWS]:Amazon Rekognition (based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use.)
[Azure]:Cognitive Services(bring AI within reach of every developer—without requiring machine-learning expertise.)
[Google]:Google Vision (offers powerful pre-trained machine learning models through REST and RPC APIs.)
Tags: #AmazonRekognition, #GoogleVision, #CognitiveServices
Differences: For now, only Google Cloud Vision supports batch processing. Videos are not natively supported by Google Cloud Vision or Amazon Rekognition. The object-detection functionality of Google Cloud Vision and Amazon Rekognition is almost identical, both syntactically and semantically, and overall both offer a broad spectrum of solutions, some of which are comparable in terms of functional detail, quality, performance, and cost.

6

Category: Big data and analytics: Data warehouse
Description:Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
References:
[AWS]:AWS Redshift (scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake), Amazon Redshift Data Lake Export (save query results in an open format), Amazon Redshift Federated Query (run queries on live transactional data), Amazon Redshift RA3 (optimize costs with up to 3x better performance), AQUA: Advanced Query Accelerator for Amazon Redshift (power analytics with a new hardware-accelerated cache), UltraWarm for Amazon Elasticsearch Service (store logs at roughly 1/10th the cost of existing storage tiers)
[Azure]:Azure Synapse formerly SQL Data Warehouse (limitless analytics service that brings together enterprise data warehousing and Big Data analytics.)
[Google]:BigQuery (RESTful web service that enables interactive analysis of massive datasets working in conjunction with Google Storage. )
Tags:#AWSRedshift, #GoogleBigQuery, #AzureSynapseAnalytics
Differences: Loading data, managing resources (and hence pricing), and ecosystem. Ecosystem is where Redshift is clearly ahead of BigQuery; while BigQuery is an affordable, performant alternative to Redshift, its ecosystem is considered more up and coming.
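
To make the BigQuery side concrete, here is a minimal sketch using the official google-cloud-bigquery Python client; the public dataset queried is just an illustrative example, not part of the comparison above.

```python
# Minimal BigQuery query with the official Python client.
# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():  # result() blocks until the job completes
    print(row.name, row.total)
```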

7

Category: Big data and analytics: Data warehouse
Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, Analytics and visualization
References:
[AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch
[Azure]:Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning
[Google]:Cloud DataProc, Machine Learning, Cloud Datalab
Tags:#EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataProc, #MachineLearning, #CloudDatalab
Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning, and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open source big data solutions alongside newer serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services; with Dataproc and Dataflow, Google has a strong core to its proposition. TensorFlow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.

8

Category: Virtual servers
Description:Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Batch: Run large-scale parallel and high-performance computing applications efficiently in the cloud.
References:
[AWS]:Elastic Compute Cloud (EC2), Amazon Braket (explore and experiment with quantum computing), Amazon EC2 M6g Instances (achieve up to 40% better price performance), Amazon EC2 Inf1 Instances (deliver cost-effective ML inference), AWS Graviton2 Processors (optimize price performance for cloud workloads), AWS Batch, AWS AutoScaling, VMware Cloud on AWS, AWS Local Zones (run low latency applications at the edge), AWS Wavelength (deliver ultra-low latency applications for 5G devices), AWS Nitro Enclaves (further protect highly sensitive data), AWS Outposts (run AWS infrastructure and services on-premises)
[Azure]:Azure Virtual Machines, Azure Batch, Virtual Machine Scale Sets, Azure VMware by CloudSimple
[Google]:Compute Engine, Preemptible Virtual Machines, Managed instance groups (MIGs), Google Cloud VMware Solution by CloudSimple
Tags: #AWSEC2, #AWSBatch, #AWSAutoscaling, #AzureVirtualMachine, #AzureBatch, #VirtualMachineScaleSets, #AzureVMWare, #ComputeEngine, #MIGS, #VMWare
Differences: There is very little to choose between the three providers when it comes to virtual servers. Amazon has some impressive high-end kit; on the face of it, this sounds like it would make AWS a clear winner. However, if your only option is to choose the biggest box available, you will need very deep pockets, and your money may be better spent re-architecting your apps for horizontal scale. Azure remains very strong in the PaaS space and now has an IaaS offering that can genuinely compete with AWS. Google offers a simple and very capable set of services that are easy to understand; however, with availability in only 5 regions it does not have the coverage of the other players.

9

Category: Containers and container orchestrators
Description: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
References:
[AWS]:EC2 Container Service (ECS), Fargate (run containers without managing servers or clusters), EC2 Container Registry (managed AWS Docker registry service that is secure, scalable, and reliable), Elastic Container Service for Kubernetes (EKS: runs the Kubernetes management infrastructure across multiple AWS Availability Zones), App Mesh (application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure)
[Azure]:Azure Container Instances, Azure Container Registry, Azure Kubernetes Service (AKS), Service Fabric Mesh
[Google]:Google Container Engine, Container Registry, Kubernetes Engine
Tags:#ECS, #Fargate, #EKS, #AppMesh, #ContainerEngine, #ContainerRegistry, #AKS
Differences: Google Container Engine, AWS Container Services, and Azure Container Instances can be used to run docker containers. Google offers a simple and very capable set of services that are easy to understand. However, with availability in only 5 regions it does not have the coverage of the other players.



10

Category: Serverless
Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
References:
[AWS]:AWS Lambda
[Azure]:Azure Functions
[Google]:Google Cloud Functions
Tags:#AWSLAmbda, #AzureFunctions, #GoogleCloudFunctions
Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: modify some interfaces, handle any input/output transforms, and an AWS Lambda Node.js function is nearly indistinguishable from an Azure Node.js Function. AWS Lambda additionally supports Java, while Azure Functions additionally supports F# and PHP. AWS Lambda is built from an AMI and runs on Linux, while Azure Functions run in a Windows environment. Lambda's lightweight approach to containerization lets you spin up and tear down individual pieces of functionality in your application at will.
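
To illustrate the portability point, here is a minimal AWS Lambda handler in Python; the event field and greeting are hypothetical. Moving the same business logic to an Azure Function or Google Cloud Function mostly means changing the entry-point signature.

```python
# A minimal, self-contained AWS Lambda handler (Python runtime).
import json

def handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```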

11

Category: Relational databases
Description: Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
References:
[AWS]:AWS RDS (managed relational database service supporting MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server engines), Aurora (MySQL and PostgreSQL-compatible relational database built for the cloud)
[Azure]:SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
[Google]:Cloud SQL
Tags: #AWSRDS, #AWSAurora, #AzureSQLDatabase, #AzureDatabaseforMySQL, #GoogleCloudSQL
Differences: All three providers boast impressive relational database offerings. RDS supports an impressive range of managed relational stores, while Azure SQL Database is probably the most advanced managed relational database available today. Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.

12

Category: NoSQL, Document Databases
Description:A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
References:
[AWS]:DynamoDB (key-value and document database that delivers single-digit millisecond performance at any scale.), SimpleDB ( a simple web services interface to create and store multiple data sets, query your data easily, and return the results.), Managed Cassandra Services(MCS)
[Azure]:Table Storage, DocumentDB, Azure Cosmos DB
[Google]:Cloud Datastore (handles sharding and replication in order to provide you with a highly available and consistent database. )
Tags:#AWSDynamoDB, #SimpleDB, #TableStorage, #DocumentDB, #AzureCosmosDB, #GoogleCloudDataStore
Differences: DynamoDB and Cloud Datastore are based on the document store database model and are therefore similar in nature to the open-source solutions MongoDB and CouchDB; in other words, each database is fundamentally a key-value store. With more workloads moving to the cloud, the need for NoSQL databases will become ever more important, and again all providers have a good range of options to satisfy most performance/cost requirements. Of all the NoSQL products on offer it's hard not to be impressed by DocumentDB; Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
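
As a quick illustration of the key-value model described above, here is a minimal DynamoDB sketch with boto3; the table name and attributes are hypothetical.

```python
# Put and get a single item in DynamoDB (assumes a table named "Users"
# with partition key "user_id" already exists).
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Users")

table.put_item(Item={"user_id": "42", "name": "Ada"})
item = table.get_item(Key={"user_id": "42"})["Item"]
print(item)  # {'user_id': '42', 'name': 'Ada'}
```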

13

Category:Caching
Description:An in-memory, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database.
References:
[AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.)
[Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.)
[Google]:Memcache (In-memory key-value store, originally intended for caching)
Tags:#Redis, #Memcached
Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by letting you retrieve information from fast in-memory caches instead of relying on slower disk-based databases. ElastiCache supports both Memcached and Redis, and Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcached does not. This can be very helpful if you cache lots of data, since you avoid the slowness of a fully cold cache. Redis also offers several extra data structures that Memcached doesn't (lists, sets, sorted sets, etc.), whereas Memcached only has key/value pairs. Memcached is multi-threaded; Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded, so at high scale you can squeeze more connections and transactions out of Memcached. Memcached also tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
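
The classic pattern behind all of these services is cache-aside. Here is a minimal sketch with the redis-py client; the hostname and the database-loading helper are hypothetical, and the same pattern applies whether the endpoint is ElastiCache, Azure Cache for Redis, or Memorystore.

```python
# Cache-aside: check the in-memory cache first, fall back to the database.
import redis

r = redis.Redis(host="my-cache.example.com", port=6379)

def load_user_from_db(user_id: int) -> bytes:
    # Stand-in for a slow, disk-based database query.
    return f"user-{user_id}".encode()

def get_user(user_id: int) -> bytes:
    key = f"user:{user_id}"
    cached = r.get(key)                 # sub-millisecond in-memory lookup
    if cached is not None:
        return cached
    value = load_user_from_db(user_id)  # slow path on a cache miss
    r.setex(key, 300, value)            # cache the result for 5 minutes
    return value
```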

14

Category: Security, identity, and access
Description:Authentication and authorization: Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
References:
[AWS]:Identity and Access Management (IAM), AWS Organizations, Multi-Factor Authentication, AWS Directory Service, Cognito(provides solutions to control access to backend resources from your app), Amazon Detective (Investigate potential security issues), AWS IAM Access Analyzer(Easily analyze resource accessibility)
[Azure]:Azure Active Directory, Azure Subscription Management + Azure RBAC, Multi-Factor Authentication, Azure Active Directory Domain Services, Azure Active Directory B2C, Azure Policy, Management Groups
[Google]:Cloud Identity, Identity Platform, Cloud IAM, Policy Intelligence, Cloud Resource Manager, Cloud Identity-Aware Proxy, Context-aware access, Managed Service for Microsoft Active Directory, Security key enforcement, Titan Security Key
Tags: #IAM, #AWSIAM, #AzureIAM, #GoogleIAM, #Multi-factorAuthentication
Differences: One unique thing about AWS IAM is that accounts created in the organization (not through federation) can only be used within that organization. This contrasts with Google and Microsoft. On the good side, every organization is self-contained. On the bad side, users can end up with multiple sets of credentials they need to manage to access different organizations. The second unique element is that every user can have a non-interactive account by creating and using access keys, an interactive account by enabling console access, or both. (Side note: To use the CLI, you need to have access keys generated.)
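
For example, here is a minimal sketch of using an access-key pair programmatically with boto3 (the key values are placeholders); the same pair is what the aws configure command stores for CLI use.

```python
# Authenticate with IAM access keys and list the account's IAM users.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLE",          # placeholder access key ID
    aws_secret_access_key="wJalrEXAMPLEKEY",  # placeholder secret key
    region_name="us-east-1",
)
iam = session.client("iam")
for user in iam.list_users()["Users"]:
    print(user["UserName"])
```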

15

Category: Object Storage and Content delivery
Description:Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export(used to move large amounts of data into and out of the Amazon Web Services public cloud using portable storage devices for transport.), Snowball( petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud), CloudFront( content delivery network (CDN) is massively scaled and globally distributed), Elastic Block Store (EBS: high performance block storage service), Elastic File System(shared, elastic file storage system that grows and shrinks as you add and remove files.), S3 Infrequent Access (IA: is for data that is accessed less frequently, but requires rapid access when needed. ), S3 Glacier( long-term storage of data that is infrequently accessed and for which retrieval latency times of 3 to 5 hours are acceptable.), AWS Backup( makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.), Storage Gateway(hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage), AWS Import/Export Disk(accelerates moving large amounts of data into and out of AWS using portable storage devices for transport)
[Azure]:
Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, StorSimple, Import/Export
[Google]:
Cloud Storage, GlusterFS, CloudCDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, and so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception perhaps is hybrid scenarios; in this case Azure and AWS clearly win. AWS's and Google's support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft's fully managed Data Lake Store offers an additional option that will appeal to organisations looking to run large-scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of the stuff, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as etags and the if-match family of headers, be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.
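
As an illustration of the etag/precondition point, here is a minimal sketch of an atomic, conditional overwrite in Google Cloud Storage using a generation-match precondition; the bucket and object names are hypothetical.

```python
# Overwrite an object only if nobody else has changed it in the meantime.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")
blob = bucket.get_blob("config.json")  # assumes the object already exists

blob.upload_from_string(
    '{"feature_flag": true}',
    if_generation_match=blob.generation,  # fails with HTTP 412 if it changed
)
```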

16

Category:Internet of things (IoT)
Description:A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]:AWS IoT (Internet of Things), AWS Greengrass, Kinesis Firehose, Kinesis Streams, AWS IoT Things Graph
[Azure]:Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]:Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags:#IoT, #InternetOfThings, #Firebase
Differences:AWS and Azure have a more coherent message with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.

17

Category:Web Applications
Description:Managed hosting platform providing easy-to-use services for deploying and scaling web applications and services. API Gateway is a turnkey solution for publishing APIs to external and internal consumers. CloudFront is a global content delivery network that delivers audio, video, applications, images, and other files.
References:
[AWS]:Elastic Beanstalk (for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS), AWS Wavelength (for delivering ultra-low latency applications for 5G), API Gateway (makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale), CloudFront (web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users; CloudFront delivers your content through a worldwide network of data centers called edge locations), Global Accelerator (improves the availability and performance of your applications with local or global users; it provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances), AWS AppSync (simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources: a GraphQL service with real-time data synchronization and offline programming features)
[Azure]:App Service, API Management, Azure Content Delivery Network
[Google]:App Engine, Cloud API, Cloud Endpoints, Apigee
Tags: #AWSElasticBeanstalk, #AzureAppService, #GoogleAppEngine, #CloudEndpoints, #CloudFront, #Apigee
Differences: With AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application; if they decide to manage some (or all) of the elements of their infrastructure, they can do so seamlessly using Elastic Beanstalk's management capabilities. AWS Elastic Beanstalk integrates with more third-party tools than Google App Engine (Datadog, Jenkins, Docker, Slack, GitHub, Eclipse, etc.), while Google App Engine has more built-in features than AWS Elastic Beanstalk (App Identity, Java runtime, Datastore, Blobstore, Images, Go runtime, etc.). Developers describe Amazon API Gateway as “Create, publish, maintain, monitor, and secure APIs at any scale”; it handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Google Cloud Endpoints, on the other hand, is described as “Develop, deploy and manage APIs on any Google Cloud backend”; an NGINX-based proxy and distributed architecture give strong performance and scalability, and using an OpenAPI Specification or one of Google's API frameworks, Cloud Endpoints provides the tools you need for every phase of API development, with insight from Google Cloud Monitoring, Cloud Trace, and Google Cloud Logging.
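
For a sense of what these platforms host, here is a minimal Flask application of the kind both Elastic Beanstalk and App Engine can run; the module and route names are just common conventions, not requirements from the comparison above.

```python
# A tiny web app deployable to Elastic Beanstalk or App Engine.
from flask import Flask, jsonify

application = Flask(__name__)  # Elastic Beanstalk looks for "application" by default
app = application              # App Engine samples conventionally expose "app"

@application.route("/health")
def health():
    return jsonify(status="ok")

if __name__ == "__main__":
    application.run(host="0.0.0.0", port=8080)
```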

18

Category:Encryption
Description:Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
References:
[AWS]:Key Management Service AWS KMS, CloudHSM
[Azure]:Key Vault
[Google]:Encryption By Default at Rest, Cloud KMS
Tags:#AWSKMS, #Encryption, #CloudHSM, #EncryptionAtRest, #CloudKMS
Differences: AWS KMS is an ideal solution for organizations that want to manage encryption keys in conjunction with other AWS services. In contrast to AWS CloudHSM, AWS KMS provides a complete set of tools to manage encryption keys, develop applications, and integrate with other AWS services. Google and Azure offer 4096-bit RSA; AWS and Google offer 256-bit AES. With AWS, you can also bring your own key.
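
As a small illustration, here is a sketch of encrypting and decrypting a value with AWS KMS via boto3; the key alias is hypothetical.

```python
# Encrypt and decrypt a short secret with a KMS customer master key.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",        # hypothetical key alias
    Plaintext=b"super secret value",
)["CiphertextBlob"]

# For symmetric keys, decrypt() infers the key from the ciphertext itself.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"super secret value"
```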

20

Category:Object Storage and Content delivery
Description: Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export Snowball, CloudFront, Elastic Block Store (EBS), Elastic File System, S3 Infrequent Access (IA), S3 Glacier, AWS Backup, Storage Gateway, AWS Import/Export Disk, Amazon S3 Access Points(Easily manage access for shared data)
[Azure]:Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, StorSimple, Import/Export
[Google]:Cloud Storage, GlusterFS, CloudCDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: Identical considerations to the Object Storage and Content Delivery comparison in category 15 above.

21

Category: Backend process logic
Description: Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
References:
[AWS]:AWS Step Functions ( lets you build visual workflows that enable fast translation of business requirements into technical requirements. You can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code.)
[Azure]:Logic Apps (cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.)
[Google]:Dataflow ( fully managed service for executing Apache Beam pipelines within the Google Cloud Platform ecosystem.)
Tags:#AWSStepFunctions, #LogicApps, #Dataflow
Differences: AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. AWS Step Functions belongs to the “Cloud Task Management” category of the tech stack, while Google Cloud Dataflow is primarily classified under “Real-time Data Processing”. According to the StackShare community, Google Cloud Dataflow has broader approval, being mentioned in 32 company stacks and 8 developer stacks, compared to AWS Step Functions, which is listed in 19 company stacks and 7 developer stacks.
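
To show what a Dataflow workload looks like, here is a minimal Apache Beam pipeline in Python; it runs locally with the DirectRunner, and the same code targets Dataflow by passing --runner=DataflowRunner plus project options.

```python
# A three-step Beam pipeline: create elements, transform them, print them.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha", "beta", "gamma"])
        | "Upper" >> beam.Map(str.upper)
        | "Print" >> beam.Map(print)
    )
```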

22

Category: Enterprise application services
Description:Fully integrated Cloud service providing communications, email, document management in the cloud and available on a wide variety of devices.
References:
[AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index)
[Azure]:Office 365
[Google]:G Suite
Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite
Differences: G Suite document processing applications like Google Docs are far behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, while Office 365 can feel clunky. Get 20% off G-Suite Business Plan with Promo Code: PCQ49CJYK7EATNC

23

Category: Networking
Description: Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
References:
[AWS]:Virtual Private Cloud (VPC), Cloud virtual networking, Subnets, Elastic Network Interface (ENI), Route Tables, Network ACL, Security Groups, Internet Gateway, NAT Gateway, AWS VPN Gateway, AWS Route 53, AWS Direct Connect, AWS Network Load Balancer, VPN CloudHub, AWS Local Zones, AWS Transit Gateway network manager (centrally manage global networks)
[Azure]:Virtual Network (provides services for building networks within Azure), Subnets (network resources can be grouped by subnet for organisation and security), Network Interface (each virtual machine can be assigned one or more network interfaces (NICs)), Network Security Groups (NSG: contains a set of prioritised ACL rules that explicitly grant or deny access), Azure VPN Gateway (allows connectivity to on-premises networks), Azure DNS, Traffic Manager (DNS-based traffic routing solution), ExpressRoute (provides connections up to 10 Gbps to Azure services over a dedicated fibre connection), Azure Load Balancer, Network Peering, Azure Stack (allows organisations to use Azure services running in private data centers), Azure Log Analytics
[Google]:Cloud Virtual Network, Subnets, Network Interface, Protocol forwarding, Cloud VPN, Cloud DNS, Cloud Interconnect, CDN Interconnect, Stackdriver, Google Cloud Load Balancing
Tags:#VPC, #Subnets, #ACL, #VPNGateway, #CloudVPN, #NetworkInterface, #ENI, #RouteTables, #NSG, #NetworkACL, #InternetGateway, #NatGateway, #ExpressRoute, #CloudInterConnect, #StackDriver
Differences: Subnets group related resources; however, unlike AWS and Azure, Google does not constrain the private IP address ranges of subnets to the address space of the parent network. Like Azure, Google has a built-in internet gateway that can be specified from routing rules.
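
For example, here is a sketch of carving a VPC and a subnet with boto3; the CIDR ranges are illustrative, and note that on AWS the subnet range must fall inside the parent VPC range (the constraint discussed above).

```python
# Create a VPC and a subnet inside it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"],
    CidrBlock="10.0.1.0/24",  # must be a sub-range of the VPC's 10.0.0.0/16
)["Subnet"]
print(vpc["VpcId"], subnet["SubnetId"])
```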

24


Category: Management
Description: A unified management console that simplifies building, deploying, and operating your cloud resources.
References:
[AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (Identify optimal AWS Compute resources)
[Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health
[Google]:Google Cloud Console, Cost Management, Security Command Center, Stackdriver
Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter
Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the Hybrid Cloud space allowing companies to integrate onsite servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high compute offerings like Big Data, analytics and machine learning. It also offers considerable scale and load balancing – Google knows data centers and fast response time.

25

Category: DevOps and application monitoring
Description: Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments; Cloud services for collaborating on code development; Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services; Fully managed build service that supports continuous integration and deployment.
References:
[AWS]:AWS CodePipeline (orchestrates workflow for continuous integration, continuous delivery, and continuous deployment), AWS CloudWatch (monitor your AWS resources and the applications you run on AWS in real time), AWS X-Ray (application performance management service that enables a developer to analyze and debug applications in AWS), AWS CodeDeploy (automates code deployments to Elastic Compute Cloud (EC2) and on-premises servers), AWS CodeCommit (source code storage and version-control service), AWS Developer Tools, AWS CodeBuild (continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy), AWS Command Line Interface (unified tool to manage your AWS services), AWS OpsWorks (Chef-based), AWS CloudFormation (provides a common language for you to describe and provision all the infrastructure resources in your cloud environment), Amazon CodeGuru (for automated code reviews and application performance recommendations)
[Azure]:Azure Monitor, Azure DevOps, Azure Developer Tools, Azure CLI, Azure PowerShell, Azure Automation, Azure Resource Manager, VM extensions
[Google]:DevOps Solutions (infrastructure as code, configuration management, secrets management, serverless computing, continuous delivery, continuous integration), Stackdriver (combines metrics, logs, and metadata from all of your cloud accounts and projects into a single comprehensive view of your environment)
Tags: #CloudWatch, #StackDriver, #AzureMonitor, #AWSXray, #AWSCodeDeploy, #AzureDevOps, #GoogleDevopsSolutions
Differences: CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. Azure DevOps provides unlimited private Git hosting, cloud build for continuous integration, agile planning, and release management for continuous delivery to the cloud and on-premises. Includes broad IDE support.

SageMaker | Azure Machine Learning Studio

A collaborative, drag-and-drop tool to build, test, and deploy predictive analytics solutions on your data.

Alexa Skills Kit | Microsoft Bot Framework

Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.

Amazon Lex | Speech Services

API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.

Amazon Lex | Language Understanding (LUIS)

Allows your applications to understand user commands contextually.


Amazon Polly, Amazon Transcribe | Azure Speech Services

Enables both Speech to Text, and Text into Speech capabilities.
The Speech Services are the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It’s easy to speech enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs.
Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
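
As a quick illustration, here is a sketch of calling Polly from Python with boto3; the voice and output file are arbitrary choices.

```python
# Synthesize a short phrase to an MP3 file with Amazon Polly.
import boto3

polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text="Hello from Polly",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```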

Amazon Rekognition | Cognitive Services

Computer Vision: Extract information from images to categorize and process visual data.
Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.

Face: Detect, identify, and analyze faces in photos.

Emotions: Recognize emotions in images.
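
Here is a minimal sketch of the S3-based analysis described above, using boto3; the bucket and object names are hypothetical.

```python
# Label an image stored in S3 with Amazon Rekognition.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
    MaxLabels=5,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```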

Alexa Skills Kit | Azure Virtual Assistant

The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.

Big data and analytics

Data warehouse

AWS Redshift | SQL Data Warehouse

Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.

Big data processing

EMR | Azure Databricks

Apache Spark-based analytics platform.

EMR | HDInsight

Managed Hadoop service. Deploy and manage Hadoop clusters in Azure.

Data orchestration / ETL

AWS Data Pipeline, AWS Glue | Data Factory

Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.

AWS Glue | Data Catalog

A fully managed service that serves as a system of registration and system of discovery for enterprise data sources.

Analytics and visualization

AWS Kinesis Analytics | Stream Analytics

Data Lake Analytics | Data Lake Store

Storage and analysis platforms that create insights from large quantities of data, or data that originates from many sources.

QuickSight | Power BI

Business intelligence tools that build visualizations, perform ad hoc analysis, and develop business insights from data.

CloudSearch | Azure Search

Delivers full-text search and related search analytics and capabilities.

Amazon Athena | Azure Data Lake Analytics

Provides a serverless interactive query service that uses standard SQL for analyzing databases.

Compute

Virtual servers

Elastic Compute Cloud (EC2) | Azure Virtual Machines

Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.

AWS Batch | Azure Batch

Run large-scale parallel and high-performance computing applications efficiently in the cloud.

AWS Auto Scaling | Virtual Machine Scale Sets

Allows you to automatically change the number of VM instances. You set defined metrics and thresholds that determine if the platform adds or removes instances.

VMware Cloud on AWS | Azure VMware by CloudSimple

Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.

Containers and container orchestrators

EC2 Container Service (ECS), Fargate | Azure Container Instances

Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.

EC2 Container Registry | Azure Container Registry

Allows customers to store Docker formatted images. Used to create all types of container deployments on Azure.

Elastic Container Service for Kubernetes (EKS) | Azure Kubernetes Service (AKS)

Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.

App Mesh | Service Fabric Mesh

Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking.
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.

Serverless

AWS Lambda | Azure Functions

Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code

Database

Relational database

AWS RDS | SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL

Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.

NoSQL / Document

DynamoDB and SimpleDB | Azure Cosmos DB

A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.

Caching

AWS ElastiCache | Azure Cache for Redis

An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non transactional work from a database.
Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.

Database migration

AWS Database Migration Service | Azure Database Migration Service

Migration of database schema and data from one database format to a specific database technology in the cloud.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

DevOps and application monitoring

AWS CloudWatch, AWS X-Ray | Azure Monitor

Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
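
As a small example of the telemetry side, here is a sketch of publishing a custom CloudWatch metric with boto3; the namespace and metric name are hypothetical.

```python
# Publish one data point of a custom application metric.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "QueueDepth",
        "Value": 42,
        "Unit": "Count",
    }],
)
```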

AWS CodeDeploy, AWS CodeCommit, AWS CodePipeline | Azure DevOps

A cloud service for collaborating on code development.
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.

AWS Developer Tools | Azure Developer Tools

Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services.
The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.

AWS CodeBuild | Azure DevOps

Fully managed build service that supports continuous integration and deployment.

AWS Command Line Interface | Azure CLI, Azure PowerShell

Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

AWS OpsWorks (Chef-based) | Azure Automation

Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources.
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.

AWS CloudFormation | Azure Resource Manager, VM extensions, Azure Automation

Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks.
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
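
The "simple text file" workflow can be driven programmatically too; here is a sketch of launching a stack from a local template with boto3 (stack and file names are hypothetical).

```python
# Create a CloudFormation stack from a template on disk.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
with open("template.yaml") as f:
    cfn.create_stack(
        StackName="my-stack",
        TemplateBody=f.read(),
    )
```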

Networking

Area

Cloud virtual networking, Virtual Private Cloud (VPC) | Virtual Network

Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.

Cross-premises connectivity

AWS VPN Gateway | Azure VPN Gateway

Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).

DNS management

AWS Route 53 | Azure DNS

Manage your DNS records using the same credentials, billing, and support contract as your other Azure services.

Route 53 | Traffic Manager

A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.

Dedicated network

AWS Direct Connect | ExpressRoute

Establishes a dedicated, private network connection from a location to the cloud provider (not over the Internet).

Load balancing

AWS Network Load Balancer | Azure Load Balancer

Azure Load Balancer load-balances traffic at layer 4 (TCP or UDP).

Application Load Balancer | Application Gateway

Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.

Internet of things (IoT)

AWS IoT | Azure IoT Hub

A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale.

AWS Greengrass | Azure IoT Edge

Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.

Kinesis Firehose, Kinesis Streams | Event Hubs

Services that allow the mass ingestion of small data inputs, typically from devices and sensors, to process and route the data.

AWS IoT Things Graph | Azure Digital Twins

Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.

Management

Trusted Advisor | Azure Advisor

Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.

AWS Usage and Billing Report | Azure Billing API

Services to help generate, monitor, forecast, and share billing data for resource usage by time, organization, or product resources.

AWS Management Console | Azure portal

A unified management console that simplifies building, deploying, and operating your cloud resources.

AWS Application Discovery Service | Azure Migrate

Assesses on-premises workloads for migration to Azure, performs performance-based sizing, and provides cost estimations.

Amazon EC2 Systems Manager | Azure Monitor

Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.

AWS Personal Health Dashboard | Azure Resource Health

Provides detailed information about the health of resources as well as recommended actions for maintaining resource health.

Security, identity, and access

Authentication and authorization

Identity and Access Management (IAM) | Azure Active Directory

Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.

Identity and Access Management (IAM) | Azure Role Based Access Control

Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

AWS Organizations | Azure Subscription Management + Azure RBAC

Security policy and role management for working with multiple accounts.

Multi-Factor Authentication | Multi-Factor Authentication

Safeguard access to data and applications while meeting user demand for a simple sign-in process.

AWS Directory Service | Azure Active Directory Domain Services

Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.

Cognito | Azure Active Directory B2C

A highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities.

AWS Organizations | Azure Policy

Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.

AWS Organizations | Management Groups

Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.

Encryption

Server-side encryption with Amazon S3 Key Management Service | Azure Storage Service Encryption

Helps you protect and safeguard your data and meet your organizational security and compliance commitments.

Key Management Service AWS KMS, CloudHSM | Key Vault

Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).

Firewall

Web Application Firewall | Application Gateway – Web Application Firewall

A firewall that protects web applications from common web exploits.

Web Application Firewall | Azure Firewall

Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.

Security

Inspector | Security Center

An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.

Certificate Manager | App Service Certificates available on the Portal

Service that allows customers to create, manage, and consume certificates seamlessly in the cloud.

GuardDuty | Azure Advanced Threat Protection

Detect and investigate advanced attacks on-premises and in the cloud.

AWS Artifact | Service Trust Portal

Provides access to audit reports, compliance guides, and trust documents from across cloud services.

AWS Shield | Azure DDoS Protection Service

Provides cloud services with protection from distributed denial of services (DDoS) attacks.

Storage

Object storage

Simple Storage Services (S3) | Azure Blob storage

Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.

Virtual server disks

Elastic Block Store (EBS) | Azure managed disks

SSD storage optimized for I/O intensive read/write operations. For use as high-performance Azure virtual machine storage.

Shared files

Elastic File System | Azure Files

Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
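
One practical difference: EFS is consumed by mounting it over NFS, while Azure Files can also be reached through a REST/SDK data plane in addition to SMB mounts. A sketch of the SDK path, with a placeholder connection string, share, and file path:

```python
from azure.storage.fileshare import ShareFileClient

# Azure Files exposes a REST/SDK path in addition to SMB mounts (connection
# string, share, and path are placeholders). EFS, by contrast, is consumed by
# mounting it over NFS rather than through a data-plane SDK.
file_client = ShareFileClient.from_connection_string(
    "<storage-connection-string>", share_name="team-share", file_path="docs/readme.txt"
)
file_client.upload_file(b"shared across clients that mount the same share")
```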

Archiving and backup

S3 Infrequent Access (IA) | Azure Storage cool tier

Cool storage is a lower-cost tier for storing data that is infrequently accessed and long-lived.

S3 Glacier | Azure Storage archive access tier

Archive storage has the lowest storage cost and higher data retrieval costs compared to hot and cool storage.
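
In code, the tiering choice shows up either at write time (S3 storage classes) or as a per-blob property you can change later (Azure access tiers). A hedged sketch with placeholder names:

```python
import boto3
from azure.storage.blob import BlobClient

# AWS: choose a cheaper storage class at write time (names are placeholders).
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="logs/2023-01.tar.gz",
              Body=b"...", StorageClass="STANDARD_IA")
s3.put_object(Bucket="my-bucket", Key="logs/2019-01.tar.gz",
              Body=b"...", StorageClass="GLACIER")

# Azure: move an existing blob between access tiers.
blob = BlobClient.from_connection_string(
    "<storage-connection-string>", "my-container", "logs/2019-01.tar.gz"
)
blob.set_standard_blob_tier("Cool")     # infrequently accessed, long-lived data
blob.set_standard_blob_tier("Archive")  # lowest storage cost, higher retrieval cost
```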

AWS Backup | Azure Backup

Back up and recover files and folders from the cloud, and provide offsite protection against data loss.

Hybrid storage

Storage Gateway | StorSimple

Integrates on-premises IT environments with cloud storage. Automates data management and storage, plus supports disaster recovery.

Bulk data transfer

AWS Import/Export Disk | Import/Export

A data transport solution that uses secure disks and appliances to transfer large amounts of data. Also offers data protection during transit.

AWS Import/Export Snowball, Snowball Edge, Snowmobile | Azure Data Box

Petabyte- to exabyte-scale data transport solution that uses secure data storage devices to transfer large amounts of data to and from Azure.

Web applications

Elastic Beanstalk | App Service

Managed hosting platform providing easy-to-use services for deploying and scaling web applications and services.

API Gateway | API Management

A turnkey solution for publishing APIs to external and internal consumers.

CloudFront | Azure Content Delivery Network

A global content delivery network that delivers audio, video, applications, images, and other files.

Global Accelerator | Azure Front Door

Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions, scale out with API-driven global actions, and add independent fault tolerance for your back-end microservices in Azure or anywhere else.

Miscellaneous

Backend process logic

AWS Step Functions | Logic Apps

Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
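
For a sense of what these definitions look like, here is a tiny Step Functions state machine in Amazon States Language, written as a Python dict; the Lambda ARNs are placeholders. Logic Apps expresses the equivalent flow in its own JSON workflow definition of triggers and actions.

```python
import json

# A tiny Step Functions state machine in Amazon States Language: two
# sequential Task steps. Both services describe the flow declaratively.
state_machine = {
    "Comment": "Validate an order, then record it.",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",  # placeholder
            "Next": "RecordOrder",
        },
        "RecordOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:record",  # placeholder
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```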

Enterprise application services

Amazon WorkMail, Amazon WorkDocs | Office 365

Fully integrated cloud service providing communications, email, and document management, available on a wide variety of devices.

Gaming

GameLift, GameSparks | PlayFab

Managed services for hosting dedicated game servers.

Media transcoding

Elastic Transcoder | Media Services

Services that offer broadcast-quality video streaming services, including various transcoding technologies.

Workflow

Simple Workflow Service (SWF) | Logic Apps

Serverless technology for connecting apps, data, and devices anywhere, whether on-premises or in the cloud, with a large ecosystem of SaaS and cloud-based connectors.

Hybrid

Outposts | Azure Stack

Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.

How does a business decide between Microsoft Azure or AWS?

Ultimately, it comes down to your organization’s needs and whether a particular area is especially important to your business (e.g., serverless computing or tight integration with Microsoft applications).

The main factors are compute options, pricing, and purchasing options.

Compute option features vary across the cloud providers, as do on-demand prices for comparable instances (for example, similarly sized Linux VMs), so it is worth comparing both for the configurations you actually plan to run.

Each provider offers a variety of options to lower costs below on-demand prices. These fall under reservations, spot and preemptible instances, and enterprise contracts.

Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
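
The arithmetic of a reservation is straightforward; the sketch below uses purely hypothetical prices to show how a commitment discount changes the effective annual cost.

```python
# Illustrative-only arithmetic: how a reserved-capacity discount changes the
# effective rate. The prices below are hypothetical, not published rates.
on_demand_hourly = 0.10        # hypothetical on-demand price ($/hour)
reserved_discount = 0.40       # hypothetical 40% discount for a 1-year commitment
hours_per_year = 24 * 365

on_demand_annual = on_demand_hourly * hours_per_year
reserved_annual = on_demand_annual * (1 - reserved_discount)

print(f"On-demand: ${on_demand_annual:,.2f}/year")
print(f"Reserved:  ${reserved_annual:,.2f}/year "
      f"(saves ${on_demand_annual - reserved_annual:,.2f})")
```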

Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
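
On AWS, spot capacity can be requested directly at launch time. A sketch with boto3, using a placeholder AMI ID and price cap; on the Azure side, low-priority/spot VMs are requested through the Azure compute APIs or the portal.

```python
import boto3

# Request spot capacity at launch time (AMI ID and max price are placeholders).
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"MaxPrice": "0.03"},  # cap the hourly price you will pay
    },
)
```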

Enterprise contracts are available from both AWS and Azure. These encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount; examples include AWS Enterprise Discount Programs (EDPs) and Azure Enterprise Agreements.

You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post.

Source: AWS to Azure services comparison – Azure Architecture