Google Workspace – Docs – Drive – Sheets – Slides – How To

Top 10 Google Workspace Tips and Tricks
  1. Use keyboard shortcuts: Google Workspace has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo the last action in Google Docs, or “Ctrl + Shift + V” to paste text without formatting.
  2. Collaborate in real-time: With Google Workspace, you can work on documents and spreadsheets with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
  3. Create and edit documents offline: With the Google Docs offline extension, you can create and edit documents even when you don’t have an internet connection. Once you’re back online, your changes will be automatically saved.
  4. Use Google Keep for notes and to-do lists: Google Keep is a simple note-taking app that integrates seamlessly with Google Workspace. You can use it to take notes, create to-do lists, and set reminders.
  5. Use the Explore feature in Google Docs: The Explore feature in Google Docs can help you research and write documents more quickly by suggesting relevant information, images, and citations.
  6. Automate tasks with Google Scripts: Google Scripts is a powerful scripting tool that you can use to automate tasks in Google Workspace. For example, you can use a script to automatically send an email when a new form is submitted or to create a calendar event from a Google Sheets spreadsheet.
  7. Use Google Forms for surveys and quizzes: Google Forms is a great tool for creating surveys, quizzes, and other forms. You can use it to collect information from people and analyze the results in Google Sheets.
  8. Take advantage of the Google Workspace Marketplace: The Google Workspace Marketplace is a collection of apps and add-ons that can help you customize and enhance your Google Workspace experience. You can find apps for a wide range of tasks, such as creating diagrams, signing documents electronically, and more.
  9. Use Google Slides for presentations: Google Slides is an online presentation tool that can be used to create professional-looking slideshows. You can collaborate with others in real-time, add animations and transitions, and even insert videos.
  10. Use Google Drive for file storage and sharing: Google Drive is the main storage service for all your files, including documents, images, videos, and more. You can share files and folders with others, collaborate in real-time, and access your files from anywhere.
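Tip 6 mentions automating tasks with Google Apps Script. As a concrete illustration, here is a minimal sketch of the form-submission example: a trigger that emails a confirmation when a form tied to a spreadsheet receives a response. The field names (“Email”, “Name”) and the message text are assumptions for illustration, not a fixed API.

```javascript
// Hypothetical Apps Script sketch: email a confirmation on form submit.
// Attach onFormSubmit through the script editor's Triggers panel
// ("On form submit"). "Email" and "Name" are placeholder question titles.
function onFormSubmit(e) {
  const answers = e.namedValues;            // { question: [answer, ...] }
  const email = answers['Email'][0];
  const name = answers['Name'][0];
  GmailApp.sendEmail(email, 'Submission received', buildBody(name));
}

// Pure helper, kept separate so the message text is easy to test.
function buildBody(name) {
  return 'Hi ' + name + ', thanks! We received your form submission.';
}
```

The same Triggers panel can also run a function on a time schedule, which is one way to approach the calendar-event example from tip 6.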

These are some of the most useful tips and tricks for getting the most out of Google Workspace. The apps are constantly updated and many new features are added regularly.


Top 10 Google Drive Tips and Tricks

  1. Use the “Quick Access” feature: Google Drive’s Quick Access feature uses machine learning to predict which files you might need next, and it surfaces them at the top of your Google Drive for easy access.
  2. Take advantage of the offline feature: With the Google Drive Offline extension, you can access and edit your files even when you don’t have an internet connection.
  3. Create shortcuts to frequently used files: You can create a shortcut to a file or folder by right-clicking on it and selecting “Add to My Drive.” This way, you can quickly access it from your Google Drive home screen.
  4. Use the “Take a Snapshot” feature: Google Drive has a built-in “Take a Snapshot” feature that allows you to take a screenshot of any webpage and save it directly to your Google Drive.
  5. Use the “Suggested Sharing” feature: Google Drive’s “Suggested Sharing” feature uses machine learning to predict which people you might want to share a file with, and it automatically suggests their email addresses to you.
  6. Search for files using specific keywords: You can use advanced search operators to search for files that contain specific keywords or were created by a certain person.
  7. Use the “File Stream” feature: With Google Drive File Stream (now called Google Drive for desktop), you can access all of your Google Drive files directly from your computer’s file explorer, without having to download them first.
  8. Use the “Add-ons” feature: You can use Google Drive’s Add-ons feature to add extra functionality to your Google Drive, such as the ability to sign PDFs, send emails directly from Google Drive, and more.
  9. Use the “Activity” feature: Google Drive’s “Activity” feature allows you to see who has accessed a file, when they accessed it, and what changes they made.
  10. Use the “Google Backup and Sync” app: Google Backup and Sync (since merged into Google Drive for desktop) lets you automatically back up specific folders from your computer to your Google Drive. This way, you can be sure that your important files are always safe and accessible.
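Tip 6 above refers to advanced search operators. As an illustration, here is a small helper that assembles a Drive search string from parts; the helper itself is purely hypothetical (the operators `type:`, `owner:`, and `before:` in Drive’s search box do the real work), and the example values are placeholders.

```javascript
// Assemble a Google Drive search-box query from optional parts.
// Example operators: type:pdf, owner:me, before:2023-01-01.
function driveQuery(opts) {
  const parts = [];
  if (opts.phrase) parts.push('"' + opts.phrase + '"'); // exact phrase
  if (opts.type)   parts.push('type:' + opts.type);     // file type
  if (opts.owner)  parts.push('owner:' + opts.owner);   // file owner
  if (opts.before) parts.push('before:' + opts.before); // modified before
  return parts.join(' ');
}

// driveQuery({phrase: 'quarterly report', type: 'pdf', owner: 'me'})
// -> '"quarterly report" type:pdf owner:me'
```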

These are some of the most useful tips and tricks for getting the most out of Google Drive. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your files.

How do you insert an image into a slide on Google Drive?

To insert an image into a slide on Google Drive, you can use the Google Slides app. Here are the steps:

  1. Open Google Slides in your browser and navigate to the presentation you want to add the image to.
  2. Select the slide you want to add the image to.
  3. Click on the “Insert” menu at the top of the screen.
  4. Select “Image” from the drop-down menu.
  5. Choose the option to “Upload” an image, then select the image you want to insert from your computer.
  6. You can also select “By URL” if you have a link to the image, and paste the link there.
  7. Drag the image around the slide to reposition it or use the handles to resize it.
  8. Once you have the image positioned and sized the way you want, you can add text or other elements to the slide as needed.

Alternatively, you can drag and drop an image directly from your computer onto the slide.

Note that the presentation must be open in edit mode; otherwise you will not be able to insert an image.
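For bulk work, the same insertion can be scripted with the Apps Script SlidesApp service. This is a sketch under the assumption that you know the presentation’s ID; the function name, the ID, and the position offsets are placeholders.

```javascript
// Sketch: insert an image by URL into the first slide of a presentation.
// presentationId and imageUrl are placeholders you would supply.
function insertImageIntoFirstSlide(presentationId, imageUrl) {
  const slide = SlidesApp.openById(presentationId).getSlides()[0];
  const image = slide.insertImage(imageUrl); // inserted at a default size
  image.setLeft(50);                         // reposition, like dragging
  image.setTop(50);
}
```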

How can you rotate an image in Google Drive without having to download it first?

You can rotate an image in Google Drive without downloading it by using the “Preview” feature. To do this, follow these steps:

  1. Open Google Drive and navigate to the folder containing the image you want to rotate.
  2. Click on the image to open it in the “Preview” mode.
  3. Click on the “Tools” button in the top-right corner of the screen.
  4. Click on “Rotate” from the menu that appears.
  5. Select the desired rotation angle.
  6. Click on the “Save” button to save the changes to the image.

Alternatively, to rotate multiple images at once, select the files you want to rotate, then right-click the selection and choose the rotate option.



You can also use many other editing tools within the preview itself to edit images.

Can you create documents directly from Google Drive?

Yes, it is possible to create documents directly from Google Drive. Google Drive is a cloud-based storage service provided by Google that allows users to store, share, and access files from any device. It also includes a suite of productivity tools, including Google Docs, Google Sheets, and Google Slides, that allow users to create, edit, and collaborate on documents, spreadsheets, and presentations, respectively.

To create a new document in Google Drive, you can follow these steps:


  1. Open Google Drive by going to drive.google.com or by opening the Google Drive app on your device.
  2. Click on the “+ New” button on the top left corner of the screen.
  3. Select “Google Docs”, “Google Sheets” or “Google Slides” from the drop-down menu.
  4. A new document will be created and will open in a new tab.

You can also create a new document by right-clicking on the Google Drive window and selecting “New” from the context menu. The new document will be saved to your Google Drive and can be accessed, edited, and shared with others. You can also upload existing documents to Google Drive and convert them to Google Docs, Sheets or Slides format to edit them collaboratively.

Do you need a Google Drive account to view files that are shared with you?

No, you do not need a Google Drive account to view files that are shared with you. If someone shares a file with you on Google Drive, they can give you access to it by sending you a link to the file, or by adding you as a collaborator. When you click on the link, you can view the file in your browser without having to sign in to a Google account.

However, if the file is shared with you as “view-only” and the owner has restricted editing, you will only be able to view the file; you cannot download, print, or copy it. If you want full access to the file, or want to collaborate on it, you will need to sign in to a Google account or create one.

It is important to note that the shared link may be password protected, or the link may expire after a certain period of time. Additionally, if the person sharing the file has enabled access restrictions, such as only allowing certain people or certain domains to access the file, you may not be able to view the file if you do not meet those requirements.

How can you edit documents with sensitive information on Google Drive?

There are several ways to edit documents with sensitive information on Google Drive:

  1. Use Google’s built-in security features: Google Drive offers several security features that can help protect sensitive information, such as two-factor authentication, password-protected sharing, and remote wipe. These features can help keep your documents secure while you’re editing them.
  2. Use a password-protected file format: Many file formats, such as Microsoft Office or PDF, allow you to set a password to protect the document from unauthorized access. This means that even if someone gains access to your Google Drive account, they won’t be able to view or edit the document without the password.
  3. Use a third-party encryption tool: You can use third-party encryption tools to encrypt documents before uploading them to Google Drive. This ensures that the documents remain secure even if someone gains access to your Google Drive account.
  4. Limit access to specific people: You can use Google Drive’s sharing feature to limit access to specific people, such as people within your organization or specific email addresses. This ensures that only authorized individuals can view or edit the document.
  5. Use Google Workspace’s additional security features: Google Workspace (previously G Suite) offers additional security features such as data loss prevention, advanced threat protection, and compliance with industry standards like HIPAA and SOC2.

It’s important to note that while these methods help protect sensitive information, they are not foolproof. Review your security and privacy settings regularly, and use strong, unique passwords along with two-factor authentication to help protect your data.

Top 10 Google Docs Tips and Tricks

  1. Use keyboard shortcuts: Google Docs has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo the last action, or “Ctrl + Shift + V” to paste text without formatting.
  2. Collaborate in real-time: With Google Docs, you can work on documents with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
  3. Use the “Explore” feature: The Explore feature in Google Docs can help you research and write documents more quickly by suggesting relevant information, images, and citations.
  4. Use the “Research” feature: The “Research” feature in Google Docs lets you find and insert quotes or information from external sources directly into your document (in current versions of Docs, it has been folded into the Explore feature).
  5. Use templates: Google Docs has a wide variety of templates available for different types of documents, such as resumes, letters, and more. These templates can help you get started quickly and ensure a professional look for your document.
  6. Use the “Voice Typing” feature: Google Docs has a built-in “Voice Typing” feature that allows you to dictate text into your document using your voice. This can be a great way to write more quickly, or to transcribe an audio recording.
  7. Use the “Add-ons” feature: You can use Google Docs’ Add-ons feature to add extra functionality to your documents, such as the ability to sign PDFs, create diagrams, and more.
  8. Use the “Commenting” feature: The commenting feature in Google Docs allows you to leave feedback or suggestions directly on a document, making it easy for others to see and respond to your comments.
  9. Use “Suggesting” mode: Suggesting mode (Google Docs’ equivalent of “Track Changes”) allows multiple people to collaborate on a document and see each other’s suggested edits, while keeping the original document intact.
  10. Use the “Headings” feature: Using headings in Google Docs can help structure and organize your documents, making them more readable and easier to navigate. You can format text as headings, then use the “Table of Contents” feature to create a table of contents for the document based on the headings.
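The heading structure in tip 10 can also be created programmatically. This Apps Script sketch adds a heading and a paragraph to a new Google Doc; the document name and text are placeholders.

```javascript
// Sketch: create a Doc with a styled heading using the DocumentApp service.
// "Project notes" and the paragraph text are illustrative placeholders.
function createStructuredDoc() {
  const body = DocumentApp.create('Project notes').getBody();
  body.appendParagraph('Overview')
      .setHeading(DocumentApp.ParagraphHeading.HEADING1); // shows up in the TOC
  body.appendParagraph('Plain paragraph text under the heading.');
}
```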

These are some of the most useful tips and tricks for getting the most out of Google Docs. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your writing and editing process.

How do we upload a large file on Google Docs? I am trying to upload a 316 page file but only 63 pages are uploading.

There are a few things you can try to upload a large file on Google Docs:

  • Use the Google Drive app: The Google Drive app allows you to upload files up to 5 TB in size. You can download it from Google Play or the App Store and then use it to upload your large file.
  • Zip the file: Compress your file into a .zip or .rar file and then upload it to Google Drive. Once the file is uploaded, you can unzip it and open it in Google Docs.
  • Convert the file: If the file is in a format that is not compatible with Google Docs, convert it to a compatible format (such as .docx or .pdf) and then upload it.
  • Split the file: If you are unable to upload the file in one go, you can split it into smaller parts and upload them separately. Once all the parts are uploaded, you can merge them in Google Docs.
  • Check your internet connection: A weak internet connection can cause issues with uploading large files. Ensure that you are connected to a stable and fast internet connection.
  • Try using Google Chrome browser: Some users have reported that using Chrome browser instead of other browsers such as Firefox or Safari can help with uploading large files.

It’s worth noting that Google Drive accepts files up to 5 TB, but documents converted to Google Docs format have much smaller limits (including a cap on the number of characters per document), which is the most likely reason a long file gets truncated on upload. Also make sure you have enough storage available in your Google Drive account.

Top 10 Google Slides Tips and Tricks

  1. Use keyboard shortcuts: Google Slides has a variety of keyboard shortcuts that can help you work faster and more efficiently. For example, you can use “Ctrl + Z” to undo the last action, or “Ctrl + Shift + V” to paste text without formatting.
  2. Collaborate in real-time: With Google Slides, you can work on presentations with other people at the same time, and see each other’s changes as they happen. This can be a great way to collaborate on projects with team members or classmates.
  3. Use the “Explore” feature: The Explore feature in Google Slides can help you research and write your presentation more quickly by suggesting relevant information, images, and citations.
  4. Use templates: Google Slides has a wide variety of templates available for different types of presentations, such as business, education, and more. These templates can help you get started quickly and ensure a professional look for your presentation.
  5. Use the “Add-ons” feature: You can use Google Slides’ Add-ons feature to add extra functionality to your presentations, such as the ability to create charts, diagrams, and more.
  6. Use the “Master” feature: The Master feature in Google Slides allows you to create a template slide, with a specific layout and design, that can be reused across multiple slides in the same presentation, making it easy to maintain consistency.
  7. Use the “Speaker Notes” feature: The “Speaker Notes” feature in Google Slides allows you to write notes for yourself about what you want to say for each slide, which can be helpful when giving a presentation.
  8. Use the “Animations” feature: Google Slides allows you to add animations to elements on your slide, to make your presentation more dynamic and engaging.
  9. Use the “Transitions” feature: The Transitions feature in Google Slides allows you to add effects between slides, such as fade, dissolve, and more, giving your presentation a polished look.
  10. Use the “Presenter View” feature: The “Presenter View” feature in Google Slides allows you to see the current slide, the next slide, your speaker notes, and a timer while presenting, so you can stay on track and keep your audience engaged.

These are some of the most useful tips and tricks for getting the most out of Google Slides. It’s a powerful tool that offers a lot of features, and learning how to use them can help you be more productive and organized with your presentation-making process.

Top 10 Google Forms Tips and Tricks

Here are ten tips and tricks for using Google Forms:


  1. Use “Go to section based on answer” to create a branching form, where the questions a respondent sees are based on their previous answers.
  2. Use the “Required” option to ensure that respondents complete certain questions before submitting the form.
  3. Use the “Data validation” option to ensure that respondents enter certain types of information, such as a valid email address or a number within a certain range.
  4. Use the “Randomize order of questions” option to randomize the order of questions for each respondent, which can help prevent bias in your data.
  5. Use the “Limit to one response” option to ensure that each respondent can only submit the form once.
  6. Use the “Add collaboration” option to share the form with others and collaborate on it in real time.
  7. Use the “Schedule Form” option to automatically close your form on a specific date and time, or after a certain number of responses have been received.
  8. Use the “Autocomplete” option to make it easier for respondents to enter frequently used or personal information.
  9. Use the “File upload” option to collect files and documents from respondents, such as images or PDFs.
  10. Use the “Create a quiz” option to create a multiple-choice or checkbox quiz, and then use the “Grade” option to automatically grade the quiz and provide feedback to respondents.
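Several of the options above (required questions, data validation, one response per user) can also be set with the Apps Script FormApp service. This is a hedged sketch; the form title and question are illustrative.

```javascript
// Sketch: build a form with a required, validated email question and
// limit it to one response per user. Titles are placeholders.
function createFeedbackForm() {
  const form = FormApp.create('Feedback survey');
  const emailItem = form.addTextItem()
      .setTitle('Your email')
      .setRequired(true);                        // tip 2: required question
  emailItem.setValidation(                       // tip 3: data validation
      FormApp.createTextValidation().requireTextIsEmail().build());
  form.setLimitOneResponsePerUser(true);         // tip 5: one response each
}
```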

Is there a way to find out the number of respondents in Google Forms without opening each respondent’s response?

Yes, there is a way to find out the number of respondents in Google Forms without opening each response. The summary in the “Responses” tab of the form shows the number of responses received, along with an option to view the responses in spreadsheet format. You can also filter the responses by various criteria and download them to your computer. Additionally, Google Forms add-ons such as “Form Publisher” or “formMule” can send the responses to Google Sheets or Excel, where you can use spreadsheet functions to analyze the data.
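The count is also available programmatically via Apps Script, assuming you have the form’s ID (a sketch, not the only way):

```javascript
// Sketch: return how many responses a form has received.
// formId is a placeholder for the ID in the form's URL.
function responseCount(formId) {
  return FormApp.openById(formId).getResponses().length;
}
```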

Top 10 Google Sheets Tips and Tricks

Here are ten tips and tricks for using Google Sheets:

  1. Use keyboard shortcuts to quickly navigate and perform common actions, such as ctrl+c to copy, ctrl+v to paste, and ctrl+z to undo.
  2. Use the “=QUERY” function to quickly filter and sort large data sets, similar to a SQL query.
  3. Use the “=IMPORTXML” function to import structured data from websites, such as stock prices or weather data.
  4. Use the “=IMPORTRANGE” function to import data from other sheets, such as data from a master sheet that is shared with multiple team members.
  5. Use the “=IF” function to perform basic calculations, such as calculating sales tax or commission.
  6. Use the “=SUMIF” and “=COUNTIF” functions to perform mathematical operations based on a certain condition, such as summing all numbers in a range that are greater than a certain value.
  7. Use the “=VLOOKUP” function to look up and retrieve data from other sheets or even other documents.
  8. Use the “=HLOOKUP” function to do a horizontal lookup.
  9. Use the “Data validation” option to ensure that data entered in a certain range of cells meets certain conditions, such as being a whole number or a date within a certain range.
  10. Use the “Conditional formatting” option to format cells based on their contents, such as making all negative numbers red, or highlighting cells that contain a certain keyword.
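Putting a few of the functions above together, here are illustrative example formulas; the ranges, sheet name, and spreadsheet URL are placeholders you would replace with your own:

```
=QUERY(A1:C100, "select A, sum(C) where C > 0 group by A", 1)
=IMPORTRANGE("SPREADSHEET_URL", "Sheet1!A1:C100")
=SUMIF(B2:B100, ">500")
=COUNTIF(B2:B100, ">500")
=VLOOKUP("Widget", A2:C100, 3, FALSE)
=IF(B2>1000, B2*0.1, 0)
```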

How can I import LinkedIn searches into Google Sheets?

There are a few different ways to import LinkedIn searches into Google Sheets:

  1. Use a LinkedIn scraper tool: There are a number of LinkedIn scraper tools available online that can be used to scrape data from LinkedIn searches and export it to Google Sheets. Some popular options include Hunter.io, Skrapp.io, and LeadLeaper.
  2. Use a LinkedIn API: LinkedIn offers an API that allows developers to access data from LinkedIn searches. You can use this API to extract data from LinkedIn searches and import it into Google Sheets using a script or a tool like Import.io
  3. Use Google Sheets Add-ons: There are several add-ons available for Google Sheets that allow you to import data from LinkedIn searches. Some popular options include Hunter, LinkedIn Sales Navigator, and LinkedIn Lead Gen Forms.
  4. Use a manual copy-paste method: You can also use a manual copy-paste method to import LinkedIn searches into Google Sheets. You can perform a search on LinkedIn, go through the results, and copy-paste the data you want into a Google Sheet.

Please note that some of these methods may require a LinkedIn premium account or may have limitations on the amount of data that can be scraped. Also, some scraping methods may violate LinkedIn terms of service.

Top 10 Google Search Tips and Tricks

What are the top 10 Google Search tips and tricks that very few people know about?

  1. Use quotation marks to search for an exact phrase: If you want to search for a specific phrase, enclose it in quotation marks. Example: “Google Search Tips and Tricks”
  2. Use the minus sign to exclude specific words: If you want to exclude specific words from your search, put the minus sign (-) directly before the word you want to exclude. Example: Google Search Tips and Tricks -few
  3. Use the site: operator to search within a specific website: If you want to search for something within a specific website, use the site: operator followed by the website’s URL. Example: Google Search Tips site:www.example.com
  4. Use the filetype: operator to find specific file types: If you want to find a specific file type, use the filetype: operator followed by the file extension. Example: Google Search Tips filetype:pdf
  5. Use the related: operator to find related websites: If you want to find websites related to a specific website, use the related: operator followed by the website’s URL. Example: related:www.example.com
  6. Use the define: operator to find definitions: If you want to find the definition of a word, use the define: operator followed by the word. Example: define:Google
  7. Use the link: operator to find websites that link to a specific website: If you want to find websites that link to a specific website, use the link: operator followed by the website’s URL. Example: link:www.example.com
  8. Use the cache: operator to view a website’s cached version: If you want to view a website’s cached version, use the cache: operator followed by the website’s URL. Example: cache:www.example.com
  9. Use the intext: operator to search for specific words within a webpage: If you want to search for specific words within a webpage, use the intext: operator followed by the word. Example: intext:Google Search Tips
  10. Use the inurl: operator to search for specific words within a URL: If you want to search for specific words within a URL, use the inurl: operator followed by the word. Example: inurl:Google Search Tips
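Several of these operators combine in a single query. As an illustration, here is a small, purely hypothetical helper that assembles such a query string (the operators themselves do the real work):

```javascript
// Combine common Google Search operators into one query string.
function searchQuery(opts) {
  const parts = [];
  if (opts.phrase)   parts.push('"' + opts.phrase + '"');      // exact phrase
  if (opts.site)     parts.push('site:' + opts.site);          // one website
  if (opts.filetype) parts.push('filetype:' + opts.filetype);  // file type
  if (opts.exclude)  parts.push('-' + opts.exclude);           // exclusion
  return parts.join(' ');
}

// searchQuery({phrase: 'search tips', site: 'example.com', filetype: 'pdf'})
// -> '"search tips" site:example.com filetype:pdf'
```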

These are just a few of the many advanced search techniques that can be used on Google, and can help you find more specific and relevant results. Keep in mind that Google’s search algorithm is constantly evolving so some of the tips may not work as expected, but they’re still worth trying.

What challenges remain in advancing the safety and privacy features of Google Images?

There are several challenges that remain in advancing the safety and privacy features of Google Images:

  1. Identifying and removing inappropriate content: Identifying and removing inappropriate content, such as child sexual abuse material, remains a major challenge for Google Images. Despite the use of machine learning algorithms and human moderators, it can be difficult to accurately identify and remove all inappropriate content.
  2. Protecting personal privacy: Protecting the privacy of individuals whose images appear on Google Images is also a challenge. Google has implemented features such as “SafeSearch” to help users filter out explicit content, but there remains a risk that sensitive personal information could be exposed through reverse image searches.
  3. Dealing with misinformation: Google Images is also facing challenges in dealing with misinformation, as false or misleading information can be spread through images.
  4. Balancing user’s rights with copyright infringement: Balancing the rights of users to access and share information with the rights of copyright holders to protect their work is a challenging issue for Google Images. Google has implemented a copyright removal process, but it can be difficult to effectively enforce copyright infringement on a large scale.
  5. Addressing the issue of deepfakes: With the advent of deepfakes, images can be manipulated in ways that are difficult to detect; this is a new challenge for Google Images to address.
  6. Addressing the needs of the visually impaired: Making sure that the images in Google Images are accessible to visually impaired users is another important challenge for Google.

Google continues to invest in technology and policies to address these challenges and to ensure the safety and privacy of users on its platform. However, given the scale and complexity of these issues, new challenges will likely continue to arise.

AWS Azure Google Cloud Certifications Testimonials and Dumps

Do you want to become a professional DevOps engineer, a cloud solutions architect, a cloud engineer, a modern developer or IT professional, a versatile product manager, or a hip project manager? Cloud skills and certifications can be just the thing you need to make the move into cloud, or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

In this blog, we share AWS, Azure, and GCP cloud certification testimonials along with frequently asked questions and answers.



PASSED AWS CCP (2022)


Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.

Was going to take it last Saturday, but I bought TutorialDojo’s exams on Udemy. Did one Friday night, got a 50% and rescheduled it a week later to today Sunday.



Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they were covering way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well-written explanations, and their links to the white papers let me know what and where to read.

Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.

Just Passed My CCP exam today (within 2 weeks)

I would like to thank this community for recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD practice exams scenario-based questions-a lot less wordy on real exam). I felt so unready before the exam that I rescheduled the exam twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate fully.


Resources:

-Stephane’s course on Udemy (I have seen people saying to skip hands-on videos but I found them extremely helpful to understand most of the concepts-so try to not skip those hands-on)

-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)

Previous Aws knowledge:

-Very little to no experience (deployed my group’s app to cloud via Elastic beanstalk in college-had 0 clue at the time about what I was doing-had clear guidelines)

Preparation duration: -2 weeks (honestly watched videos for 12 days and then went over summary and practice tests on the last two days)

Links to resources:

https://www.udemy.com/course/aws-certified-cloud-practitioner-new/

https://tutorialsdojo.com/courses/aws-certified-cloud-practitioner-practice-exams/

I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3-4 per day) till I was consistently getting over 80%; passed my exam with an 882.

Passed – CCP CLF-C01

 

What an adventure. I’d never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀


Passed with two weeks of prep (after work and weekends)

Resources Used:

  • https://www.exampro.co/

    • This was just a nice structured presentation that also gives you the PowerPoint slides plus cheat sheets and a nice overview of what is said in each video lecture.

  • Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo

    • These are some good prep exams; they ask the questions in a way that actually makes you think about the related AWS service, with only a few “Bullshit! That was asked in a confusing way” questions that popped up.

Passed AWS CCP. The score was beyond expectations

I took the CCP 2 days ago and got the pass notification right after submitting the answers. Within about the next 3 hours I got an email from Credly for the badge. This morning I got an official email from AWS congratulating me on passing; the score was much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams, I failed one and passed the rest with about 700-800. But in the real exam, I got 860. The questions in the real exam are kind of less verbose IMO, but I don’t truly agree with some people I see on this sub saying that they are easier.
Just a little bit of sharing, now I’ll find something to continue ^^

Good luck with your own exams.

Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.

Background

– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).

– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.

Study stats

– Spent three weeks studying for the exam.


– Studied an hour to two every day.

– Solved 800-1000 practice questions.

– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly flip through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or believed I’d easily forget.

– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.

– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.

– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.

– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.

Resources used

– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal

– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis

– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek*

– Certified Cloud Practitioner Course by Exam Pro (Paid Version)**

– One or two free practice exams found by a quick Google search

**Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full-length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.

*Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.

Notes

– My study regimen (i.e., an hour to two every day for three weeks) was overkill.

– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.

– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.

What would I do differently?

– Focus on practice tests only. No video lectures.

– Focus on the technologies domain. You can intuit your way through questions in the other domains.

– Chill

AWS SAA-C02 SAA-C03 Exam Prep

Just passed SAA-C03, thoughts on it

 
  • Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.

  • The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.

  • It is by far harder than the Developer Associate exam, despite having a broader scope. The DVA-C02 exam was like doing a speedrun, but this felt like finishing off Sigrun in GoW. Ya gotta take your time.

I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.

Passed SAA-C03 – Feedback

Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.

I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar given my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved on to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4/6 of the exams with grades in the 70s). I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).

Overall the practice exams ended up being a lot harder than the real exam which had mostly the regular/base topics: a LOT of S3 stuff and storage in general, a decent amount of migration questions, only a couple questions on VPCs and no ML/AI stuff.

My Study Guide for passing the SAA-C03 exam

Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought I’d share a real exam experience taking this challenging test.

First off, my background: I have 8 years of development experience and have been using AWS for several projects, both personally and at work. Studied for a total of 2 months. Focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.

SAA-C03 Exam Prep

For my exam prep, I bought the Adrian Cantrill video course and the Tutorials Dojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but like others have said, the content is long and covers more than just the exam. Did all of the hands-on labs too and played around with some machine learning services in my AWS account.

The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.

For the TD practice exams, I took the tests in chronological order and didn’t jump back and forth until I had completed all of them. I first tried all 7 timed-mode tests, reviewing every wrong answer on every attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I would retake any test that I failed.

The Actual SAA-C03 Exam

The actual AWS exam is almost the same as the TD tests, where:

  • All of the questions are scenario-based

  • There are two (or more) valid solutions in the question, e.g.:

    • Need SSL: options are ACM and a self-signed certificate

    • Need to store DB credentials: options are SSM Parameter Store and Secrets Manager

  • The scenarios are long-winded and ask for:

    • MOST Operationally efficient solution

    • MOST cost-effective

    • LEAST amount of overhead

Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps anyone who is struggling.

Another Passed SAA-C03?

Just another thread about passing the exam. I passed SAA-C03 yesterday and would like to share my experience and how I earned the certification.

Background:

– graduate with a networking background

– working experience in on-premise infrastructure automation, mainly using Ansible, Python, Zabbix, etc.

– cloud experience: a short period, like 3-6 months of practice

– provisioned cloud applications using Terraform in Azure and AWS

Course that I used fully:

– AWS Certified Solutions Architect – Associate (SAA-C03) | learn.cantrill.io

– AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path (tutorialsdojo.com)

Course that I used partially or little:

– Ultimate AWS Certified Solutions Architect Associate (SAA) | Udemy

– Practice Exams | AWS Certified Solutions Architect Associate | Udemy

Lab that I used:

– Free tier account with cantrill instruction

– Acloudguru lab and sandbox

– Percepio lab

Comment on course:

The Cantrill course is in-depth and full of practical knowledge, like email aliases, etc. Check it out to learn more.

The Tutorials Dojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guideline/review in the practice exams provides pretty much all the detail. I did all the other modes before the timed-based ones, after which I averaged 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real examination is harder than the practice exams in my opinion.

The Udemy course and practice exams: I went through some of them, but I think the practice exams are quite hard compared to Tutorials Dojo’s.

Labs – just get your hands dirty and they will make the knowledge stick in your brain. My advice is to not just do copy-and-paste labs, but to really read the description of each parameter in the AWS portal.

Advice:

you need to know some general exam topics, like:

– S3 private access

– EC2 availability

– Kinesis products, including Firehose, Data Streams, etc.

– IAM
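For the “S3 private access” topic above, a classic exam scenario is keeping a bucket reachable only through a specific VPC endpoint. As a rough sketch (the bucket name and VPC endpoint ID below are made-up placeholders, not from any exam), such a bucket policy looks like this:

```python
import json

VPCE_ID = "vpce-0123456789abcdef0"  # placeholder VPC endpoint ID
BUCKET = "example-private-bucket"   # placeholder bucket name

# Deny every S3 action on the bucket unless the request
# arrives through the designated VPC endpoint.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # StringNotEquals: the deny applies whenever the request
            # did NOT come through the expected endpoint.
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

The pattern worth remembering is the explicit Deny combined with a StringNotEquals condition on aws:SourceVpce, which blocks every request that does not arrive through the endpoint.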

My next targets will be AWS SAP and CKA. I’m still searching for suitable material for AWS SAP, but I plan mainly to use the A Cloud Guru sandbox and a home lab to learn the subject, practicing with Cantrill’s labs on GitHub.

Good luck anyone!

Passed SAA

I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and the Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult. In my opinion, they are much more difficult than the actual exam, and they really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and came down with a cold. I was really on the struggle bus halfway through the test.

I only had a couple of questions on ML/AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these in and out.

My company is offering stipends for each certification, so I’m going straight to Developer next.

Recently passed SAA-C03

Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.

I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing: they got me used to the type of questions in the actual exam and helped me review missing knowledge. Would not have passed otherwise.

Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:

* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.

* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.

* Pinpoint journey: question about customer replying to SMS sent-out and then storing their feedback.

Not sure if they were graded or if Amazon was testing out new questions.

Cheers.

Another SAP-C01-Pass

Received my notification this morning that I passed with an 811.

Prep Time: 10 weeks, 2 hrs a day

Materials: Neal Davis videos/practice exams, Jon Bonso practice exams, white papers, misc YouTube videos, some hands-on

Prof Experience: 4 years of AWS, using the main services as an architect

AWS Certs: CCP-SSA-DVA-SAP(now)

Thoughts: The exam was way more familiar to me than the Developer exam. I use very few AWS developer tools but mainly use core AWS services. Neal’s videos were very straightforward, easy to digest, and on point. I was able to watch most of the videos on a plane flight to Vegas.

After the video series I started to hit his section-based exams, main exam, and notes, and followed up with some hands-on. I was getting destroyed on some of the exams early on and had to rewatch and research the topics, writing notes. There is a lot of nuance and fine detail in the topics; you’ll see this when you take the practice exams. These little details matter.

Bonso’s exams were nothing less than awesome, as per usual. Same difficulty and quality as Neal Davis. Followed the same routine: section-based exams followed by the final exams. I believe Neal said to aim for 80s on his final exams before sitting the real one. I’d agree, because that’s where I was hitting a week before the exam (mid 80s). Both Neal’s and Jon’s exams were on par with the exam difficulty, if not a shade more difficult.

The exam itself was very straightforward. My experience was that the questions were not overly verbose and were straight to the point compared to the practice exams I took. I was able to quickly narrow down the questions and make a selection. Flagged 8 questions along the way and had 30 min to review all my answers. Unlike some people, I didn’t feel like it was a brain melter and actually enjoyed the challenge. Maybe I’m a masochist, who knows.

Advice: Follow Neal’s plan, bone up on weak areas, and be confident. These questions have a pattern based on the domain. Doing the practice exams enough will let you see the pattern, and then research will confirm your suspicions. You can pass this exam!

Good luck to those preparing now and god speed.

 
AWS Developer Associate DVA-C01 Exam Prep
 
 
 

I Passed AWS Developer Associate Certification DVA-C01 Testimonials

AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers

Passed DVA-C01

Passed the Certified Developer Associate this week.

Primary study was Stephane Maarek’s course on Udemy.

I also used the Practice Exams by Stephane Maarek and Abhishek Singh.

I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.

The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.

Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.

Cleared AWS Certified Developer – Associate (DVA-C01)

 

I cleared the Developer Associate exam yesterday. I scored 873.
Actual exam experience: questions focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (must know the proper difference between user pools and identity pools).
3 questions were just on Redis vs Memcached (so maybe focus more here to know the exact use cases and differences). Other topics were CloudFormation, Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me. Some questions were one-liners and some were too long.

Resources: The main resource I used was Udemy: the course by Stéphane Maarek and the practice exams of Neal Davis and Stéphane Maarek. These exams proved really good and even helped me focus on the areas where I was lacking. They are up to the level of the actual exam; I found 3-4 of the exact same questions on the actual exam (this might just be luck!). So I feel Stephane’s course is more than sufficient and you can trust it. I had achieved the Solutions Architect Associate previously, so I knew the basics, and I took around 2 weeks for preparation, revising Stephane’s course as much as possible. In parallel I took the mentioned practice exams as well, which guided me on where to focus more.

Thanks to all of you and feel free to comment/DM me, if you think I can help you in anyway for achieving the same.

Another Passed Associate Developer Exam (DVA-C01)

I had already passed the Associate Architect exam (SAA-C03) 3 months ago, so I was much more relaxed going into this exam. I did the exam with Pearson VUE at home with no problems. Used Adrian Cantrill for the course together with the TD exams.

Studied 2 weeks at 1-2 hours a day, since there is a big overlap with the Associate Architect course, even though the exam has a different approach, more focused on the serverless side of AWS. Lots of DynamoDB, Lambda, API Gateway, KMS, CloudFormation, SAM, SSO, Cognito (user pools and identity pools), and IAM role/credential best practices.

I do think in terms of difficulty it was a bit easier than the Associate Architect, though maybe that’s just in my head, since it was my second exam and I went in a bit more relaxed.

Next step is going for the Associate SysOps. I will use the Adrian Cantrill and Stephane Maarek courses, as it is said to be the most difficult associate exam.

Passed the SCS-C01 Security Specialty

A mixture of Tutorials Dojo practice exams, the A Cloud Guru course, and the Neal Davis course & exams helped a lot. Some unexpected questions caught me off guard, but with educated guessing, thanks to the material I studied, I was able to overcome them. It’s important to understand:

  1. KMS Keys

    1. AWS Owned Keys

    2. AWS Managed KMS keys

    3. Customer Managed Keys

    4. Asymmetric keys

    5. Symmetric keys

    6. Imported key material

    7. What services can use AWS Managed Keys

  2. KMS Rotation Policies

    1. The rotation that can be applied depends on the key type (if rotation is possible at all)

  3. Key Policies

    1. Grants (temporary access)

    2. Cross-account grants

    3. Permanent policies

    4. How permissions are distributed depending on the assigned principal

  4. IAM Policy format

    1. Principals (supported principals)

    2. Conditions

    3. Actions

    4. Allow to a service (ARN or public AWS URL)

    5. Roles

  5. Secrets Management

    1. Credential Rotation

    2. Secure String types

    3. Parameter Store

    4. AWS Secrets Manager

  6. Route 53

    1. DNSSEC

    2. DNS Logging

  7. Network

    1. AWS Network Firewall

    2. AWS WAF (some questions try to trick you into thinking AWS Shield is needed instead)

    3. AWS Shield

    4. Security Groups (Stateful)

    5. NACL (Stateless)

    6. Ephemeral Ports

    7. VPC FlowLogs

  8. AWS Config

    1. Rules

    2. Remediation (custom or AWS managed)

  9. AWS CloudTrail

    1. AWS Organization Trails

    2. Multi-Region Trails

    3. Centralized S3 Bucket for multi-account log aggregation

  10. AWS GuardDuty vs AWS Macie vs AWS Inspector vs AWS Detective vs AWS Security Hub
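To tie together the IAM policy items above (principals, actions, conditions), here is a minimal sketch of a policy document as a Python dict; the account ID, role name, and bucket names are made-up placeholders, not real resources:

```python
import json

# Hypothetical IAM-style policy document illustrating the elements above:
# a Principal, an Action list, Resource ARNs, and a Condition block.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadFromOneRole",
            "Effect": "Allow",
            # Principal: who the statement applies to (made-up account/role)
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/example-reader"},
            # Actions: which API calls are allowed
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # Resources: which ARNs the actions apply to (made-up bucket)
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            # Condition: extra constraints, e.g. require TLS in transit
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

KMS key policies follow the same JSON shape, with the Principal element controlling which accounts and roles can use or administer the key.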

It gets more in-depth; I’m willing to help out anyone who has questions. If you don’t mind joining my Discord, we can discuss with others and help each other out: a study group community. Thanks. I had to repost because of a typo 🙁

https://discord.gg/pZbEnhuEY9

Passed the Security Specialty

Passed Security Specialty yesterday.

Resources used were:

Adrian (for the labs), Jon (for the test bank).

Total time spent studying was about a week due to the overlap with the SA Pro I passed a couple weeks ago.

Now working on getting Networking Specialty before the year ends.

My longer term goal is to have all the certs by end of next year.

 

Advanced Networking – Specialty

Passed AWS Certified advanced networking – Specialty ANS-C01 2 days ago

 

This was a tough exam.

Here’s what I used to get prepped:

Exam guide book by Kam Agahian and a group of authors – this just got released and has all you need in a concise manual. It also includes 3 practice exams. It’s a must-buy for future reference and covers ALL current exam topics, including container networking, SD-WAN, etc.

Stephane Maarek’s Udemy course – it is mostly up to date with the main exam topics, including TGW, Network Firewall, etc. To-the-point lectures with lots of hands-on demos, which gives you just what you need. Highly recommended as well!

Tutorials Dojo practice tests to drive it home – these helped me get an idea of the question wording, so I could train myself to read fast, pick out key words, compare similar answers, and build confidence in my knowledge.

Crammed daily for 4 weeks (after work; I have a full-time job + family), then went in and nailed it. I do have a networking background (15+ years) and currently work as a cloud security engineer, working with AWS daily, especially EKS, TGW, GWLB, etc.

For those not from a networking background – it would definitely take longer to prep.

Good luck!

 
 
 
 
Azure Fundamentals AZ900 Certification Exam Prep

 

Passed AZ-900, SC-900, AI-900, and DP-900 within 6 weeks!

 
Achievement Celebration

What an exciting journey. I think AZ-900 was the hardest, probably because it was my first Microsoft certification. Afterwards, the others were fair enough. AI-900 was the easiest.

I generally used Microsoft Virtual Training Day, Cloud Ready Skills, MeasureUp, and John Savill’s videos. Having built fundamental knowledge of the cloud, I am planning to do the AWS CCP next. Wish me luck!

Passed Azure Fundamentals

 
Learning Material

Hi all,

I passed my Azure Fundamentals exam a couple of days ago with a score of 900/1000. I’d been meaning to take the exam for a few months but kept putting it off for various reasons. The exam was a lot easier than I thought, and easier than the official Microsoft practice exams.

Study materials;

  • A Cloud Guru AZ-900 fundamentals course with practice exams

  • Official Microsoft practice exams

  • MS learning path

  • John Savill’s AZ-900 study cram, started this a day or two before my exam. (Highly Recommended) https://www.youtube.com/watch?v=tQp1YkB2Tgs&t=4s

Will be taking my AZ-104 exam next.

Azure Administrator AZ104 Certification Exam Prep

Passed AZ-104 with about a 6 weeks prep

 
Learning Material

Resources =

John Savill’s AZ-104 Exam Cram + Master Class; Tutorials Dojo practice exams

John’s content is the best out there right now for this exam IMHO. I watched the cram, then the entire master class, followed by the cram again.

The Tutorials Dojo practice exams are essential. Some questions on the actual exam were almost word-for-word what I saw on the practice exams.

Question:

What’s everyone using for the AZ-305? Obviously, already using John’s content, and from what I’ve read the 305 isn’t too bad.

Thoughts?

Passed the AZ-140 today!!

 
Achievement Celebration

I passed the (updated?) AZ-140, AVD specialty exam today with an 844. First MS certification in the bag!

Edited to add: This video series from Azure Academy was a TON of help.

https://youtube.com/playlist?list=PL-V4YVm6AmwW1DBM25pwWYd1Lxs84ILZT

Passed DP-900

 
Achievement Celebration

I am pretty proud of this one. Databases are an area of IT where I haven’t spent a lot of time, and what time I have spent has been with SQL or MySQL with old school relational databases. NoSQL was kinda breaking my brain for a while.

Study Materials:

  1. Microsoft Virtual Training Day, got the voucher for the free exam. I know several people on here said that was enough for them to pass the test, but that most certainly was not enough for me.

  2. Exampro.co DP-900 course and practice test. They include virtual flashcards which I really liked.

  3. Whizlabs.com practice tests. I also used the course to fill in gaps in my testing.

Passed AI-900! Tips & Resources Included!!

Azure AI Fundamentals AI-900 Exam Prep
 
Achievement Celebration

Huge thanks to this subreddit for helping me kick start my Azure journey. I have over 2 decades of experience in IT and this is my 3rd Azure certification as I already have AZ-900 and DP-900.

Here’s the order in which I passed my AWS and Azure certifications:

SAA>DVA>SOA>DOP>SAP>CLF|AZ-900>DP-900>AI-900

I had no plans to take this certification now, but had to since the free voucher was expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can.

Here’s my study plan for AZ-900 and DP-900 exams:

  • finish a popular video course aimed at the cert

  • watch John Savill’s study/exam cram

  • take multiple practice exams scoring in 90s

This is what I used for AI-900:

  • Alan Rodrigues’ video course (includes 2 practice exams) 👌

  • John Savill’s study cram 💪

  • practice exams by Scott Duffy and in 28Minutes Official 👍

  • knowledge checks in AI modules from MS learn docs 🙌

I also found the notes below to be extremely useful as a refresher. They can be played multiple times throughout your preparation, as the exam cram part is just around 20 minutes.

https://youtu.be/utknpvV40L0 👏

Just be clear on the topics explained in the above video and you’ll pass AI-900. I advise you to watch it at the start, middle, and end of your preparation. All the best on your exam!

Just passed AZ-104

 
Achievement Celebration

I recommend studying networking, as almost all of the questions are related to this topic. Also, AAD is a big one. Lots of load balancers, VNets, NSGs.

Received very little of this:

  • Containers

  • Storage

  • Monitoring

I passed with a 710 but a pass is a pass haha.

I used Tutorials Dojo, but the closest questions I found were in the Udemy practice exams.

Regards,

Passed GCP Professional Cloud Architect

Google Professional Cloud Architect Practice Exam 2022
 

First of all, I’d like to start with the fact that I already have around 1 year of in-depth experience with GCP, working on GKE, IAM, storage, and so on. I also obtained the GCP Associate Cloud Engineer certification back in June, which helped with the preparation.

I started with Dan Sullivan’s Udemy course for Professional Cloud Architect and did some refreshing on the topics I was not familiar with, such as Bigtable, BigQuery, Dataflow, and all that. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture.

In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.

As for practice exam, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak at and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear for the exam.

I used Tutorials Dojo (Jon Bonso) for my Associate Cloud Engineer preparation before, and by comparison I can attest that Whizlabs is not as good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.

One thing to note is that, there wasn’t even a single question that was similar to the ones from Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non case study questions. Many questions focused on App Engine, Data analytics and networking. There were some Kubernetes questions based on Anthos, and cluster networking. I got a tough question regarding storage as well.

I initially thought I would fail, but I pushed on and started tackling the multiple-choices based on process of elimination using the keywords in the questions. 50 questions in 2 hours is a tough one, especially due to the lengthy questions and multiple choices. I do not know how this compares to AWS Solutions Architect Professional exam in toughness. But some people do say GCP professional is tougher than AWS.

All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.

GCP Associate Cloud Engineer Exam Prep

Passed GCP: Cloud Digital Leader

Hi everyone,

First, thanks for all the posts people share. It helps me prep for my own exam. I passed the GCP: Cloud Digital Leader exam today and wanted to share a few things about my experience.

Preparation

I have access to A Cloud Guru (ACG) and Udemy through work. I started one of the Udemy courses first, but it was clear the course went beyond the scope of the Cloud Digital Leader certification. I switched over to ACG and enjoyed the content a lot more. The videos were short and the instructor hit all the topics on the Google exam requirements sheet.

ACG also has three 50-question practice tests. The practice tests are harder than the actual exam (and the practice tests aren’t that hard).

I don’t know if someone could pass the exam just by watching the videos on Google Cloud’s certification site, especially with no prior GCP experience.

Overall, I would say I spent 20 hours preparing for the exam. I have my CISSP and I’m working on my CCSP. After taking the test, I realized I had way over-prepared.

Exam Center

It was my first time at this testing center, and I wasn’t happy with the experience. A few of the issues I had:

– My personal items (phone, keys) were placed in an unlocked filing cabinet

– My desk area was dirty. There were eraser shreds (or something similar), and I had to move the keyboard and mouse to brush all the debris out of my workspace

– The laminated sheet they gave me looked like someone had spilled Kool-Aid on it

– They only offered earplugs, instead of noise cancelling headphones

Exam

My recommendation for the exam is to know the Digital Transformation piece as well as you know all the GCP services and what they do.

I wish you all luck on your future exams. Onto GCP: Associate Cloud Engineer.

Passed the Google Cloud: Associate Cloud Engineer

Hey all, I was able to pass the Google Cloud: Associate Cloud Engineer exam in 27 days.

I studied about 3-5 hours every single day.

I created this note to share the resources I used to pass the exam.

Happy studying!

GCP ACE Exam Aced

Hi folks,

I am glad to share that I cleared my GCP ACE exam today, and I would like to share my preparation with you:

1) I completed these courses from Coursera:

1.1 Google Cloud Platform Fundamentals – Core Infrastructure

1.2 Essential Cloud Infrastructure: Foundation

1.3 Essential Cloud Infrastructure: Core Services

1.4 Elastic Google Cloud Infrastructure: Scaling and Automation

After these courses, I did a couple of Qwiklabs quests, listed here in order:

2 Getting Started: Create and Manage Cloud Resources (Qwiklabs Quest)

   2.1 A Tour of Qwiklabs and Google Cloud

   2.2 Creating a Virtual Machine

   2.3 Compute Engine: Qwik Start – Windows

   2.4 Getting Started with Cloud Shell and gcloud

   2.5 Kubernetes Engine: Qwik Start

   2.6 Set Up Network and HTTP Load Balancers

   2.7 Create and Manage Cloud Resources: Challenge Lab

 3 Set up and Configure a Cloud Environment in Google Cloud (Qwiklabs Quest)

   3.1 Cloud IAM: Qwik Start

   3.2 Introduction to SQL for BigQuery and Cloud SQL

   3.3 Multiple VPC Networks

   3.4 Cloud Monitoring: Qwik Start

   3.5 Deployment Manager – Full Production [ACE]

   3.6 Managing Deployments Using Kubernetes Engine

   3.7 Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab

 4 Kubernetes in Google Cloud (Qwiklabs Quest)

   4.1 Introduction to Docker

   4.2 Kubernetes Engine: Qwik Start

   4.3 Orchestrating the Cloud with Kubernetes

   4.4 Managing Deployments Using Kubernetes Engine

   4.5 Continuous Delivery with Jenkins in Kubernetes Engine

After these courses, I did the following for mock-exam preparation:

  1. Jon Bonso’s Tutorials Dojo GCP ACE practice exams

  2. Udemy course:

https://www.udemy.com/course/google-associate-cloud-engineer-practice-exams-2021-d/learn/quiz/5278722/results?expanded=591254338#overview

And yes, folks, this took me 3 months to prepare. So take your time and pace yourself.


Comparison of AWS vs Azure vs Google

Cloud computing has revolutionized the way companies develop applications. Most modern applications are now cloud-native. Undoubtedly, the cloud offers immense benefits like reduced infrastructure maintenance, increased availability, and cost reduction, among many others.

However, which cloud vendor to choose is a challenge in itself. If we look at the horizon of cloud computing, the three main providers that come to mind are AWS, Azure, and Google Cloud. Today, we will compare the top three cloud giants and see how they differ. We will compare their services, specialties, and pros and cons. After reading this article, you will be able to decide which cloud vendor is best suited to your needs and why.

History and establishment

AWS

AWS is the oldest player in the market, operating since 2006. Being the first in the cloud industry, it has gained a particular advantage over its competitors. It offers more than 200 services to its users. Some of its notable clients include:

  • Netflix
  • Expedia
  • Airbnb
  • Coursera
  • FDA
  • Coca-Cola

Azure

Azure by Microsoft started in 2010. Although it started four years later than AWS, it is catching up quite fast. Azure is Microsoft’s public cloud platform, which is why many companies prefer Azure for their Microsoft-based applications. It also offers more than 200 services and products. Some of its prominent clients include:

  • HP
  • Asus
  • Mitsubishi
  • 3M
  • Starbucks
  • Centers for Disease Control and Prevention (CDC), USA
  • National Health Service (NHS), UK

Google

Google Cloud also started in 2010. Its arsenal of cloud services is relatively small compared to AWS or Azure; it offers around 100 services. However, its services are robust, and many companies embrace Google Cloud for its specialty services. Some of its noteworthy clients include:

  • PayPal
  • UPS
  • Toyota
  • Twitter
  • Spotify
  • Unilever

Market share & growth rate

If you look at the market share and growth chart below, you will notice that AWS has been leading for more than four years. Azure is also expanding fast, but it still has a long way to go to catch up with AWS.

However, in terms of revenue, Azure is ahead of AWS. In Q1 2022, AWS revenue was $18.44 billion, Azure earned $23.4 billion, and Google Cloud earned $5.8 billion.

Availability Zones (Data Centers)

When comparing cloud vendors, it is essential to see how many regions and availability zones are offered. Here is a quick comparison between all three cloud vendors in terms of regions and data centers:

AWS

AWS operates in 25 regions and 81 availability zones. It also offers 218+ edge locations and 12 regional edge caches. You can utilize the edge locations and edge caches in services like Amazon CloudFront and AWS Global Accelerator.

Azure

Azure has 66 regions worldwide and a minimum of three availability zones in each region. It also offers more than 116 edge locations.

Google

Google has a presence in 27 regions and 82 availability zones. It also offers 146 edge locations.

All three cloud giants are continuously expanding. Both AWS and Azure offer data centers in China specifically to cater to Chinese consumers. At the same time, Azure seems to have broader coverage than its competitors.
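To make the footprint figures above easier to compare at a glance, here is a minimal sketch that tabulates them. The numbers are the ones quoted in this article and will drift as all three vendors expand; Azure’s availability zones are quoted per-region rather than as a total, so that field is left unknown.

```python
# Region/edge-location figures as quoted above (snapshot, not current data).
FOOTPRINT = {
    "AWS":    {"regions": 25, "availability_zones": 81,   "edge_locations": 218},
    "Azure":  {"regions": 66, "availability_zones": None, "edge_locations": 116},
    "Google": {"regions": 27, "availability_zones": 82,   "edge_locations": 146},
}

def widest_coverage(metric):
    """Return the vendor with the largest value for a metric, skipping unknowns."""
    known = {vendor: stats[metric]
             for vendor, stats in FOOTPRINT.items()
             if stats[metric] is not None}
    return max(known, key=known.get)

print(widest_coverage("regions"))         # Azure
print(widest_coverage("edge_locations"))  # AWS
```

As the comparison suggests, which vendor "wins" depends on the metric: Azure leads on region count, while AWS leads on edge locations.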

Comparison of common cloud services

Let’s look at the standard cloud services offered by these vendors.

Compute

Amazon’s primary compute offering is EC2 instances, which are very easy to operate. Amazon also provides a low-cost option called Amazon Lightsail, a perfect fit for those who are new to computing and have a limited budget. AWS charges for EC2 instances only while you are using them. Azure’s compute offering is also based on virtual machines, and Google is no different, offering virtual machines in Google’s data centers. Here’s a brief comparison of the compute offerings of all three vendors:
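One practical consequence of the pay-per-use model mentioned above is that a VM that runs only during working hours costs a fraction of one left on around the clock. A rough sketch of the arithmetic (the hourly rates below are hypothetical placeholders, not published prices for any vendor):

```python
# Pay-per-use billing: you pay only for the hours an instance actually runs.
# These hourly rates are hypothetical placeholders, not real price-list values.
HOURLY_RATE = {
    "small_vm": 0.04,
    "medium_vm": 0.08,
}

def monthly_cost(instance, hours_running):
    """Cost of a VM that ran for `hours_running` hours this month."""
    return round(HOURLY_RATE[instance] * hours_running, 2)

always_on = monthly_cost("small_vm", 24 * 30)    # running around the clock
office_hours = monthly_cost("small_vm", 8 * 22)  # 8 h/day, 22 working days
print(always_on, office_hours)  # 28.8 7.04
```

Stopping instances outside working hours cuts the bill by roughly 75% in this toy example, which is why all three vendors encourage scheduling and autoscaling.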

Storage

All three vendors offer various forms of storage, including object-based storage, cold storage, file-based storage, and block-based storage. Here’s a brief comparison of all three:

Database

All three vendors support managed database services, including NoSQL as well as document-based databases. AWS also provides a proprietary RDBMS named Aurora, a fast and highly scalable database offering compatible with both MySQL and PostgreSQL. Here’s a brief comparison of all three vendors:

Comparison of Specialized services

All three major cloud providers are competing with each other in the latest technologies. Some notable areas of competition include ML/AI, robotics, DevOps, IoT, VR/Gaming, etc. Here are some of the key specialties of all three vendors.

AWS

Being the first mover in the cloud market has many benefits, and Amazon has certainly taken advantage of that. Amazon has advanced specifically in AI and machine learning tools. AWS DeepLens is an AI-powered camera that you can use to develop and deploy machine learning algorithms; it helps with OCR and image recognition. Similarly, Amazon has launched an open-source library called Gluon, which helps with deep learning and neural networks. You can use this library to learn how neural networks work, even if you lack a technical background. Another service Amazon offers is SageMaker, which you can use to train and deploy your machine learning models. Amazon’s portfolio also includes Lex, the conversational interface that powers Alexa, as well as Lambda and the Greengrass IoT messaging service.

Another unique (and recent) offering from AWS is IoT TwinMaker. This service can create digital twins of real-world systems like factories, buildings, and production lines.

AWS even provides a quantum computing service called Amazon Braket.

Azure

Azure excels where you are already using Microsoft products, especially on-premises. Organizations already invested in Microsoft products often prefer Azure over other cloud vendors because it offers more robust integration with the Microsoft ecosystem.

Azure has excellent services related to ML/AI and cognitive services. Some notable services include the Bing Web Search API, Face API, Computer Vision API, and Text Analytics API.

Google

Google is the current leader among cloud providers in AI, largely thanks to TensorFlow, its open-source library that is the most popular choice for developing machine learning applications. Vertex AI and BigQuery Omni are also valuable recent additions. Similarly, Google offers rich services for NLP, translation, speech, and more.

Pros and Cons

Let’s summarize the pros and cons for all three cloud vendors:

AWS

Pros:

  • An extensive list of services
  • Huge market share
  • Support for large businesses
  • Global reach

Cons:

  • Pricing model: many companies struggle to understand the cost structure. Although AWS has improved the UX of its cost reporting in the AWS console, many companies still hesitate to use AWS because of a perceived lack of cost transparency

Azure

Pros:

  • Excellent integration with Microsoft tools and software
  • Broader feature set
  • Support for open source

Cons:

  • Geared towards enterprise customers

Google

Pros:

  • Strong integration with open source tools
  • Flexible contracts
  • Good DevOps services
  • The most cost-efficient
  • The preferred choice for startups
  • Good ML/AI-based services

Cons:

  • A limited number of services as compared to AWS and Azure
  • Limited support for enterprise use cases

Career Prospects

Keen to learn which vendor’s cloud certification you should go for? Here is a brief comparison of the top three cloud certifications and their related career prospects:

AWS

As mentioned earlier, AWS has the largest market share among cloud vendors. That means more companies are using AWS, and there are more vacancies in the market for AWS-certified professionals. Here are the main reasons why you would choose to learn AWS:

  • The largest market share means more job openings for AWS-certified professionals
  • AWS certifications are the most sought-after in the market

Azure

Azure is the second largest cloud service provider. It is ideal for companies that are already using Microsoft products. Here are the top reasons why you would choose to learn Azure:

  • Ideal for experienced users of Microsoft services
  • Azure certifications rank among the top paying IT certifications
  • If you’re applying for a company that primarily uses Microsoft Services

Google

Although Google is considered an underdog in the cloud market, it is slowly catching up. Here’s why you may choose to learn GCP.

  • While there are fewer job postings, there is also less competition in the market
  • GCP certifications rank among the top paying IT certifications

Most valuable IT Certifications

Keen to learn about the top-paying cloud certifications and jobs? If you look at the annual salary figures below, you can see the average salaries for different cloud vendors and IT companies; no wonder AWS is on top. A GCP Cloud Architect is also one of the top five, and the Azure architect comes in at #9.

Which cloud certification to choose depends mainly on your career goals and what type of organization you want to work for. No cloud certification path is better than the other. What matters most is getting started and making progress towards your career goals. Even if you decide at a later point in time to switch to a different cloud provider, you’ll still benefit from what you previously learned.

Over time, you may decide to get certified in all three – so you can provide solutions that vary from one cloud service provider to the next.

Don’t get stuck in analysis-paralysis! If in doubt, simply get started with AWS certifications that are the most sought-after in the market – especially if you are at the very beginning of your cloud journey. The good news is that you can become an AWS expert when enrolling in our value-packed training.

Further Reading

You may also be interested in the following articles:

  • https://digitalcloud.training/entry-level-cloud-computing-jobs-roles-and-responsibilities/
  • https://digitalcloud.training/aws-vs-azure-vs-google-cloud-certifications-which-is-better/
  • https://digitalcloud.training/10-tips-on-how-to-enter-the-cloud-computing-industry/
  • https://digitalcloud.training/top-paying-cloud-certifications-and-jobs/
  • https://digitalcloud.training/are-aws-certifications-worth-it/

Source:

https://digitalcloud.training/comparison-of-aws-vs-azure-vs-google/


Get it on Apple Books

  • Free Google Cloud Associate Cloud Engineer, Professional Cloud Architect, and Security Engineer practice tests
    by /u/Just_Reaction_4469 (Google Cloud Platform Certification) on March 18, 2024 at 5:31 pm

    https://karaniph.com/google-cloud-associate-cloud-engineer-free-practice-test/ https://karaniph.com/google-cloud-professional-architect-free-practice-test/ https://karaniph.com/google-cloud-professional-cloud-security-engineer-practice-test/

  • The future of infrastructure modernization: how Google Cloud Innovators are embracing the cloud
    by (Training & Certifications) on March 18, 2024 at 4:00 pm

    Google Cloud Champion Innovators is a global network of roughly 600 external professionals who are technical experts in Google Cloud products and services. Each Champion specializes in one of nine different technical categories including Cloud AI/ML, Data Analytics, Databases, Hybrid Multi-Cloud, Modern Architecture, Security and Networking, Serverless App Development, Storage, or Google Workspace. In this ongoing interview series we sit down with Champion Innovators across the world to learn more about their journeys, technology focus, and what excites them. Today we're talking to Rohan Singh, a Senior Cloud Infrastructure Engineer at SADA - An Insight Company, and a Google Cloud Champion Innovator specializing in Modern Architecture. He focuses on infrastructure modernization and migration, applying his expertise to help organizations embrace the transformative power of cloud technologies. Natalie Tack: What technology area are you most fascinated with, and why? Rohan Singh: Quite simply, I'm passionate about the cloud and DevOps: every day brings fresh developments, and there's always something new to learn. As a Senior Cloud Infrastructure Engineer, my work involves helping clients with their application infrastructure, focusing on strategy planning and choosing the best approach to their specific modernization and migration needs. A lot of companies are still working with legacy infrastructure, and finding the right strategy for them is a challenge that I really enjoy. Containerization is really big at the moment, so I’m doing a lot of work with Google Kubernetes Engine (GKE) and Anthos and using Migrate for Compute Engine to transition VMs from on-premises to the cloud. NT: What are some upcoming trends you’re enthusiastic about? RS: I’m seeing more and more interest in Infrastructure as Code (IaC). IaC enables you to increase your level of control over infrastructure implementation and design and improves resource management and code versioning.
Google recently launched Infrastructure Manager, which uses Terraform and allows you to manage your Google Cloud infrastructure through IaC. As a Champion Innovator, I attended an internal session with the Google team where they presented the product and answered any queries we had. This was really useful, as I believe IaC is set to be a significant shift for the industry.  I’m also really excited about the potential of generative AI. As a Champion Innovator, I was granted early access to Duet AI in Google Cloud, which I see as a sort of personal assistant who’s on call 24/7 to answer my queries and help me solve any code challenges. I'm looking forward to finding out more about the potential integration of generative AI with Google Cloud’s native DevOps tools for infrastructure and applications. It’s early days, but the potential to leverage generative AI to deliver improved solutions to clients, enhance our solution architecture and troubleshoot issues is thrilling.  NT: How do you like to learn new services, applications and technologies? RS: My main resources are Google Cloud blogs, documentation, and my company's internal channels, where we continuously discuss cloud migration and modernization. Google Cloud release notes are great as the information is coming straight from the source. Google Cloud blog articles and Medium posts are also really useful. The Innovators program has gathered a wealth of technical expertise from Google Cloud itself and other Innovators around the world.  Being an Innovator also gives me the opportunity to practice extensively on Google Cloud, which is great as I’m a strong believer in learning by doing. As soon as I find out about something new, I like to put it into practice straight away. But while passion is crucial, so is paying attention to your wellbeing. I focus on my mental health because it keeps me and my family happy and enhances my productivity. 
When my mind is at peace, I can think more clearly and provide better solutions to our clients. NT: How has being part of the Innovators program impacted your personal and professional development? RS: Access to internal and Innovators-only training and certifications has really helped shape my approach to infrastructure modernization and migration. Another huge advantage is the robust community support. You’re connected with dozens of cloud enthusiasts and seasoned professionals who are there to support you.  Joining the program has also really expanded my horizons. I come from a small town where there aren’t many opportunities to interact with outsiders. Being an Innovator has boosted my confidence as well as my interpersonal and communication skills. I've also learned to articulate complex technical concepts in simple terms, which is really useful when working with non-technical colleagues and clients.  Having “Google Cloud Champion Innovator for Modern Architecture” in your bio also makes a real difference. People pay far more attention to what you’re saying! Since joining the program, my global reach on LinkedIn and Medium, and the queries I receive, have increased threefold.  NT: How important is community to you?  RS: Very important! I’m a strong advocate for community engagement. Cloud communities and programs like Innovators emphasize collaboration and knowledge sharing and connect people from diverse backgrounds. Regular interactions with others provide insights into challenges we’re all facing and are a great way to gain perspective.  I actively seek to share my stories and experiences and help people on the Google Cloud community page and via my Medium blog, where I run an interview series called “Silly Sit-Downs(SSD) with Rohan” with people from the industry irrespective of their domain. Video and podcasts are really big these days, but I truly believe in the power of reading and writing. 
Reading sparks the imagination, and it’s accessible to people everywhere as it doesn’t require a strong internet connection. I see myself as a mentor now, communicating globally without barriers. I’m not interested in follower numbers per se, but when I get messages from people saying my content has helped them it really makes my day.  NT: What advice would you give to budding Innovators?  RS: Pursue knowledge and skills, not titles. Don't choose a technology like cloud computing just because your friends are doing it or because you think you can make good money. Explore it first and if it interests you, start with the basics. Don't stress yourself out with unrealistic expectations, such as becoming a cloud expert in six months. Take your time to learn and grow at your own pace, stay active, travel, learn new things beyond technology, talk to people, and take breaks when necessary. Just getting started is difficult in its own right. For even more insight into how organizations are innovating with generative AI, check out our growing Innovators community – we appreciate ideas from like-minded Google Cloud users who want to take their journey to the next level.

  • A pass is a pass. DP-600
    by /u/Raging-Loner (Microsoft Azure Certifications) on March 18, 2024 at 1:21 pm


  • MS-102 vs SC-400
    by /u/lighthills (Microsoft Azure Certifications) on March 18, 2024 at 1:44 am

    How does DLP and Compliance content compare between the two? What are the main differences between the purposes of the certifications?

  • Where to take practice exams for Az-900 fundamentals?
    by /u/elipr787 (Microsoft Azure Certifications) on March 18, 2024 at 12:12 am

    I finished the Microsoft modules and started watching John Savill's course on YouTube, but I have a lot of experience with Azure in my job and want to start taking practice exams. Where can I take practice exams that are closest to the real thing?

  • Is it possible for someone to determine if I used accommodation or not?
    by /u/Alex_df_300 (Microsoft Azure Certifications) on March 17, 2024 at 6:30 pm

    Exams are not available in my native language, and according to this web page I can request additional time as an accommodation. At the same time, I would rather not request that accommodation if someone would be able to determine that I used it during the exam (from information on the certificate or any other way). My question is: is it possible for someone to determine whether or not I used an accommodation?

  • After Az900: AZ104 or Az204 (with a developer background)
    by /u/maujavier91 (Microsoft Azure Certifications) on March 17, 2024 at 3:44 pm

    I'm new to Azure and already passed AZ-900. I want to get an associate cert and am more interested in AZ-204. I'm a software developer and haven't done much system administration. Which of AZ-104 and AZ-204 would be easier to get next? I've read that as a dev it would be easier to get AZ-204, but A Cloud Guru recommends getting AZ-104 first and then going for AZ-204. Is the knowledge from AZ-104 needed to make AZ-204 easier, or vice versa? I plan to get both in the end, but I would like to take them in order of difficulty.

  • How to create AZ-104 and AZ-305 study plans?
    by /u/StarryBlep (Microsoft Azure Certifications) on March 17, 2024 at 2:29 pm

    Hello! I’m currently working on creating a 1-2 month study plan for AZ-104 and AZ-305. For context, I don’t have a tech background and am unfamiliar with these things. To clarify, I won’t be the one taking the exams; I’m just drafting the study plans. I’ve read through some of the threads here and did my own research; as I understand it, the self-paced learning paths on the Microsoft website seem to downplay the difficulty of these exams. I’ve also seen a plethora of learning resources like John Savill’s videos, etc. But I'm hoping someone can help me figure out how to jumpstart the plan, or how to structure it? TIA!

  • DP100 - Sources for study
    by /u/dorkmotter (Microsoft Azure Certifications) on March 17, 2024 at 7:10 am

    I am preparing for DP-100. I saw on the internet that MS Learn, Coursera, and Udemy are all outdated. I am really stressed and confused. Is there a clear resource that can be used to prepare for the exam?

  • Student discount help
    by /u/PXE590t (Microsoft Azure Certifications) on March 17, 2024 at 1:55 am

    I’ve posted in the Microsoft support forum and they just repeat the same robotic answer. Just wondered if anyone has had this issue? I verified myself as a student, but it doesn’t show the discount at checkout. Does AZ-900 not get a discount? I’ve attached a few screenshots; maybe I’m missing something? https://imgur.com/a/gNkcFal https://imgur.com/a/gekNktb https://imgur.com/a/75LbItV https://imgur.com/a/wwaR8t5

  • Practice Exam Providers
    by /u/T-cona204 (Microsoft Azure Certifications) on March 16, 2024 at 8:11 pm

    In the end, the exam prep providers do a decent job with the practice exams they offer. I find they are worth purchasing, and the time I spend on these providers' test platforms has helped me pass every exam I wrote on the first attempt. However, I find that some of the things they add to the practice exams are not needed. Maybe they do this to pad numbers; it is impressive in advertising to offer up to 300+ prep exam questions, only to discover that 10-15% of the questions are on topics the current cert exam does not cover. I offer this post for others to add their impressions of and feedback on the exam prep providers they have used. Here's hoping we can help each other find the best materials from the best sources, for anybody from a new IT tech to a veteran.

  • Azure Admin Certified!
    by /u/Mammoth_Cry_711 (Microsoft Azure Certifications) on March 16, 2024 at 5:01 pm

    Successfully passed the AZ-104 Azure Administrator exam on my first attempt, with over 3 years of experience. Tutorials Dojo was by far and away the best resource I’ve used in attempting this feat. Scott Duffy’s course on Udemy is also a great resource for those who don’t know where or how to start, and it’s a nice refresher for those currently in the field.

  • Feel like I’m vastly underestimating ms-102
    by /u/ITnewb30 (Microsoft Azure Certifications) on March 16, 2024 at 4:50 pm

    I recently finished John Christopher’s MS-102 course on Udemy. I understand this is only one source, but the entire time I felt like I already knew how to do everything he was covering in the course. I’ve worked in Azure/365 for two years and do a lot of labbing on top of it. I’m not a stranger to IT certifications: I have a lot of CompTIA certs, plus ITIL, SSCP, and another Linux cert. I have never taken a Microsoft cert, so I don’t really know what to expect. Are there better sources that anyone else used to study for this exam? I plan to go through the Learn material too; I just wanted to do a third-party course first.

  • SC-300 Unlocked!
    by /u/slash0514 (Microsoft Azure Certifications) on March 16, 2024 at 9:45 am

    Just passed SC-300. Ready for SC-100.

  • Passed the AZ-305 exam
    by /u/egbur (Microsoft Azure Certifications) on March 16, 2024 at 5:10 am

    After doing a bit of Azure-related work following my AZ-104 last month, I decided to take the plunge and do my AZ-305 while the content was still fresh in my memory. I was really not confident because there is so much more content to learn, but I am glad that I did. My score was 880, and the speed at which I could answer questions that were very similar to the ones you would expect from AZ-104 definitely helped. My test had 53 questions and started with a case study, which I think was 6 or 7 questions. I finished everything with about 50 minutes to spare, and I used another 15 to review the ones I wasn't so sure about. I wasn't aiming for a perfect score, and I do wonder which ones I did not get right, but I did change a few answers based on what I found in MS Learn, so the extra time was useful. Resources-wise, I completed the Microsoft Challenge by doing all the modules it asked. This came with a bonus 50% discount on the exam cost. Aside from that, I also did the Microsoft Press course on LinkedIn Learning; I have free access to that through my local library, so that was helpful. I also did some practice exams from TD and MS Learn, and of course viewed the study cram and DP900 videos by u/JohnSavill, which are both great. Now I might take a breather before I decide what to focus on next.

  • AZ-104 Labs & ARM Template on GitHub
    by /u/MARS822a (Microsoft Azure Certifications) on March 15, 2024 at 8:36 pm

    It's very possible that I'm missing something, but I believe this ARM template on GitHub for this lab is wrong. Am I crazy, or does it only create one VNet while the lab requires three? Some of the resource names are incorrect too. I went back through the commits, and three weeks ago a template was deleted that appeared to contain the correct VNets. Am I just missing something, or is the template actually broken? TIA for any insights!

  • Shifting to Microsoft
    by /u/sevenfoldnomad (Microsoft Azure Certifications) on March 15, 2024 at 8:14 pm

    Hi, I am a network engineer with 6 years of experience in both project implementation and operations support. I moved to a new area, and it seems that networking-related jobs are not in demand here (I checked on job sites). I am seeing tech support/sysadmin roles (AD, DNS, DHCP, Azure). I don't have any experience in desktop support or sysadmin jobs. I took MD-102 since it is an entry-level Microsoft certification, thinking it could also help me land a job here in my area. I am planning to take AZ-104. My question is: is it worth taking AZ-104, considering that it will be a mismatch with my background and experience, and that employers may question why I have an Azure cert and yet zero experience?

  • Certification Renewal Exam
    by /u/BA-94 (Microsoft Azure Certifications) on March 15, 2024 at 6:15 pm

    Has anybody else found the certification renewal assessment more difficult than the original exam? It took me three attempts to renew my AZ-104 certification.

  • Clarifying user administrator role permissions - Can a user administrator add devices to a resource group? Asking because of a question on whizlabs for a practice AZ 104 exam which is saying they can in the solutions...
    by /u/grmahs (Microsoft Azure Certifications) on March 15, 2024 at 3:08 pm


  • Taking the online exam overseas
    by /u/FrozenFury12 (Microsoft Azure Certifications) on March 15, 2024 at 11:25 am

    Does anyone have experience taking the exam overseas using Pearson VUE? Since it says "Price based on the country or region in which the exam is proctored," would I be paying the price for where I am currently located, or should I enter my home address?


Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Google Cloud Associate Engineer — $145,769/year
Azure/Microsoft Cloud Solution Architect — $141,748/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year


Djamgatech: AI Driven Continuing Education and Certification Preparation Platform


Djamgatech – Multilingual and Platform Independent Cloud Certification and Education App for AWS Azure Google Cloud

Djamgatech is the ultimate Cloud Education Certification App. It is an EduFlix App for AWS, Azure, Google Cloud Certification Prep, School Subjects, Python, Math, SAT, etc. [Android, iOS]

Technology is changing and moving towards the cloud. The cloud will power most businesses in the coming years, yet it is not taught in schools. How do we ensure that our kids, our youth, and we ourselves are best prepared for this challenge?

Building mobile educational apps that work offline and on any device can help greatly in that sense.

The ability to tap a button, learn the cloud fundamentals, and take quizzes is a great opportunity to help our children and youth boost their job prospects and be more productive at work.


The App covers the following certifications:
AWS Cloud Practitioner Exam Prep CCP CLF-C01, Azure Fundamentals AZ 900 Exam Prep, AWS Certified Solution Architect Associate SAA-C02 Exam Prep, AWS Certified Developer Associate DVA-C01 Exam Prep, Azure Administrator AZ 104 Exam Prep, Google Associate Cloud Engineer Exam Prep, Data Analytics for AWS DAS-C01, Machine Learning for AWS and Google, AWS Certified Security – Specialty (SCS-C01), AWS Certified Machine Learning – Specialty (MLS-C01), Google Cloud Professional Machine Learning Engineer and more… [Android, iOS]

Djamgatech: Multilingual and Platform Independent Cloud Certification and Education App for AWS, Azure, Google Cloud

The App covers the following cloud categories:

AWS Technology, AWS Security and Compliance, AWS Cloud Concepts, AWS Billing and Pricing , AWS Design High Performing Architectures, AWS Design Cost Optimized Architectures, AWS Specify Secure Applications And Architectures, AWS Design Resilient Architecture, Development With AWS, AWS Deployment, AWS Security, AWS Monitoring, AWS Troubleshooting, AWS Refactoring, Azure Pricing and Support, Azure Cloud Concepts , Azure Identity, governance, and compliance, Azure Services , Implement and Manage Azure Storage, Deploy and Manage Azure Compute Resources, Configure and Manage Azure Networking Services, Monitor and Backup Azure Resources, GCP Plan and configure a cloud solution, GCP Deploy and implement a cloud solution, GCP Ensure successful operation of a cloud solution, GCP Configure access and security, GCP Setting up a cloud solution environment, AWS Incident Response, AWS Logging and Monitoring, AWS Infrastructure Security, AWS Identity and Access Management, AWS Data Protection, AWS Data Engineering, AWS Exploratory Data Analysis, AWS Modeling, AWS Machine Learning Implementation and Operations, GCP Frame ML problems, GCP Architect ML solutions, GCP Prepare and process data, GCP Develop ML models, GCP Automate & orchestrate ML pipelines, GCP Monitor, optimize, and maintain ML solutions, etc.. [Android, iOS]


AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence (OpenAI, ChatGPT, Google Bard, Generative AI, Discriminative AI, xAI, LLMs, GPUs, Machine Learning, NLP, Prompt Engineering)

Cloud Education and Certification

The App covers the following Cloud Services, Framework and technologies:

AWS: VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, Code Pipeline, Code Deploy, TCO Calculator, SES, EBS, ELB, AWS Autoscaling, RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Braket, AWS Billing and Pricing, Simple Monthly Calculator, cost calculator, EC2 pricing on-demand, IAM, AWS Pricing, Pay As You Go, No Upfront Cost, Cost Explorer, AWS Organizations, Consolidated billing, Instance Scheduler, on-demand instances, Reserved instances, Spot Instances, CloudFront, Workspace, S3 storage classes, Regions, Availability Zones, Placement Groups, Amazon Lightsail, Redshift, EC2 G4ad instances, DAAS, PAAS, IAAS, SAAS, NAAS, Machine Learning, Key Pairs, AWS CloudFormation, Amazon Macie, Amazon Textract, Glacier Deep Archive, 99.999999999% durability, AWS CodeStar, Amazon Neptune, S3 Bucket, EMR, SNS, Desktop as a Service, Amazon EC2 for Mac, Aurora PostgreSQL, Kubernetes, Containers, Cluster.


Azure: Virtual Machines, Azure App Services, Azure Container Instances (ACI), Azure Kubernetes Service (AKS), and Windows Virtual Desktop; Virtual Networks, VPN Gateway, Virtual Network peering, and ExpressRoute; Container (Blob) Storage, Disk Storage, File Storage, and storage tiers; Cosmos DB, Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, and SQL Managed Instance; Azure Marketplace; the Azure consumption-based model; management groups, resources, and resource groups; geographic distribution concepts such as Azure regions, region pairs, and Availability Zones; Internet of Things (IoT) Hub, IoT Central, and Azure Sphere; Azure Synapse Analytics, HDInsight, and Azure Databricks; Azure Machine Learning, Cognitive Services, and Azure Bot Service; serverless computing solutions including Azure Functions and Logic Apps; Azure DevOps, GitHub, GitHub Actions, and Azure DevTest Labs; Azure Mobile; Azure Advisor; Azure Resource Manager (ARM) templates; Azure security, privacy, and workloads; general security and network security; Azure security features; Azure Security Centre, policy compliance, security alerts, secure score, and resource hygiene; Key Vault; Azure Sentinel; Azure Dedicated Hosts; the concept of defense in depth; NSG; Azure Firewall; Azure DDoS protection; identity and governance; Conditional Access, Multi-Factor Authentication (MFA), and Single Sign-On (SSO); core Azure architectural components; Management Groups; Azure Resource Manager.

Google Cloud Platform: Compute Engine, App Engine, BigQuery, Bigtable, Pub/Sub, flow logs, CORS, CLI, pod, Firebase, Cloud Run, Cloud Firestore, Cloud CDN, Cloud Storage, Persistent Disk, Kubernetes Engine, Container Registry, Cloud Load Balancing, Cloud Dataflow, gsutil, Cloud SQL.


Cloud Education Certification: Eduflix App for Cloud Education and Certification (AWS, Azure, Google Cloud) [Android, iOS]

Features:
– Practice exams
– 1000+ Q&A updated frequently.
– 3+ Practice exams per Certification
– Scorecard / Scoreboard to track your progress
– Quizzes with score tracking, progress bar, countdown timer.
– Scoreboard visible only after completing the quiz.
– FAQs for most popular Cloud services
– Cheat Sheets
– Flashcards
– Works offline

Note and disclaimer: We are not affiliated with AWS, Azure, Microsoft, or Google. The questions are put together based on the certification study guides and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed. We are not responsible for any exam you do not pass.

Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps

Google Cloud Platform (GCP) has been a game changer in the tech industry. It allows organizations to build and run applications on Google's infrastructure. The platform is trusted by many companies because it is reliable, secure, and scalable. To become a GCP certified professional, you must pass the Professional Cloud Architect exam. The exam is not easy, but with the right practice questions and answers, you can pass it with flying colors.

Google Certified Cloud Professional Architect is the top high paying certification in the world: Google Certified Professional Cloud Architect Average Salary – $175,761

The Google Certified Cloud Professional Architect Exam assesses your ability to:


  • Design and plan a cloud solution architecture
  • Manage and provision the cloud solution infrastructure
  • Design for security and compliance
  • Analyze and optimize technical and business processes
  • Manage implementations of cloud architecture
  • Ensure solution and operations reliability

The Google Certified Cloud Professional Architect covers the following topics:

Designing and planning a cloud solution architecture: 36%

This domain tests your ability to design a solution infrastructure that meets business and technical requirements and considers network, storage, and compute resources. It also tests your ability to create a migration plan and to envision future solution improvements.



Managing and provisioning a solution Infrastructure: 20%

This domain tests your ability to configure network topologies and individual storage systems, and to design solutions using Google Cloud networking, storage, and compute services.

Designing for security and compliance: 12%


This domain assesses your ability to design for security and compliance by considering IAM policies, separation of duties, and encryption of data, and your ability to design solutions while considering compliance requirements such as those for healthcare and financial information.

Managing implementation: 10%

This domain tests your ability to advise the development and operations teams to ensure successful deployment of your solution. It also tests your ability to interact with Google Cloud using the GCP SDK (gcloud, gsutil, and bq).

Ensuring solution and operations reliability: 6%

This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, quality control measures and by creating release management processes.

Analyzing and optimizing technical and business processes: 16%

This domain tests your ability to analyze and define technical and business processes, and to develop procedures that ensure the resilience of your solutions in production.

Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps that will help you ace the GCP Professional Architect exam:

You will need to have the three case studies referred to in the exam open in separate tabs in order to complete the exam: Company A, Company B, and Company C.

Question 1: Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?

A. Have the vehicles in the field stream the data directly into BigQuery.


B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.

C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.

D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.

ANSWER1:

D

Notes/References1:

D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.

References: Streaming inserts; Apache Hadoop and Spark; 10 tips for building long-running clusters using Cloud Dataproc
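To see why Cloud Storage wins here, a quick back-of-the-envelope comparison helps. The per-GB-month prices below are illustrative assumptions, not current Google Cloud list prices:

```python
# Rough monthly cost of landing ~9 TB/day of raw data, per storage option.
# NOTE: prices are illustrative assumptions, not current list prices.
daily_tb = 9
monthly_gb = daily_tb * 1000 * 30  # decimal GB ingested per month

price_per_gb_month = {
    "cloud_storage_standard": 0.020,  # assumed object-storage price
    "ssd_persistent_disk": 0.170,     # assumed block-storage price
}

monthly_cost = {name: round(monthly_gb * p, 2)
                for name, p in price_per_gb_month.items()}
print(monthly_cost)
```

Whatever the exact prices, object storage is the cheapest per byte, which is the crux of answer D.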


Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?

A. Execute queries against data stored in a Cloud SQL.

B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.


C. Execute queries against data stored on daily partitioned BigQuery tables.

D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.

ANSWER2:

B

Notes/References2:

B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.

References: Bigtable time-series schema design; BigQuery
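A common Bigtable pattern for telemetry like this is a row key of vehicle_id plus a reversed timestamp, so the newest events for a vehicle sort first in a prefix scan. A minimal sketch, where the key format and timestamp ceiling are assumptions for illustration:

```python
MAX_TS = 10**13  # assumed ceiling for millisecond timestamps

def row_key(vehicle_id: str, ts_millis: int) -> str:
    # Reversing the timestamp makes newer events produce lexicographically
    # smaller keys, so a prefix scan returns the latest events first.
    return f"{vehicle_id}#{MAX_TS - ts_millis:013d}"

keys = [row_key("veh42", t) for t in (1_000, 2_000, 3_000)]
assert sorted(keys)[0] == row_key("veh42", 3_000)  # newest sorts first
```

With that key design, loading the last 24 hours of events for one vehicle is a single short scan, which is what keeps the graph loads low-latency.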

Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?

A. Use multiple connectivity subsystems for redundancy. 

B. Require IPv6 for connectivity to ensure a secure address space. 

C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.

D. Use a functional programming language to isolate code execution cycles.

E. Treat every microservice call between modules on the vehicle as untrusted.

F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.

ANSWER3:

E and F

Notes/References3:

E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.

F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.

Reference 3: Trusted Platform Module

Question 4: For this question, refer to the Company A case study.

Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A. OpEx/CapEx allocation, LAN change management, capacity planning

B. Capacity planning, TCO calculations, OpEx/CapEx allocation 

C. Capacity planning, utilization measurement, data center expansion

D. Data center expansion, TCO calculations, utilization measurement

ANSWER4:

B

Notes/References4:

B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.

Reference: Cloud Economics


Question 5: For this question, refer to the Company A case study.

You analyzed Company A’s business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers’ wait time for parts. You decided to focus on reduction of the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?

A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

ANSWER5:

C

Notes/References5:

C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.

Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?

A. RPM/DEB

B. Containers 

C. Chef/Puppet

D. Virtual machines

ANSWER6:

B

Notes/References6:

B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.

Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?

A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device-specific table.

C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.

ANSWER7:

C

Notes/References7:

C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
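The device-side half of answer C, buffering while offline and publishing when connectivity returns, can be sketched like this. The BufferedPublisher class is a hypothetical stand-in for illustration, not the real Pub/Sub client library:

```python
from collections import deque

class BufferedPublisher:
    """Queue sensor messages locally while offline; flush to the shared
    topic (here, any publish callable) once connectivity returns."""
    def __init__(self, publish_fn):
        self._publish = publish_fn
        self._buffer = deque()

    def send(self, msg, online: bool):
        self._buffer.append(msg)
        if online:
            self.flush()

    def flush(self):
        while self._buffer:
            self._publish(self._buffer.popleft())  # FIFO order preserved

sent = []
pub = BufferedPublisher(sent.append)
pub.send({"room": "A", "motion": True}, online=False)   # buffered
pub.send({"room": "A", "motion": False}, online=True)   # flushes both
assert len(sent) == 2
```

This is why polling a shared topic tolerates inconsistent connectivity: nothing is lost while a device is offline, and Pub/Sub absorbs the burst when it reconnects.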

Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?

A. Load logs into BigQuery. 

B. Load logs into Cloud SQL.

C. Import logs into Stackdriver. 

D. Insert logs into Cloud Bigtable.

E. Upload log files into Cloud Storage.

ANSWER8:

A and E

Notes/References8:

A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.

E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.

References: BigQuery; Stackdriver; Bigtable; Storage class: Coldline
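The storage-cost side of this answer is easy to sanity-check. The per-GB-month prices below are illustrative assumptions, not current list prices:

```python
# 100 TB of archived logs, compared across two assumed storage classes.
ARCHIVE_GB = 100 * 1000  # decimal GB

price_per_gb_month = {"standard": 0.020, "coldline": 0.004}  # assumptions
monthly = {k: round(ARCHIVE_GB * v, 2) for k, v in price_per_gb_month.items()}

# Coldline fits a rarely read disaster-recovery archive at a fraction
# of the cost of Standard storage.
assert monthly["coldline"] < monthly["standard"]
print(monthly)
```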

Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. 

B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.

C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

ANSWER9:

C

Notes/References9:

C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.

References: Load balancing; Load balancing health checks
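The churn described in the question is just the managed instance group reacting to failed health checks. A toy model of that behavior (the threshold and semantics are simplified assumptions, not the real algorithm):

```python
def instance_healthy(check_results, unhealthy_threshold=3):
    """Return False once `unhealthy_threshold` consecutive checks fail,
    mimicking how a VM gets marked unhealthy and recreated.
    Simplified assumption, not the actual health-check logic."""
    streak = 0
    for ok in check_results:
        streak = 0 if ok else streak + 1
        if streak >= unhealthy_threshold:
            return False
    return True

# A firewall that silently drops health-check probes looks like an
# unbroken run of failures, so instances churn every minute:
assert not instance_healthy([False, False, False])
assert instance_healthy([True, False, True, False])
```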

Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

A. Add each tier to a different subnetwork.

B. Set up software-based firewalls on individual VMs. 

C. Add tags to each tier and set up routes to allow the desired traffic flow.

D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

ANSWER10:

D

Notes/References10:

D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.

References: Using VPC; Routes; Adding and removing network tags
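The intent of answer D can be modeled as a tiny allow-list of (source tag, target tag) pairs. This is a toy evaluator for illustration, not the real VPC firewall engine:

```python
# Allowed flows: web -> api -> db, and nothing else between tiers.
ALLOWED_FLOWS = {("web", "api"), ("api", "db")}

def is_allowed(source_tag: str, target_tag: str) -> bool:
    return (source_tag, target_tag) in ALLOWED_FLOWS

assert is_allowed("web", "api")
assert is_allowed("api", "db")
assert not is_allowed("web", "db")  # web must never reach the database tier
```

Because every instance in a tier carries the tier's tag as the group scales, the rules keep working without any per-instance changes.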

Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?

A. Use gsutil.

B. Use gcloud.

C. Use GCS REST API. 

D. Use Storage Transfer Service.

ANSWER11:

A

Notes/References11:

A is correct because gsutil gives you access to write data to Cloud Storage.

References: gsutil; gcloud SDK; Cloud Storage JSON API; Uploading objects; Storage Transfer Service
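For a sense of scale, a quick transfer-time estimate. The 1 Gbps figure is an assumed link speed; actual throughput depends on the network and on gsutil's parallelism settings:

```python
# How long does 5 TB take at an assumed 1 Gbps of usable bandwidth?
data_bits = 5 * 10**12 * 8   # 5 TB in bits (decimal units)
link_bps = 1 * 10**9         # assumed 1 Gbps link
hours = data_bits / link_bps / 3600
print(f"~{hours:.1f} hours at line rate")  # prints "~11.1 hours at line rate"
```

At that size, a well-parallelized gsutil upload finishes in well under a day, so no physical transfer appliance is needed.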


Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

A. Encrypt the message client-side using block-based encryption with a shared key.

B. Tag messages client-side with the originating user identifier and the destination user.

C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server. 

D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.

ANSWER12:

D

Notes/References12:

D is correct because PKI requires that both the server and the client have signed certificates, validating both the client and the server.

Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?

A. In the source code

B. In an environment variable 

C. In a key management system

D. In a config file that has restricted access through ACLs

ANSWER13:

C

Notes/References13:

C is correct because a key management system stores credentials centrally, encrypts them, and controls and audits access to them, keeping secrets out of source code, config files, and environment variables.

Question 14: For this question, refer to the Company B case study.

Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL

B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery 

C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

ANSWER14:

B

Notes/References14:

B is correct because:
Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
Cloud Pub/Sub can ingest the streaming data from the mobile users.
BigQuery can query more than 10 TB of historical data.

References: GCP quotas; Apache Beam windowing; Apache Beam triggers; BigQuery external data sources; Apache Hive on Cloud Dataproc
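The point about late-arriving data and Beam windows can be illustrated with a plain-Python stand-in for fixed windows (this is just the grouping idea, not the Beam API):

```python
def fixed_windows(events, window_secs):
    """Group (timestamp, value) events into fixed-size windows keyed
    by window start time."""
    windows = {}
    for ts, value in events:
        start = (ts // window_secs) * window_secs
        windows.setdefault(start, []).append(value)
    return windows

events = [(1, "a"), (4, "b"), (61, "c")]
assert fixed_windows(events, 60) == {0: ["a", "b"], 60: ["c"]}
```

In a real Beam pipeline, triggers decide when a window's results are emitted, which is how late-arriving mobile events still land in the correct window.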

Question 15: For this question, refer to the Company B case study.

Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load.

B. Use the existing infrastructure to test the GCP-based backend at scale.

C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.

D. Create a set of static environments in GCP to test different levels of load, for example high, medium, and low.

ANSWER15:

A

Notes/References15:

A is correct because simulating production load in GCP can scale in an economical way.

References: Load testing IoT using GCP and Locust; Distributed load testing using Kubernetes

Question 16: For this question, refer to the Company B case study.

Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:

  • Services are deployed redundantly across multiple regions in the US and Europe.
  • Only frontend services are exposed on the public internet.
  • They can reserve a single frontend IP for their fleet of services.
  • Deployment artifacts are immutable.

Which set of products should they use?

A. Cloud Storage, Cloud Dataflow, Compute Engine

B. Cloud Storage, App Engine, Cloud Load Balancing

C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing

D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager

ANSWER16:

C

Notes/References16:

C is correct because:
Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Company B wants to expose.
Container Registry is a single place for a team to manage Docker images for the services.

References: HTTPS load balancing overview; GCP global forwarding rules; Reserving a static external IP address; Best practices for operating containers; Container Registry; Dataflow; Calling HTTPS

Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

A. Org viewer, Project owner

B. Org viewer, Project viewer 

C. Org admin, Project browser

D. Project owner, Network admin

ANSWER17:

B

Notes/References17:

B is correct because:
Org viewer grants the security team permissions to view the organization’s display name.
Project viewer grants the security team permissions to see the resources within projects.

Reference: GCP Resource Manager – User Roles

Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?

A. Use persistent disks to store the state. Start and stop the VM as needed. 

B. Use the --auto-delete flag on all persistent disks before stopping the VM. 

C. Apply VM CPU utilization label and include it in the BigQuery billing export.

D. Use BigQuery billing export and labels to relate cost to groups. 

E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.

F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.

ANSWER18:

A and D

Notes/References18:

A is correct because persistent disks will not be deleted when an instance is stopped.

D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.

References: Instance life cycle; Setting disk auto-delete; Local data persistence; Exporting billing data to BigQuery; Creating and managing labels
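Once the billing export lands in BigQuery, grouping cost by label is a simple aggregation. The rows below only loosely mimic the export schema (field names are illustrative assumptions):

```python
from collections import defaultdict

rows = [
    {"cost": 12.50, "labels": {"team": "dev"}},
    {"cost": 3.25,  "labels": {"team": "dev"}},
    {"cost": 7.00,  "labels": {"team": "finance"}},
]

def cost_by_label(rows, key):
    """Sum exported costs per value of one label key."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["labels"].get(key, "unlabeled")] += row["cost"]
    return dict(totals)

assert cost_by_label(rows, "team") == {"dev": 15.75, "finance": 7.00}
```

This is the mechanism that lets finance see cost per team or cost center without touching the developers' VMs.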

Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?

A. Configure a new load balancer for the new version of the API.

B. Reconfigure old clients to use a new endpoint for the new API. 

C. Have the old API forward traffic to the new API based on the path.

D. Use separate backend services for each API path behind the load balancer.

ANSWER19:

D

Notes/References19:

D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.
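As a hedged sketch of what that looks like, the URL map below (importable with `gcloud compute url-maps import`) routes `/v2/*` to a second backend service while everything else stays on the old API; the project, host, and backend-service names are placeholders.

```yaml
# Sketch of an HTTP(S) load balancer URL map: one IP, one certificate,
# one DNS record, two backend services chosen by path. All names here
# are illustrative placeholders.
defaultService: projects/my-project/global/backendServices/api-v1-backend
hostRules:
- hosts: ["api.example.com"]
  pathMatcher: api-paths
pathMatchers:
- name: api-paths
  defaultService: projects/my-project/global/backendServices/api-v1-backend
  pathRules:
  - paths: ["/v2/*"]
    service: projects/my-project/global/backendServices/api-v2-backend
```

Because the SSL certificate and DNS record attach to the load balancer's frontend, not to the backends, both API versions keep serving from the same endpoint.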

References: load balancing httpsload balancing backendGCP lb global forwarding rules

Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?

A. Increase the virtual machine’s memory to 64 GB.

B. Create a new virtual machine running PostgreSQL. 

C. Dynamically resize the SSD persistent disk to 500 GB.

D. Migrate their performance metrics warehouse to BigQuery.

ANSWER20:

C

Notes/References20:

C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity increases its throughput and IOPS, which in turn improves the performance of MySQL.
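A back-of-the-envelope sketch of why the resize helps, using per-GB baseline rates for zonal SSD persistent disk (the 30 IOPS/GB and 0.48 MB/s-per-GB figures are assumptions drawn from published docs at one point in time; verify current numbers, and note that per-instance caps based on vCPU count also apply):

```python
# Rough model of zonal SSD persistent disk baseline performance scaling
# linearly with disk size. Rates are assumed, not authoritative, and
# instance-level caps (which depend on vCPU count) are ignored here.
READ_IOPS_PER_GB = 30
THROUGHPUT_MBPS_PER_GB = 0.48

def ssd_pd_baseline(size_gb):
    return {
        "read_iops": size_gb * READ_IOPS_PER_GB,
        "throughput_mbps": size_gb * THROUGHPUT_MBPS_PER_GB,
    }

print(ssd_pd_baseline(80))   # the current 80 GB disk
print(ssd_pd_baseline(500))  # the proposed 500 GB disk
```

Under these assumed rates, going from 80 GB to 500 GB roughly sextuples the disk's baseline IOPS and throughput without changing the machine type.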

References: GCP compute disks pd specs, GCP compute disks performance

Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Google’s Cloud CDN.

B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

C. Do nothing.

D. Use global BigTable storage.

E. Use a global Cloud Spanner instance.

F. Migrate the data to a new multi-regional GCS bucket.

G. Change the storage class to multi-regional.

ANSWER21:

A

Notes/References21:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. Since access here is uniform and relatively high, the objects will stay cached, so Cloud CDN is the right way to address the latency.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?

A. Python, Flask, App Engine Standard

B. Ruby, Nginx, GKE

C. HTML, CSS, Cloud Storage

D. Node.js, Express, Cloud Functions

E. Rust, Rocket, App Engine Flex

F. Perl, CGI, GCE

ANSWER22:

A

Notes/References22:

The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too. 
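To illustrate how little setup the winning option needs: an App Engine Standard deployment can be as small as one Python file exposing a Flask `app` object plus a config file like the one below. The runtime version is an example; check the currently supported runtimes.

```yaml
# Minimal app.yaml for a small Flask sign-up app on App Engine Standard.
# Assumes main.py defines a module-level Flask object named `app`;
# the runtime value is illustrative.
runtime: python39
handlers:
- url: /.*
  script: auto
```

With this in place, `gcloud app deploy` stands the app up, and App Engine Standard scales it to zero between barbeque-planning sessions, which is what keeps the cost low.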

Reference: Building a Python 3.7 App on App Engine

Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?

A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.

C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER23:

C

Notes/References23:

Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP-side of this in synch. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

References: Cloud Solutions Architecture Reference

Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?

A. Pause all Cloud Functions via the UI and unpause them when work starts back up.

B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.

C. Delete all Cloud Functions and recreate them when work starts back up.

D. Convert all Cloud Functions to run as App Engine Standard applications during the break.

E. None of these options is a good suggestion.

ANSWER24:

E

Notes/References24:

Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.

Question 25: You need a place to store images before they are updated by file-based render-farm software running on a cluster of machines. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Cloud Filestore

D. Persistent Disk

ANSWER25:

C

Notes/References25:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to visual images, thus eliminating CI/CD products like Container Registry. The term “file-based” software means that it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only handles multiple readers. However, Cloud Filestore is made to provide shared, file-based storage for a cluster of machines as described in the question. 

Reference: Cloud Filestore | Google Cloud

Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?

A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER26:

A

Notes/References26:

The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn’t also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 
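The "write the transform once" point above can be shown without Beam at all: the same conversion function serves both the batch import and the later stream. The DynamoDB-JSON row shape and the conversion below are simplified illustrations, not the real pipeline code.

```python
# Stand-in for a Dataflow/Beam transform: one conversion function is
# reused for the initial batch load and for the streamed changes that
# follow, which is the reason to prefer Dataflow over Dataproc here.
def to_spanner_row(dynamo_item):
    """Hypothetical DynamoDB-JSON -> flat row conversion.

    DynamoDB JSON wraps each value in a type tag, e.g. {"S": "a1"};
    this unwraps it into a plain column value.
    """
    return {k: list(v.values())[0] for k, v in dynamo_item.items()}

batch = [{"id": {"S": "a1"}, "qty": {"N": "3"}}]          # initial scan
stream = iter([{"id": {"S": "b2"}, "qty": {"N": "7"}}])   # later changes

batch_rows = [to_spanner_row(item) for item in batch]
stream_rows = [to_spanner_row(item) for item in stream]
```

In Beam the same `DoFn` would sit in both a bounded (batch) and an unbounded (streaming) pipeline, so the processing logic is written exactly once.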

Reference: Cloud Solutions Architecture Reference

Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?

A. Cloud Storage

B. Logstash

C. Cloud Monitoring

D. Cloud Logging

E. BigQuery

F. BigTable

ANSWER27:

D

Notes/References27:

Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on. 

Reference: Cloud Logging | Google Cloud

Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?

A. Compute Engine

B. Cloud Filestore

C. Cloud Storage

D. Persistent Disk

E. Container Registry

F. Cloud Source Repositories

G. Cloud Build

H. Nearline

ANSWER28:

C

Notes/References28:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t actually for storage, at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Nearline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served by AppEngine Standard, we would prefer to use the Standard storage class over the Nearline one–so there’s our answer. 

Reference: The App Engine Standard Environment Cloud Storage: Object Storage | Google Cloud Storage classes | Cloud Storage | Google Cloud

Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?

A. Use a global Cloud Spanner instance.

B. Change the storage class to multi-regional.

C. Use Google’s Cloud CDN.

D. Migrate the data to a new regional GCS bucket.

E. Do nothing.

F. Use global BigTable storage.

ANSWER29:

E

Notes/References29:

BigTable does not have any “global” mode, so that option is simply wrong. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. But migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests. Because the access per object is so low, Cloud CDN won’t really help. This then brings us back to the question. Now, it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency–and we are already using what would be the best fit for this situation: a multi-regional GCS bucket. 

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

B. Use Google’s Cloud CDN.

C. Use global BigTable storage.

D. Do nothing.

E. Migrate the data to a new multi-regional GCS bucket.

F. Use a global Cloud Spanner instance.

ANSWER30:

E

Notes/References30:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit–and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it does only work for data that comes from within GCP and only if the objects are being accessed frequently enough. So even if you would want to use Cloud CDN, you have to migrate the data into a GCS bucket first, so that’s a better option. 

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?

A. Six

B. One

C. Three

D. Four

E. Two

F. Nine

ANSWER31:

A

Notes/References31:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 
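The arithmetic above (two tiers × three regions = six subnets) can be made concrete with a dry-run script that prints one creation command per subnet. Region names, the VPC name, and the CIDR ranges are placeholders; review the output before executing anything.

```shell
# Dry-run sketch: emit one subnet-creation command per (region, tier)
# pair. Network name, regions, and CIDR ranges are illustrative only;
# this echoes commands rather than running them.
regions="us-east1 europe-west1 asia-east1"
tiers="frontend backend"
i=0
for region in $regions; do
  for tier in $tiers; do
    i=$((i + 1))
    echo "gcloud compute networks subnets create ${tier}-${region}" \
         "--network=prod-vpc --region=${region} --range=10.0.${i}.0/24"
  done
done
echo "total subnets: $i"
```

Running it prints six commands, matching the answer: each tier gets its own subnet in each of the three regions.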

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Persistent Disk

D. Nearline

E. Cloud Source Repositories

F. Cloud Build

G. Cloud Filestore

H. Compute Engine

ANSWER32:

F

Notes/References32:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus although they would likely be stored in the Container Registry, after being built, this question asks us where that building might happen, which is Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images–though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse. 

Reference: Google App Engine flexible environment docs | Google Cloud Container Registry | Google Cloud

Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?

A. Two

B. One

C. Three

D. Nine

E. Four

F. Six

ANSWER33:

D

Notes/References33:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?

A. Cloud Filestore

B. Coldline

C. Nearline

D. Persistent Disk

E. Cloud Source Repositories

F. Cloud Storage

G. Container Registry

ANSWER34:

B

Notes/References34:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. But even still, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than 100% chance of being audited), the right storage class is Coldline. 
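As a sketch of how this might be operationalized: create the bucket with the Coldline storage class (for example `gsutil mb -c coldline gs://my-audit-evidence`, where the bucket name is a placeholder), and optionally attach a lifecycle rule like the one below (applied with `gsutil lifecycle set`) to delete objects once the seven-year window has safely passed. The age value is an approximation in days.

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 2557}
    }
  ]
}
```

Coldline's low at-rest price fits data that will almost never be read, and the lifecycle rule keeps the bucket from accumulating cost after the retention obligation ends.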

Reference: Cloud Storage: Object Storage | Google Cloud Storage classes | Cloud Storage | Google Cloud

Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Filestore

C. Cloud Source Repositories

D. Persistent Disk

E. Cloud Storage

F. Cloud Build

G. Nearline

ANSWER35:

A

Notes/References35:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would likely have been stored in the Container Registry. 

Reference: Google App Engine flexible environment docs | Google Cloud Container Registry | Google Cloud

Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?

A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.

B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.

C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.

D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.

E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.

ANSWER36:

C

Notes/References36:

You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor’s own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you’ll need to grant securityAdmin, not networkViewer. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions Understanding roles | Cloud IAM Documentation | Google Cloud

Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?

A. One

B. Six

C. Four

D. Three

E. Nine

F. Two

ANSWER37:

F

Notes/References37:

A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 38: You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?

A. Binary Authorization

B. Cloud IAM

C. Security Key Enforcement

D. Cloud SCC

E. Cloud KMS

ANSWER38:

A

Notes/References38:

Cloud KMS is Google’s product for managing encryption keys. Security Key Enforcement is about making sure that people’s accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments. 
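A hedged sketch of what such a policy can look like is below (importable with `gcloud container binauthz policy import`): only images attested by a corporate sign-off attestor are admitted, and everything else is blocked and logged. The project and attestor names are placeholders.

```yaml
# Sketch of a Binary Authorization policy enforcing corporate image
# approvals. Project and attestor names are illustrative placeholders.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/my-project/attestors/corp-approval
```

The attestor itself is backed by a key that the approval process uses to sign image digests, which is how "approved" becomes machine-checkable at deploy time.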

Reference: Cloud Key Management Service | Google Cloud Cloud IAM | Google Cloud Cloud Data Loss Prevention | Google Cloud Security Command Center | Google Cloud Binary Authorization | Google Cloud Security Key Enforcement – 2FA

Question 39: For this question, refer to the Company B‘s case study. Which of the following are most likely to impact the operations of Company B’s game backend and analytics systems?

A. PCI

B. PII

C. SOX

D. GDPR

E. HIPAA

ANSWER39:

B and D

Notes/References39:

There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally-Identifying Information (PII) about the users who may reside in the European Union and therefore the EU’s General Data Protection Regulations (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”. 

Reference: Sarbanes–Oxley Act – Wikipedia Payment Card Industry Data Security Standard – Wikipedia Personal data – Wikipedia

Question 40: Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?

A. Their customers located in the EU may require them to delete their user data and provide evidence of such.

B. They will also need to pass a SOX audit.

C. They handle money-linked information.

D. Their system deals with medical information.

ANSWER40:

D

Notes/References40:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 41: Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?

A. They handle money-linked information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. Their system deals with medical information.

D. They will also need to pass a SOX audit.

ANSWER41:

A

Notes/References41:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). ISO is the International Standards Organization, and since they have so many completely different certifications, this does not tell you much. 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 43: Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?

A. Their system deals with medical information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. They will also need to pass a SOX audit.

D. They handle money-linked information.

ANSWER43:

B

Notes/References43:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 44: For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?

A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud

B. Lift and shift all servers at one time

C. Lift and shift one application at a time

D. Lift and shift one server at a time

E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system

F. Enact their disaster recovery plan and fail over

ANSWER44:

F

Notes/References44:

The proposed lift-and-shift options are all talking about different situations than Company C would find themselves in, at that time: they’d then have automation to build a complete prod system in the cloud, but they’d just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely seems like it wins. Setting up an additional load balancer and migrating slowly/carefully would take more time. 

Reference: Strangler pattern – Cloud Design Patterns | Microsoft Docs, StranglerFigApplication, Monolith to Microservices Using the Strangler Pattern – DZone Microservices, Understanding Lift and Shift and If It’s Right For You

Question 45: Which of the following commands is most likely to appear in an environment setup script?

A. gsutil mb -l asia gs://${project_id}-logs

B. gcloud compute instances create --zone=… --machine-type=n1-highmem-16 newvm

C. gcloud compute instances create --zone=… --machine-type=f1-micro newvm

D. gcloud compute ssh ${instance_id}

E. gsutil cp -r gs://${project_id}-setup ./install

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

ANSWER45:

A

Notes/References45:

The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You’re not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances. 
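To make the winning command's variable expansion concrete, the snippet below builds the per-environment bucket name and echoes the resulting command instead of running it. The project id value is a placeholder.

```shell
# Sketch of how an environment setup script derives its log bucket name.
# ${project_id} is substituted by the shell before gsutil ever runs,
# giving each environment (staging, prod, ...) a unique bucket.
project_id="acme-staging"   # placeholder; set per environment
bucket="gs://${project_id}-logs"
echo "gsutil mb -l asia ${bucket}"
```

Because bucket names are globally unique, embedding the project id in the name is a common way to avoid collisions across environments.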

Reference: mb – Make buckets | Cloud Storage | Google Cloud cp – Copy files and objects | Cloud Storage | Google Cloud gcloud compute instances | Cloud SDK Documentation | Google Cloud

Question 46: Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?

A. /dealerLocations/get

B. /dealerLocations

C. /dealerLocations/list

D. Source and destination

E. /getDealerLocations

ANSWER46:

B

Notes/References46:

It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it’s important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint. 

Reference: RESTful API Design — Step By Step Guide – By
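As a sketch of the noun-based design from option B (the host name and the id `42` are made up for illustration), the same `/dealerLocations` resource is reused and the HTTP verbs carry the action:

```shell
#!/bin/sh
# Hypothetical REST endpoint sketch: nouns in paths, verbs in HTTP methods.
endpoint() { echo "https://api.example.com/dealerLocations${1:+/$1}"; }

echo "GET  $(endpoint)"      # list all dealer locations
echo "POST $(endpoint)"      # create a new dealer location
echo "GET  $(endpoint 42)"   # fetch dealer location 42
echo "PUT  $(endpoint 42)"   # update dealer location 42
```

Note there is no `/get`, `/list`, or verb-prefixed path anywhere; the plural noun already implies a list.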

Question 47: Which of the following commands is most likely to appear in an instance shutdown script?

A. gsutil cp -r gs://${project_id}-setup ./install

B. gcloud compute instances create --zone --machine-type=n1-highmem-16 newvm

C. gcloud compute ssh ${instance_id}

D. gsutil mb -l asia gs://${project_id}-logs

E. gcloud compute instances delete ${instance_id}

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

G. gcloud compute instances create --zone --machine-type=f1-micro newvm

ANSWER47:

F

Notes/References47:

The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (sshed) to. Also, those would be a very unusual time to make a Cloud Storage bucket, since buckets are the overall and highly-scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown time may be a time when you’d want to copy some final logs from the instance into some project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously and not just at shutdown time, in case the instance shuts down unexpectedly and not in an orderly fashion that runs your shutdown script.)

Reference:  Running startup scripts | Compute Engine Documentation | Google Cloud Running shutdown scripts | Compute Engine Documentation | Google Cloud cp – Copy files and objects | Cloud Storage | Google Cloud gcloud compute instances | Cloud SDK Documentation | Google Cloud
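A minimal shutdown-script sketch along the lines of answer F might look like this; the project ID, instance name, and log path are assumptions, and the command is echoed as a dry run:

```shell
#!/bin/sh
# Hypothetical instance shutdown-script sketch, mirroring answer F above.
# Dry run: the command is echoed, not executed.
project_id="sample-project"   # in practice, read from the metadata server
instance_id="my-instance-1"   # in practice, from hostname or metadata

copy_final_logs() {
  # Last-chance copy of local logs into the project-wide logs bucket.
  echo "gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/"
}

copy_final_logs
```

As the notes say, you'd really want continuous log shipping too, since an unexpected shutdown may never run this script.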

Question 48: It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?

A. Escalate the decision to the business manager responsible for the SLA

B. Take the system offline

C. Revert the system to the state it was in on Friday morning

D. Investigate the cause of the issue

ANSWER48:

B

Notes/References48:

The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that started happening.) It might seem crazy, but you should as quickly as possible stop the system so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should then do that next, during the disaster response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn’t make any sense. And neither does it make sense to knee-jerk revert the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we’d better assume that “revert the system” refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO. 

Reference: Disaster recovery – Wikipedia
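The back-of-the-envelope timing from the question can be written out explicitly: with a 10-minute RPO and 5 minutes already elapsed since detection, roughly 5 minutes of data-loss budget remain, which is why stopping the bleeding comes first:

```shell
#!/bin/sh
# Simple arithmetic on the question's numbers (illustration only).
rpo_minutes=10
minutes_since_detection=5

remaining_rpo() {
  echo $(( rpo_minutes - minutes_since_detection ))
}

echo "Data-loss budget remaining: $(remaining_rpo) minutes (RTO clock: 120 minutes)"
```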

Question 49: Which of the following are not processes or practices that you would associate with DevOps?

A. Raven-test the candidate

B. Obfuscate the code

C. Only one of the other options is made up

D. Run the code in your cardinal environment

E. Do a canary deploy

ANSWER49:

A and D

Notes/References49:

Testing your understanding of development and operations in DevOps. In particular, you need to know that a canary deploy is a real thing and it can be very useful to identify problems with a new change you’re making before it is fully rolled out to and therefore impacts everyone. You should also understand that “obfuscating” code is a real part of a release process that seeks to protect an organization’s source code from theft (by making it unreadable by humans) and usually happens in combination with “minification” (which improves the speed of downloading and interpreting/running the code). On the other hand, “raven-testing” isn’t a thing, and neither is a “cardinal environment”. Those bird references are just homages to canary deployments.

Reference: Intro to deployment strategies: blue-green, canary, and more – DEV Community

Question 50: Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal?

A. Object lifecycle management

B. BigQuery Slots

C. Committed use discounts

D. Sustained use discounts

E. Managed instance group autoscaling

F. Pub/Sub topic centralization

ANSWER50:

B and C

Notes/References50:

Pub/Sub usage is based on how much data you send through it, not any sort of "topic centralization" (which isn't really a thing). Sustained use discounts can reduce costs, but that's not really something you structure your system around. Now, most organizations prefer to turn capital expenditures into operational expenses, but since this question instead asks you to prioritize CapEx, we need to consider the remaining options from the perspective of "spending" (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: you may still have some trouble classifying some cloud expenses as "capital" expenditures.) With that in mind, GCE's Committed Use Discounts do fit: you "buy" (reserve/prepay) some instances ahead of time and then don't have to pay (again) for them as you use them (or don't use them; you've already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity, and your queries use that instead of the on-demand capacity. That means you won't pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about CapEx.

Reference: CapEx vs OpEx: Capital Expenses and Operating Expenses Explained – BMC Blogs Sustained use discounts | Compute Engine Documentation | Google Cloud Committed use discounts | Compute Engine Documentation | Google Cloud Slots | BigQuery | Google Cloud Autoscaling groups of instances | Compute Engine Documentation Object Lifecycle Management | Cloud Storage | Google Cloud
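The flat-rate-versus-on-demand tradeoff is just breakeven arithmetic. The prices below are purely illustrative assumptions, not current GCP list prices:

```shell
#!/bin/sh
# Illustrative breakeven sketch: flat-rate (CapEx-like) pricing wins once
# monthly on-demand spend would exceed the committed amount.
on_demand_per_tb=5          # assumed $/TB scanned on-demand
flat_rate_monthly=10000     # assumed monthly cost of a slot commitment

breakeven_tb() {
  echo $(( flat_rate_monthly / on_demand_per_tb ))
}

echo "Flat-rate pricing breaks even at $(breakeven_tb) TB scanned per month"
```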

Question 51: In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns, on Monday?

A. The scrum master is the one who decides

B. The lead architect should get the final say

C. The product owner should get the final say

D. You should put it to a vote of key stakeholders

E. You should put it to a vote of all stakeholders

ANSWER51:

C

Notes/References51:

In Scrum, it is the Product Owner’s role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven’t ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context. 

Reference: Scrum Guide | Scrum Guides

Question 52: Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?

A. Split testing

B. Red-Black

C. A/B

D. Canary

E. Rolling

F. Blue-Green

G. Flex downtime

ANSWER52:

D and E

Notes/References52:

A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application. 

Reference: BlueGreenDeployment design patterns – What’s the difference between Red/Black deployment and Blue/Green Deployment? – Stack Overflow What is rolling deployment? – Definition from WhatIs.com A/B testing – Wikipedia
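The canary-then-rolling sequence from the notes above can be sketched as a toy plan; the server names and the two-hour observation window (taken from the question) are the only inputs, and nothing is actually deployed:

```shell
#!/bin/sh
# Toy canary-then-rolling sketch (server names are made up).
# One server gets v2 first; if it looks healthy, the rest follow one by one.
servers="web-1 web-2 web-3 web-4"
canary="web-1"

deploy_plan() {
  for s in $servers; do
    if [ "$s" = "$canary" ]; then
      echo "phase-1 canary: deploy v2 to $s, observe for ~2 hours"
    else
      echo "phase-2 rolling: update $s to v2 in sequence"
    fi
  done
}

deploy_plan
```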

Question 53: You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?

A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on

B. Hiring and retaining 10X developers is critical to project success

C. A key goal of a proper post-mortem is to identify what processes need to be changed

D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project

E. A key goal of a proper post-mortem is to determine who needs additional training

ANSWER53:

A and C

Notes/References53:

There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future. 

Reference: Google – Site Reliability Engineering The Cone of Uncertainty

Question 54: Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. The next month’s SLO will be reduced.

C. Your client(s) will have to pay you extra.

D. You will have to pay your client(s).

E. There is no impact on payments.

F. There is not enough information to make a determination.

ANSWER54:

D

Notes/References54:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90–so meeting the tougher measure would surpass a required SLA, but meeting a weaker measure would not give enough information to say. 

Reference: What’s the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube Percentile rank – Wikipedia
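To make p90/p95/p99 concrete, here is a nearest-rank percentile sketch; the latency samples are made up, and this is only one of several common percentile definitions:

```shell
#!/bin/sh
# Nearest-rank percentile: the p-th percentile of n sorted samples is the
# value at rank ceil(p*n/100).
percentile() {
  p="$1"; shift
  n=$#
  rank=$(( (p * n + 99) / 100 ))   # integer ceil(p*n/100)
  printf '%s\n' "$@" | sort -n | sed -n "${rank}p"
}

# p90 of ten samples is the 9th-smallest value:
percentile 90 120 180 150 90 210 160 140 130 170 200
```

With these samples, p90 is 200ms while p50 is only 150ms, which is why a p99 target is strictly tougher than a p95 or p90 one.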

Question 55: Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. There is no impact on payments.

C. There is not enough information to make a determination.

D. Your client(s) will have to pay you extra.

E. The next month’s SLO will be reduced.

F. You will have to pay your client(s).

ANSWER55:

B

Notes/References55:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90–so meeting the tougher measure would surpass a required SLA, but meeting a weaker measure would not give enough information to say. 

Reference: What’s the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube Percentile rank – Wikipedia

Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?

A. Configure the autoscaling thresholds to follow changing load

B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand

C. Run cron jobs on their application servers to scale down at night and up in the morning

D. Use Cloud Load Balancing to balance the traffic highs and lows

E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning

F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace

ANSWER56:

A

Notes/References56:

The case study notes, “Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.” Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn’t a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE–in practice, you’d just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group’s autoscaling to follow demand curves–both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over. 

Reference: Load Balancing | Google Cloud Google Cloud Platform Marketplace Solutions Cloud Functions | Google Cloud Cloud Scheduler | Google Cloud
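Pointing a managed instance group's autoscaler at demand might look like the following; the group name, region, and thresholds are assumptions, and the command is echoed as a dry run:

```shell
#!/bin/sh
# Hypothetical autoscaling-configuration sketch (dry run: echoed, not executed).
autoscale_cmd() {
  echo "gcloud compute instance-groups managed set-autoscaling web-mig --region=us-central1 --min-num-replicas=2 --max-num-replicas=20 --target-cpu-utilization=0.60 --cool-down-period=90"
}

autoscale_cmd
```

With a configuration like this, capacity follows the morning and weekend-evening peaks automatically instead of sitting 80% idle.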

Google Cloud Latest News, Questions and Answers online:

Cloud Run vs App Engine: In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint. All the scaling is automatically done for you by Google. Cloud Run depends on the fact that your application should be stateless. This is because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application this means that you should divide it up into a stateless API and a frontend app.

With Google’s App Engine you tell Google how your app should be run. App Engine will create and run a container from these instructions. Deploying with App Engine is super easy. You simply fill out an app.yaml file and Google handles everything for you.

With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers; App Engine is made for developers. Read more here…
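The deploy commands show the contrast above in miniature: App Engine takes an app.yaml, while Cloud Run takes a container image. The service, project, and image names below are made up, and the commands are echoed as a dry run:

```shell
#!/bin/sh
# Hypothetical deploy-command sketch (dry run: echoed, not executed).
app_engine_deploy() { echo "gcloud app deploy app.yaml"; }
cloud_run_deploy()  { echo "gcloud run deploy my-service --image=gcr.io/my-project/my-image --region=us-central1"; }

app_engine_deploy
cloud_run_deploy
```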

Cloud Run VS Cloud Functions: What to consider?

The best choice depends on what you want to optimize, your use-cases and your specific needs.

If your objective is the lowest latency, choose Cloud Run.

Indeed, Cloud Run always uses 1 vCPU (at least 2.4 GHz), and you can choose the memory size from 128 MB to 2 GB.

With Cloud Functions, if you want the best processing performance (2.4 GHz of CPU), you have to pay for 2 GB of memory. If your memory footprint is low, a Cloud Function with 2 GB of memory is overkill and needlessly expensive.

Cutting costs is not always the best strategy for customer satisfaction, but business reality may require it. Anyway, it highly depends on your use case.

Both Cloud Run and Cloud Functions round billed time up to the nearest 100ms. If you play with the GSheet, you'll see that Cloud Functions are cheaper when the processing time of one request stays below the first 100ms. Indeed, you can choose a slower vCPU for a Cloud Function, which increases the processing duration while still staying under 100ms if you tune it well. Fewer GHz-seconds are consumed, and thereby you pay less.
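The 100ms round-up is simple ceiling arithmetic, sketched here purely for illustration:

```shell
#!/bin/sh
# Billed duration rounds up to the next 100ms increment.
billed_ms() {
  echo $(( (($1 + 99) / 100) * 100 ))
}

billed_ms 40    # a 40ms request is billed as a full 100ms
billed_ms 250   # a 250ms request is billed as 300ms
```

This is why shaving a request from 250ms to 190ms saves a full billing increment, while shaving 40ms to 20ms saves nothing.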

The cost comparison between Cloud Functions and Cloud Run goes further than simply comparing a price list. Moreover, on your projects, you will often have to use the two solutions together to take advantage of their respective strengths and capabilities.

My first choice for development is Cloud Run. Its portability, its testability, and its openness regarding libraries, languages, and binaries give it too many advantages for, at worst, similar pricing, and often a real advantage in cost but also in performance, in particular for concurrent requests. Even if you need the same level of isolation as Cloud Functions (one instance per request), simply set the concurrency parameter to 1!

In addition, Cloud Run’s GA status applies to all containers, whatever the languages and binaries used. Read more here…

What does the launch of Google’s App Maker mean for professional app developers?

Should I go with AWS Elastic Beanstalk or Google App Engine (Managed VMs) for deploying my Parse-Server backend?

Why can a company such as Google sell me a cloud gaming service where I can “rent” GPU power over miles of internet, but when I seek out information on how to create a version of this everyone says that it is not possible or has too much latency?

AWS wins hearts of developers while Azure those of C-levels. Google is a black horse with special expertise like K8s and ML. The cloud world is evolving. Who is the winner in the next 5 years?

What is GCP (Google Cloud Platform) and how does it work?

What is the maximum amount of storage that you could have in your Google drive?

How do I deploy Spring Boot application (Web MVC) on Google App Engine(GAE) or HEROKU using Eclipse IDE?

What are some downsides of building softwares on top of Google App Engine?

Why is Google losing the cloud computing race?

How did new products like Google Drive, Microsoft SkyDrive, Yandex.Disk and other cloud storage solutions affect Dropbox’s growth and revenues?

What is the capacity of Google servers?

What is the Hybrid Cloud platform?

What is the difference between Docker and Google App engines?

How do I get to cloud storage?

How does Google App Engine compare to Heroku?

What is equivalent of Google Cloud BigTable in Microsoft Azure?

How big is the storage capacity of Google organization and who comes second?

It seems strange that Google Cloud Platform offer “everything” except cloud search/inverted index?

Where are the files on Google Drive stored?

Is Google app engine similar to lambda?

Was Diane Greene a failure as the CEO of Google Cloud considering her replacement’s strategy and philosophy is the polar opposite?

How is Google Cloud for heavy real-time traffic? Is there any optimization needed for handling more than 100k RT?

When it comes to iCloud, why does Apple rely on Google Cloud instead of using their own data centers?

Google Cloud Storage: What bucket class gives the best performance? Multiregional buckets perform significantly better for cross-the-ocean fetches; however, the details are a bit more nuanced than that. The performance is dominated by the latency of the physical distance between the client and the Cloud Storage bucket.

  • If caching is on, and your access volume is high enough to take advantage of caching, there’s not a huge difference between the two offerings (that I can see with the tests). This shows off the power of Google’s Awesome CDN environment.
  • If caching is off, or the access volume is low enough that you can’t take advantage of caching, then the performance overhead is dominated directly by physics. You should be trying to get the assets as close to the clients as possible, while also considering cost, and the types of redundancy and consistency you’ll need for your data needs.

Conclusion:

GCP, or the Google Cloud Platform, is a cloud-computing platform that provides users with access to a variety of GCP services. The GCP Professional Cloud Architect exam is designed to test a candidate’s ability to design, implement, and manage GCP solutions. The GCP questions cover a wide range of topics, from basic GCP concepts to advanced GCP features. To become a GCP Certified Professional, you must pass the exam. Below are some basic GCP questions to get yourself familiarized with the Google Cloud Platform:

1) What is GCP?
2) What are the benefits of using GCP?
3) How can GCP help my business?
4) What are some of the features of GCP?
5) How is GCP different from other clouds?
6) Why should I use GCP?
7) What are some of GCP’s strengths?
8) How is GCP priced?
9) Is GCP easy to use?
10) Can I use GCP for my personal projects?
11) What services does GCP offer?
12) What can I do with GCP?
13) What languages does GCP support?
14) What platforms does GCP support?
15) Does GCP support hybrid deployments?
16) Does GCP support on-premises deployments?

17) Is there a free tier on GCP?

18) How do I get started with using GCP?

Top high-paying certifications:

  1. Google Certified Professional Cloud Architect – $139,529
  2. PMP® – Project Management Professional – $135,798
  3. Certified ScrumMaster® – $135,441
  4. AWS Certified Solutions Architect – Associate – $132,840
  5. AWS Certified Developer – Associate – $130,369
  6. Microsoft Certified Solutions Expert (MCSE): Server Infrastructure – $121,288
  7. ITIL® Foundation – $120,566
  8. CISM – Certified Information Security Manager – $118,412
  9. CRISC – Certified in Risk and Information Systems Control – $117,395
  10. CISSP – Certified Information Systems Security Professional – $116,900
  11. CEH – Certified Ethical Hacker – $116,306
  12. Citrix Certified Associate – Virtualization (CCA-V) – $113,442
  13. CompTIA Security+ – $110,321
  14. CompTIA Network+ – $107,143
  15. Cisco Certified Networking Professional (CCNP) Routing and Switching – $106,957

According to the 2020 Global Knowledge report, the top-paying cloud certifications for the year are (drumroll, please):

1- Google Certified Professional Cloud Architect — $175,761

2- AWS Certified Solutions Architect – Associate — $149,446

3- AWS Certified Cloud Practitioner — $131,465

4- Microsoft Certified: Azure Fundamentals — $126,653

5- Microsoft Certified: Azure Administrator Associate — $125,993

Sources:

1- Google Cloud

2- Linux Academy

3- WhizLabs

4- GCP Space on Quora

5- Udemy

6- Acloud Guru

7- Questions and answers sent to us by good people all over the world.

First of all, I would like to start with the fact that I already have around 1 year of experience with GCP in depth, where I was working on GKE, IAM, storage and so on. I also obtained GCP Associate Cloud Engineer certification back in June as well, which helps with the preparation.

I started with Dan Sullivan’s Udemy course for Professional Cloud Architect and did a refresher on the topics I was not familiar with, such as Bigtable, BigQuery, Dataflow and all that. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture.

In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.

As for practice exam, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak at and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear for the exam.

I used TutorialsDojo (Jon Bonso) for preparation for Associate Cloud Engineer before and I can attest that Whizlabs is not that good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.

One thing to note is that there wasn’t even a single question similar to the ones from the Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non-case-study questions. Many questions focused on App Engine, data analytics, and networking. There were some Kubernetes questions based on Anthos and cluster networking. I got a tough question regarding storage as well.

I initially thought I would fail, but I pushed on and started tackling the multiple-choices based on process of elimination using the keywords in the questions. 50 questions in 2 hours is a tough one, especially due to the lengthy questions and multiple choices. I do not know how this compares to AWS Solutions Architect Professional exam in toughness. But some people do say GCP professional is tougher than AWS.

All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.

What are the corresponding Azure and Google Cloud services for each of the AWS services?


What are unique distinctions and similarities between AWS, Azure and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google Service? AWS Services vs Azure vs Google Services? Side by side comparison between AWS, Google Cloud and Azure Service?


1

Category: Marketplace
Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions.
References:
[AWS]:AWS Marketplace
[Azure]:Azure Marketplace
[Google]:Google Cloud Marketplace
Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace
Differences: They are all digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.


3



Category: AI and machine learning
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
References:
[AWS]:Alexa Skills Kit (enables a developer to build skills, also called conversational applications, on the Amazon Alexa artificial intelligence assistant.)
[Azure]:Microsoft Bot Framework (building enterprise-grade conversational AI experiences.)
[Google]:Google Assistant Actions (a developer platform that lets you create software to extend the functionality of the Google Assistant, Google’s virtual personal assistant)

Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant
Differences: One major advantage Google gets over Alexa is that Google Assistant is available to almost all Android devices.

4


Category: AI and machine learning
Description:API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
References:
[AWS]:Amazon Lex (building conversational interfaces into any application using voice and text.)
[Azure]:Azure Speech Services(unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription)
[Google]:Google Api.ai, AI Hub (hosted repo of plug-and-play AI components), AI building blocks (for developers to add sight, language, conversation, and structured data to their applications), AI Platform (code-based data science development environment that lets ML developers and data scientists quickly take projects from ideation to deployment), DialogFlow (Google-owned developer of human–computer interaction technologies based on natural language conversations), TensorFlow (open-source machine learning platform)

Tags: #AmazonLex, #CogintiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow
Differences: Api.ai provides a platform that is easy to learn and comprehensive for developing conversational actions. It is a good example of a simple approach to solving the complex human-to-machine communication problem using natural language processing combined with machine learning. Api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, on the other hand, this has to be handled in the session. Also, Api.ai can be used for both voice- and text-based conversations (Assistant actions can be easily created using Api.ai).

5

Category: AI and machine learning
Description:Computer Vision: Extract information from images to categorize and process visual data.
References:
[AWS]:Amazon Rekognition (based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use.)
[Azure]:Cognitive Services(bring AI within reach of every developer—without requiring machine-learning expertise.)
[Google]:Google Vision (offers powerful pre-trained machine learning models through REST and RPC APIs.)
Tags: #AmazonRekognition, #GoogleVision, #CognitiveServices
Differences: For now, only Google Cloud Vision supports batch processing. Videos are not natively supported by Google Cloud Vision or Amazon Rekognition. The Object Detection functionality of Google Cloud Vision and Amazon Rekognition is almost identical, both syntactically and semantically. Overall, Google Cloud Vision and Amazon Rekognition offer a broad spectrum of solutions, some of which are comparable in terms of functional details, quality, performance, and costs.

6

Category: Big data and analytics: Data warehouse
Description:Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
References:
[AWS]:AWS Redshift (scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake.), Amazon Redshift Data Lake Export (save query results in an open format), Amazon Redshift Federated Query (run queries on live transactional data), Amazon Redshift RA3 (optimize costs with up to 3x better performance), AQUA: Advanced Query Accelerator for Amazon Redshift (power analytics with a new hardware-accelerated cache), UltraWarm for Amazon Elasticsearch Service (store logs at ~1/10th the cost of existing storage tiers)
[Azure]:Azure Synapse formerly SQL Data Warehouse (limitless analytics service that brings together enterprise data warehousing and Big Data analytics.)
[Google]:BigQuery (RESTful web service that enables interactive analysis of massive datasets working in conjunction with Google Storage. )
Tags:#AWSRedshift, #GoogleBigQuery, #AzureSynapseAnalytics
Differences: Loading data, managing resources (and hence pricing), and ecosystem. Ecosystem is where Redshift is clearly ahead of BigQuery. While BigQuery is an affordable, performant alternative to Redshift, it is still considered the more up-and-coming of the two.

7

Category: Big data and analytics: Data warehouse
Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, analytics, and visualization.
References:
[AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch
[Azure]:Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning
[Google]:Cloud DataProc, Machine Learning, Cloud Datalab
Tags:#EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataProc, #MachineLearning, #CloudDatalab
Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning, and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open-source big data solutions alongside new serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services; with Dataproc and Dataflow, Google has a strong core to its proposition. TensorFlow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.

8

Category: Virtual servers
Description:Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Batch: Run large-scale parallel and high-performance computing applications efficiently in the cloud.
References:
[AWS]:Elastic Compute Cloud (EC2), Amazon Braket (explore and experiment with quantum computing), Amazon EC2 M6g Instances (achieve up to 40% better price performance), Amazon EC2 Inf1 Instances (deliver cost-effective ML inference), AWS Graviton2 Processors (optimize price performance for cloud workloads), AWS Batch, AWS Auto Scaling, VMware Cloud on AWS, AWS Local Zones (run low-latency applications at the edge), AWS Wavelength (deliver ultra-low-latency applications for 5G devices), AWS Nitro Enclaves (further protect highly sensitive data), AWS Outposts (run AWS infrastructure and services on-premises)
[Azure]:Azure Virtual Machines, Azure Batch, Virtual Machine Scale Sets, Azure VMware by CloudSimple
[Google]:Compute Engine, Preemptible Virtual Machines, Managed instance groups (MIGs), Google Cloud VMware Solution by CloudSimple
Tags: #AWSEC2, #AWSBatch, #AWSAutoscaling, #AzureVirtualMachine, #AzureBatch, #VirtualMachineScaleSets, #AzureVMWare, #ComputeEngine, #MIGS, #VMWare
Differences: There is very little to choose between the three providers when it comes to virtual servers. Amazon has some impressive high-end kit; on the face of it this sounds like it would make AWS a clear winner. However, if your only option is to choose the biggest box available, you will need very deep pockets, and your money may be better spent re-architecting your apps for horizontal scale. Azure remains very strong in the PaaS space and now has an IaaS that can genuinely compete with AWS. Google offers a simple and very capable set of services that are easy to understand; however, with availability in only 5 regions it does not have the coverage of the other players.

9


Category: Containers and container orchestrators
Description: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
References:
[AWS]:EC2 Container Service (ECS), Fargate (run containers without managing servers or clusters), EC2 Container Registry (managed AWS Docker registry service that is secure, scalable, and reliable), Elastic Container Service for Kubernetes (EKS: runs the Kubernetes management infrastructure across multiple AWS Availability Zones), App Mesh (application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure)
[Azure]:Azure Container Instances, Azure Container Registry, Azure Kubernetes Service (AKS), Service Fabric Mesh
[Google]:Google Container Engine, Container Registry, Kubernetes Engine
Tags:#ECS, #Fargate, #EKS, #AppMesh, #ContainerEngine, #ContainerRegistry, #AKS
Differences: Google Container Engine, AWS container services, and Azure Container Instances can all be used to run Docker containers. Google offers a simple and very capable set of services that are easy to understand; however, with availability in only 5 regions it does not have the coverage of the other players.

10

Category: Serverless
Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
References:
[AWS]:AWS Lambda
[Azure]:Azure Functions
[Google]:Google Cloud Functions
Tags:#AWSLambda, #AzureFunctions, #GoogleCloudFunctions
Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces, handle any input/output transforms, and an AWS Lambda Node.js function is indistinguishable from an Azure Node.js Function. AWS Lambda additionally supports Java, while Azure Functions additionally supports F# and PHP. AWS Lambda runs on Amazon Linux machine images, while Azure Functions run in a Windows environment. AWS Lambda uses this machine architecture to reduce the scope of containerization, letting you spin up and tear down individual pieces of functionality in your application at will.
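The shared programming model is easy to see in code. Below is a minimal sketch of an AWS Lambda-style Python handler; the event shape (an API Gateway-style query string) is illustrative, and the same logic ports to Azure Functions or Google Cloud Functions by changing only the entry-point signature.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: return a greeting for an HTTP event.

    `event` and `context` follow the AWS Lambda Python signature; the
    event fields read here are illustrative.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation -- no cloud resources needed:
resp = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
```

Because the handler is a plain function, it can be unit-tested locally before deployment, which is part of what makes the code portable between providers.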

11

Category: Relational databases
Description: Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
References:
[AWS]:AWS RDS (managed relational database service supporting engines such as MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server), Aurora (MySQL- and PostgreSQL-compatible relational database built for the cloud)
[Azure]:SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
[Google]:Cloud SQL
Tags: #AWSRDS, #AWSAurora, #AzureSQLDatabase, #AzureDatabaseforMySQL, #GoogleCloudSQL
Differences: All three providers boast impressive relational database offerings. RDS supports an impressive range of managed relational stores, while Azure SQL Database is probably the most advanced managed relational database available today. Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.

12

Category: NoSQL, Document Databases
Description:A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
References:
[AWS]:DynamoDB (key-value and document database that delivers single-digit millisecond performance at any scale), SimpleDB (a simple web services interface to create and store multiple data sets, query your data easily, and return the results), Managed Cassandra Service (MCS)
[Azure]:Table Storage, DocumentDB, Azure Cosmos DB
[Google]:Cloud Datastore (handles sharding and replication in order to provide you with a highly available and consistent database. )
Tags:#AWSDynamoDB, #SimpleDB, #TableStorage, #DocumentDB, #AzureCosmosDB, #GoogleCloudDataStore
Differences: DynamoDB and Cloud Datastore are based on the document store database model and are therefore similar in nature to the open-source solutions MongoDB and CouchDB; in other words, each database is fundamentally a key-value store. With more workloads moving to the cloud, the need for NoSQL databases will become ever more important, and again all providers have a good range of options to satisfy most performance/cost requirements. Of all the NoSQL products on offer it's hard not to be impressed by DocumentDB; Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
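The "fundamentally a key-value store" point is visible in DynamoDB's wire format, where every attribute is wrapped in a type descriptor. Here is a hand-rolled sketch of that serialization for illustration (boto3 ships an equivalent `TypeSerializer`):

```python
def to_dynamodb(value):
    """Wrap a Python value in DynamoDB's attribute-value type descriptors.

    Covers the common descriptors (S, N, BOOL, NULL, L, M) only.
    """
    if value is None:
        return {"NULL": True}
    if isinstance(value, bool):            # must check bool before int
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}           # numbers travel as strings
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, list):
        return {"L": [to_dynamodb(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_dynamodb(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value)!r}")

# An "item" is just a map of typed key-value pairs:
item = to_dynamodb({"pk": "user#1", "age": 30, "tags": ["a", "b"]})
```

The same key-value shape (a document keyed by a primary key) is what Cloud Datastore entities and Cosmos DB documents boil down to, which is why workloads port between them relatively easily.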

13

Category:Caching
Description:An in-memory, distributed caching service that provides a high-performance store, typically used to offload non-transactional work from a database.
References:
[AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.)
[Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.)
[Google]:Memcache (In-memory key-value store, originally intended for caching)
Tags:#Redis, #Memcached
Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by allowing you to retrieve information from fast, in-memory caches instead of relying on slower disk-based databases. ElastiCache supports Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcached does not. This can be very helpful if you cache lots of data, since you avoid the slowness of a fully cold cache. Redis also offers several extra data structures that Memcached doesn't: Lists, Sets, Sorted Sets, etc.; Memcached only has key/value pairs. Memcached is multi-threaded, while Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded; at high scale, you can squeeze more connections and transactions out of Memcached. Memcached also tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.

14


Category: Security, identity, and access
Description:Authentication and authorization: Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
References:
[AWS]:Identity and Access Management (IAM), AWS Organizations, Multi-Factor Authentication, AWS Directory Service, Cognito (provides solutions to control access to backend resources from your app), Amazon Detective (investigate potential security issues), AWS IAM Access Analyzer (easily analyze resource accessibility)
[Azure]:Azure Active Directory, Azure Subscription Management + Azure RBAC, Multi-Factor Authentication, Azure Active Directory Domain Services, Azure Active Directory B2C, Azure Policy, Management Groups
[Google]:Cloud Identity, Identity Platform, Cloud IAM, Policy Intelligence, Cloud Resource Manager, Cloud Identity-Aware Proxy, Context-aware access, Managed Service for Microsoft Active Directory, Security key enforcement, Titan Security Key
Tags: #IAM, #AWSIAM, #AzureIAM, #GoogleIAM, #Multi-factorAuthentication
Differences: One unique thing about AWS IAM is that accounts created in the organization (not through federation) can only be used within that organization. This contrasts with Google and Microsoft. On the good side, every organization is self-contained. On the bad side, users can end up with multiple sets of credentials they need to manage to access different organizations. The second unique element is that every user can have a non-interactive account by creating and using access keys, an interactive account by enabling console access, or both. (Side note: To use the CLI, you need to have access keys generated.)
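Across all three providers, authorization ultimately comes down to evaluating policy documents against a principal's request. As a concrete reference point, a minimal AWS IAM identity policy granting read-only access to one S3 bucket looks like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Azure RBAC role definitions and Google Cloud IAM bindings express the same allow/deny idea with different schemas, which is why permissions rarely translate one-to-one between clouds.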

15

Category: Object Storage and Content delivery
Description:Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export (used to move large amounts of data into and out of the Amazon Web Services public cloud using portable storage devices for transport), Snowball (petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud), CloudFront (massively scaled and globally distributed content delivery network (CDN)), Elastic Block Store (EBS: high-performance block storage service), Elastic File System (shared, elastic file storage system that grows and shrinks as you add and remove files), S3 Infrequent Access (IA: for data that is accessed less frequently, but requires rapid access when needed), S3 Glacier (long-term storage of data that is infrequently accessed and for which retrieval latency times of 3 to 5 hours are acceptable), AWS Backup (makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway), Storage Gateway (hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage), AWS Import/Export Disk (accelerates moving large amounts of data into and out of AWS using portable storage devices for transport)
[Azure]:Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, StorSimple, Import/Export
[Google]:Cloud Storage, GlusterFS, Cloud CDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception perhaps is for hybrid scenarios; in this case Azure and AWS clearly win. AWS's and Google's support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft's fully managed Data Lake Store offers an additional option that will appeal to organisations looking to run large-scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of it, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as etags and the if-match family of headers, you should be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.
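The etag/if-match pattern mentioned above amounts to echoing an object's ETag back on a conditional write, so the store rejects the update if someone else changed the object in between. A stdlib-only sketch of building such a request (URL and ETag are placeholders, and the request is constructed but never sent):

```python
import urllib.request

def conditional_put(url, body, etag):
    """Build a PUT that should only succeed if the object still has `etag`.

    Optimistic concurrency via the If-Match header (RFC 7232): a
    compliant server answers 412 Precondition Failed if the object
    changed since we read it.
    """
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("If-Match", etag)
    return req

req = conditional_put("https://example.com/obj", b"new-bytes", '"abc123"')
```

Azure Blob Storage and Google Cloud Storage honor this style of precondition header (Azure additionally offers blob leases for locking), which is the support gap the paragraph above is pointing at.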

16

Category:Internet of things (IoT)
Description:A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]:AWS IoT (Internet of Things), AWS Greengrass, Kinesis Firehose, Kinesis Streams, AWS IoT Things Graph
[Azure]:Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]:Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags:#IoT, #InternetOfThings, #Firebase
Differences:AWS and Azure have a more coherent message with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.

17

Category:Web Applications
Description:Managed hosting platform providing easy-to-use services for deploying and scaling web applications and services. API Gateway is a turnkey solution for publishing APIs to external and internal consumers. CloudFront is a global content delivery network that delivers audio, video, applications, images, and other files.
References:
[AWS]:Elastic Beanstalk (for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS), AWS Wavelength (for delivering ultra-low latency applications for 5G), API Gateway (makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale), CloudFront (web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users; CloudFront delivers your content through a worldwide network of data centers called edge locations), Global Accelerator (improves the availability and performance of your applications for local or global users; it provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances), AWS AppSync (simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources: a GraphQL service with real-time data synchronization and offline programming features)
[Azure]:App Service, API Management, Azure Content Delivery Network
[Google]:App Engine, Cloud API, Cloud Endpoints, Apigee
Tags: #AWSElasticBeanstalk, #AzureAppService, #GoogleAppEngine, #CloudEndpoints, #CloudFront, #Apigee
Differences: With AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application; if they decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using Elastic Beanstalk's management capabilities. AWS Elastic Beanstalk integrates with more apps than Google App Engine (Datadog, Jenkins, Docker, Slack, GitHub, Eclipse, etc.), while Google App Engine has more features than AWS Elastic Beanstalk (App Identity, Java runtime, Datastore, Blobstore, Images, Go runtime, etc.). Amazon pitches API Gateway as a way to "create, publish, maintain, monitor, and secure APIs at any scale"; it handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Google Cloud Endpoints, on the other hand, is pitched as a way to "develop, deploy and manage APIs on any Google Cloud backend": an NGINX-based proxy and distributed architecture provide strong performance and scalability, and, using an OpenAPI Specification or one of Google's API frameworks, Cloud Endpoints gives you the tools you need for every phase of API development, with insight from Google Cloud Monitoring, Cloud Trace, and Google Cloud Logging.

18

Category:Encryption
Description:Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
References:
[AWS]:Key Management Service (AWS KMS), CloudHSM
[Azure]:Key Vault
[Google]:Encryption By Default at Rest, Cloud KMS
Tags:#AWSKMS, #Encryption, #CloudHSM, #EncryptionAtRest, #CloudKMS
Differences: AWS KMS is an ideal solution for organizations that want to manage encryption keys in conjunction with other AWS services. In contrast to AWS CloudHSM, AWS KMS provides a complete set of tools to manage encryption keys, develop applications, and integrate with other AWS services. Google and Azure offer 4096-bit RSA; AWS and Google offer 256-bit AES. With AWS, you can bring your own key.
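All three services (KMS, Key Vault, Cloud KMS) are commonly used for envelope encryption: a random data key encrypts the payload, and the master key, which never leaves the key service, only ever encrypts the small data key. A toy sketch of the pattern follows, with XOR standing in for real AES; this is for illustrating the key flow only, never for actual cryptography:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR against a repeating key. Stands in for AES only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(32)     # lives inside KMS/Key Vault

# Encrypt: generate a one-off data key, wrap it under the master key.
data_key = secrets.token_bytes(32)
ciphertext = xor(b"account ledger", data_key)
wrapped_key = xor(data_key, master_key)  # stored alongside the ciphertext
del data_key                             # plaintext data key is discarded

# Decrypt: unwrap the data key with the master key, then the payload.
recovered_key = xor(wrapped_key, master_key)
plaintext = xor(ciphertext, recovered_key)
```

The point of the pattern is that the bulk data never travels to the key service; only the tiny wrapped key does, which is why KMS-style APIs price per key operation rather than per byte.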

19

Category:Internet of things (IoT)
Description:A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]:AWS IoT, AWS Greengrass, Kinesis Firehose (captures and loads streaming data in storage and business intelligence (BI) tools to enable near real-time analytics in the AWS cloud), Kinesis Streams (for rapid and continuous data intake and aggregation), AWS IoT Things Graph (makes it easy to visually connect different devices and web services to build IoT applications)
[Azure]:Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]:Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags:#IoT, #InternetOfThings, #Firebase
Differences:AWS and Azure have a more coherent message with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.

20

Category:Object Storage and Content delivery
Description: Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export Snowball, CloudFront, Elastic Block Store (EBS), Elastic File System, S3 Infrequent Access (IA), S3 Glacier, AWS Backup, Storage Gateway, AWS Import/Export Disk, Amazon S3 Access Points(Easily manage access for shared data)
[Azure]:Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, Azure Backup, StorSimple, Import/Export
[Google]:Cloud Storage, GlusterFS, CloudCDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception perhaps is for hybrid scenarios; in this case Azure and AWS clearly win. AWS's and Google's support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft's fully managed Data Lake Store offers an additional option that will appeal to organisations looking to run large-scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of it, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as etags and the if-match family of headers, you should be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.

21

Category: Backend process logic
Description: Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
References:
[AWS]:AWS Step Functions (lets you build visual workflows that enable fast translation of business requirements into technical requirements. You can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code.)
[Azure]:Logic Apps (cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.)
[Google]:Dataflow (fully managed service for executing Apache Beam pipelines within the Google Cloud Platform ecosystem.)
Tags:#AWSStepFunctions, #LogicApps, #Dataflow
Differences: AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. AWS Step Functions belongs to the "Cloud Task Management" category of the tech stack, while Google Cloud Dataflow can be primarily classified under "Real-time Data Processing". According to the StackShare community, Google Cloud Dataflow has broader approval, being mentioned in 32 company stacks and 8 developer stacks, compared to AWS Step Functions, which is listed in 19 company stacks and 7 developer stacks.
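For orientation, Step Functions workflows are declared in the JSON-based Amazon States Language; a minimal two-step state machine (the Lambda ARNs below are placeholders) looks like this:

```json
{
  "Comment": "Minimal two-step workflow (illustrative).",
  "StartAt": "Validate",
  "States": {
    "Validate": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
      "Next": "Process"
    },
    "Process": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
      "End": true
    }
  }
}
```

Each state names the component it invokes, so swapping or reordering components is a matter of editing this definition rather than changing application code.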

22

Category: Enterprise application services
Description:Fully integrated Cloud service providing communications, email, document management in the cloud and available on a wide variety of devices.
References:
[AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index)
[Azure]:Office 365
[Google]:G Suite
Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite
Differences: G Suite document-processing applications like Google Docs remain behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, whereas Office 365 can feel clunky.

23

Category: Networking
Description: Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
References:
[AWS]:Virtual Private Cloud (VPC), Cloud virtual networking, Subnets, Elastic Network Interface (ENI), Route Tables, Network ACL, Security Groups, Internet Gateway, NAT Gateway, AWS VPN Gateway, AWS Route 53, AWS Direct Connect, AWS Network Load Balancer, VPN CloudHub, AWS Local Zones, AWS Transit Gateway network manager (centrally manage global networks)
[Azure]:Virtual Network (provides services for building networks within Azure), Subnets (network resources can be grouped by subnet for organisation and security), Network Interface (each virtual machine can be assigned one or more network interfaces (NICs)), Network Security Groups (NSG: contains a set of prioritised ACL rules that explicitly grant or deny access), Azure VPN Gateway (allows connectivity to on-premise networks), Azure DNS, Traffic Manager (DNS-based traffic routing solution), ExpressRoute (provides connections up to 10 Gbps to Azure services over a dedicated fibre connection), Azure Load Balancer, Network Peering, Azure Stack (allows organisations to use Azure services running in private data centers), Azure Log Analytics
[Google]:Cloud Virtual Network, Subnets, Network Interface, Protocol forwarding, Cloud VPN, Cloud DNS, Virtual Private Network, Cloud Interconnect, CDN Interconnect, Stackdriver, Google Cloud Load Balancing
Tags:#VPC, #Subnets, #ACL, #VPNGateway, #CloudVPN, #NetworkInterface, #ENI, #RouteTables, #NSG, #NetworkACL, #InternetGateway, #NatGateway, #ExpressRoute, #CloudInterConnect, #StackDriver
Differences: Subnets group related resources; however, unlike AWS and Azure, Google does not constrain the private IP address ranges of subnets to the address space of the parent network. Like Azure, Google has a built-in internet gateway that can be specified from routing rules.
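The AWS/Azure constraint that subnets must carve up the parent network's address space is easy to sketch with the standard library; here a 10.0.0.0/16 VPC-style range (an illustrative choice) is split into /24 subnets:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # parent network (VPC/VNet)

# Carve the /16 into /24 subnets. AWS and Azure require every subnet
# to fall inside the parent range like this; Google relaxes the rule.
subnets = list(vpc.subnets(new_prefix=24))

assert all(s.subnet_of(vpc) for s in subnets)
```

A /16 yields 256 non-overlapping /24 subnets (10.0.0.0/24, 10.0.1.0/24, ...), which is why route tables and NSG/security-group rules can be scoped cleanly per subnet.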

24

Category: Management
Description: A unified management console that simplifies building, deploying, and operating your cloud resources.
References:
[AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (Identify optimal AWS Compute resources)
[Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health
[Google]:Google Cloud Console, Cost Management, Security Command Center, Stackdriver
Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter
Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the Hybrid Cloud space allowing companies to integrate onsite servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high compute offerings like Big Data, analytics and machine learning. It also offers considerable scale and load balancing – Google knows data centers and fast response time.

25

Category: DevOps and application monitoring
Description: Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments; Cloud services for collaborating on code development; Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services; Fully managed build service that supports continuous integration and deployment.
References:
[AWS]:AWS CodePipeline (orchestrates workflow for continuous integration, continuous delivery, and continuous deployment), AWS CloudWatch (monitor your AWS resources and the applications you run on AWS in real time), AWS X-Ray (application performance management service that enables a developer to analyze and debug applications in AWS), AWS CodeDeploy (automates code deployments to Elastic Compute Cloud (EC2) and on-premises servers), AWS CodeCommit (source code storage and version-control service), AWS Developer Tools, AWS CodeBuild (continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy), AWS Command Line Interface (unified tool to manage your AWS services), AWS OpsWorks (Chef-based), AWS CloudFormation (provides a common language for you to describe and provision all the infrastructure resources in your cloud environment), Amazon CodeGuru (for automated code reviews and application performance recommendations)
[Azure]: Azure Monitor, Azure DevOps, Azure Developer Tools, Azure CLI, Azure PowerShell, Azure Automation, Azure Resource Manager, VM extensions
[Google]: DevOps Solutions (Infrastructure as code, Configuration management, Secrets management, Serverless computing, Continuous delivery, Continuous integration), Stackdriver (combines metrics, logs, and metadata from all of your cloud accounts and projects into a single comprehensive view of your environment)
Tags: #CloudWatch, #StackDriver, #AzureMonitor, #AWSXray, #AWSCodeDeploy, #AzureDevOps, #GoogleDevopsSolutions
Differences: CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. Azure DevOps provides unlimited private Git hosting, cloud build for continuous integration, agile planning, and release management for continuous delivery to the cloud and on-premises. Includes broad IDE support.

SageMaker | Azure Machine Learning Studio

A collaborative, drag-and-drop tool to build, test, and deploy predictive analytics solutions on your data.

Alexa Skills Kit | Microsoft Bot Framework

Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.

Amazon Lex | Speech Services

API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.

Amazon Lex | Language Understanding (LUIS)

Allows your applications to understand user commands contextually.

Amazon Polly, Amazon Transcribe | Azure Speech Services

Enables both Speech to Text, and Text into Speech capabilities.
The Speech Services are the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It’s easy to speech enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs.
Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.

Amazon Rekognition | Cognitive Services

Computer Vision: Extract information from images to categorize and process visual data.
Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.

Face: Detect, identify, and analyze faces in photos.

Emotions: Recognize emotions in images.

Alexa Skills Kit | Azure Virtual Assistant

The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.

Big data and analytics

Data warehouse

AWS Redshift | SQL Data Warehouse

Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.

Big data processing

EMR | Azure Databricks

Apache Spark-based analytics platform.

EMR | HDInsight

Managed Hadoop service. Deploy and manage Hadoop clusters in Azure.

Data orchestration / ETL

AWS Data Pipeline, AWS Glue | Data Factory

Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.

AWS Glue | Data Catalog

A fully managed service that serves as a system of registration and system of discovery for enterprise data sources.

Analytics and visualization

AWS Kinesis Analytics | Stream Analytics

Data Lake Analytics | Data Lake Store

Storage and analysis platforms that create insights from large quantities of data, or data that originates from many sources.

QuickSight | Power BI

Business intelligence tools that build visualizations, perform ad hoc analysis, and develop business insights from data.

CloudSearch | Azure Search

Delivers full-text search and related search analytics and capabilities.

Amazon Athena | Azure Data Lake Analytics

Provides a serverless interactive query service that uses standard SQL for analyzing databases.

Compute

Virtual servers

Elastic Compute Cloud (EC2) | Azure Virtual Machines

Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.

AWS Batch | Azure Batch

Run large-scale parallel and high-performance computing applications efficiently in the cloud.

AWS Auto Scaling | Virtual Machine Scale Sets

Allows you to automatically change the number of VM instances. You set defined metrics and thresholds that determine whether the platform adds or removes instances.
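As a toy sketch (the thresholds and instance bounds below are made up, not either provider's actual policy engine), the add/remove decision both services describe looks like:

```python
def scaling_decision(avg_cpu_percent, instances, low=30.0, high=70.0,
                     min_instances=1, max_instances=10):
    """Return the new instance count for a simple threshold policy:
    scale out above `high`, scale in below `low`, stay within bounds."""
    if avg_cpu_percent > high and instances < max_instances:
        return instances + 1
    if avg_cpu_percent < low and instances > min_instances:
        return instances - 1
    return instances

print(scaling_decision(85.0, 3))  # 4 (scale out)
print(scaling_decision(20.0, 3))  # 2 (scale in)
print(scaling_decision(50.0, 3))  # 3 (within thresholds, no change)
```

In both platforms the evaluation runs against aggregated metrics (e.g. average CPU over a window) rather than instantaneous readings, which avoids flapping.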

VMware Cloud on AWS | Azure VMware by CloudSimple

Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.

Containers and container orchestrators

EC2 Container Service (ECS), Fargate | Azure Container Instances

Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.

EC2 Container Registry | Azure Container Registry

Allows customers to store Docker formatted images. Used to create all types of container deployments on Azure.

Elastic Container Service for Kubernetes (EKS) | Azure Kubernetes Service (AKS)

Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.

App Mesh | Service Fabric Mesh

Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking.
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.

Serverless

AWS Lambda | Azure Functions

Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code.
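The programming model can be sketched locally: you write a handler that receives an event payload, and the platform invokes it for you. The handler below and its event shape are hypothetical examples, not a real trigger payload:

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: build a response from the event.
    `event` is the trigger payload; `context` carries runtime metadata
    (unused here, so None works for a local invocation)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoke locally with a sample event, the way the runtime would.
response = handler({"name": "Alice"}, None)
print(response["statusCode"])  # 200
```

Azure Functions follows the same idea with a different signature and binding model.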

Database

Relational database

AWS RDS | SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL

Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases, and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed with a single API call, as AWS does not offer an SSH connection to RDS instances.

NoSQL / Document

DynamoDB and SimpleDB | Azure Cosmos DB

A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
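A rough sketch of what “multi-model” means in practice (names and fields below are made up for illustration): the same record can live as an opaque key-value pair or as a queryable document:

```python
import json

# Key-value model: an opaque value retrieved by its key.
kv_store = {"user:42": '{"name": "Alice", "city": "Paris"}'}

# Document model: the same data as a queryable JSON document.
document = {"id": "42", "name": "Alice", "city": "Paris"}

# Same record, different access model: the key-value lookup returns a
# blob to parse, while the document's fields are directly addressable.
print(json.loads(kv_store["user:42"])["city"])  # Paris
print(document["city"])                         # Paris
```

Graph and columnar models extend the same data with relationships and column-oriented access, respectively.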

Caching

AWS ElastiCache | Azure Cache for Redis

An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database.
Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.

Database migration

AWS Database Migration Service | Azure Database Migration Service

Migration of database schema and data from one database format to a specific database technology in the cloud.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

DevOps and application monitoring

AWS CloudWatch, AWS X-Ray | Azure Monitor

Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.

AWS CodeDeploy, AWS CodeCommit, AWS CodePipeline | Azure DevOps

A cloud service for collaborating on code development.
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.

AWS Developer Tools | Azure Developer Tools

Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services.
The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.

AWS CodeBuild | Azure DevOps

Fully managed build service that supports continuous integration and deployment.

AWS Command Line Interface | Azure CLI, Azure PowerShell

Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

AWS OpsWorks (Chef-based) | Azure Automation

Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources.
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.

AWS CloudFormation | Azure Resource Manager, VM extensions, Azure Automation

Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks.
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
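A minimal sketch of what that “simple text file” looks like, built here as a Python dict and serialized to JSON (the logical resource name and bucket name are hypothetical):

```python
import json

# A minimal template body expressed as a Python dict; CloudFormation
# accepts the same structure as a JSON (or YAML) text file.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: one S3 bucket declared as code.",
    "Resources": {
        "ExampleBucket": {  # hypothetical logical name
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-sketch-bucket"},
        }
    },
}

body = json.dumps(template, indent=2)
print("AWS::S3::Bucket" in body)  # True
```

Azure Resource Manager templates follow the same declare-resources-in-a-document idea with a different schema.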

Networking

Area

Cloud virtual networking

Virtual Private Cloud (VPC) | Virtual Network

Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.

Cross-premises connectivity

AWS VPN Gateway | Azure VPN Gateway

Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).

DNS management

AWS Route 53 | Azure DNS

Manage your DNS records using the same credentials and billing and support contract as your other Azure services.

Route 53 | Traffic Manager

A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.

Dedicated network

AWS Direct Connect | ExpressRoute

Establishes a dedicated, private network connection from a location to the cloud provider (not over the Internet).

Load balancing

AWS Network Load Balancer | Azure Load Balancer

Azure Load Balancer load-balances traffic at layer 4 (TCP or UDP).

Application Load Balancer | Application Gateway

Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
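The round-robin part can be sketched in a few lines (the backend addresses are made up; a real layer 7 balancer would also inspect the HTTP request, terminate SSL, and honor session affinity before picking a target):

```python
from itertools import cycle

# Hypothetical backend pool behind the load balancer.
backends = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

def route(request):
    """Round robin: each incoming request goes to the next backend in turn."""
    return next(backends)

targets = [route(f"GET /item/{i}") for i in range(4)]
print(targets)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Cookie-based affinity changes this rule so that repeat requests from the same client keep hitting the same backend.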

Internet of things (IoT)

AWS IoT | Azure IoT Hub

A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale.

AWS Greengrass | Azure IoT Edge

Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.

Kinesis Firehose, Kinesis Streams | Event Hubs

Services that allow the mass ingestion of small data inputs, typically from devices and sensors, to process and route the data.

AWS IoT Things Graph | Azure Digital Twins

Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.

Management

Trusted Advisor | Azure Advisor

Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.

AWS Usage and Billing Report | Azure Billing API

Services to help generate, monitor, forecast, and share billing data for resource usage by time, organization, or product resources.

AWS Management Console | Azure portal

A unified management console that simplifies building, deploying, and operating your cloud resources.

AWS Application Discovery Service | Azure Migrate

Assesses on-premises workloads for migration to Azure, performs performance-based sizing, and provides cost estimations.

Amazon EC2 Systems Manager | Azure Monitor

Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.

AWS Personal Health Dashboard | Azure Resource Health

Provides detailed information about the health of resources as well as recommended actions for maintaining resource health.

Security, identity, and access

Authentication and authorization

Identity and Access Management (IAM) | Azure Active Directory

Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.

Identity and Access Management (IAM) | Azure Role Based Access Control

Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

AWS Organizations | Azure Subscription Management + Azure RBAC

Security policy and role management for working with multiple accounts.

Multi-Factor Authentication | Multi-Factor Authentication

Safeguard access to data and applications while meeting user demand for a simple sign-in process.

AWS Directory Service | Azure Active Directory Domain Services

Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.

Cognito | Azure Active Directory B2C

A highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities.

AWS Organizations | Azure Policy

Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.

AWS Organizations | Management Groups

Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
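The inheritance rule can be sketched as a walk up the hierarchy (all group, subscription, and policy names below are hypothetical):

```python
# Each scope points at its parent; subscriptions inherit every policy
# assigned to an ancestor management group.
parent = {"sub-prod": "mg-engineering", "mg-engineering": "mg-root",
          "mg-root": None}
assigned = {"mg-root": ["require-tags"], "mg-engineering": ["allowed-regions"]}

def effective_policies(scope):
    """Collect policies from the scope and every ancestor above it."""
    collected = []
    while scope is not None:
        collected.extend(assigned.get(scope, []))
        scope = parent[scope]
    return collected

print(effective_policies("sub-prod"))  # ['allowed-regions', 'require-tags']
```

Adding a condition at the root therefore applies it to every subscription at once.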

Encryption

Server-side encryption with Amazon S3 Key Management Service | Azure Storage Service Encryption

Helps you protect and safeguard your data and meet your organizational security and compliance commitments.

AWS Key Management Service (KMS), CloudHSM | Key Vault

Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).

Firewall

Web Application Firewall | Application Gateway – Web Application Firewall

A firewall that protects web applications from common web exploits.

Web Application Firewall | Azure Firewall

Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.

Security

Inspector | Security Center

An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.

Certificate Manager | App Service Certificates available on the Portal

Service that allows customers to create, manage, and consume certificates seamlessly in the cloud.

GuardDuty | Azure Advanced Threat Protection

Detect and investigate advanced attacks on-premises and in the cloud.

AWS Artifact | Service Trust Portal

Provides access to audit reports, compliance guides, and trust documents from across cloud services.

AWS Shield | Azure DDoS Protection Service

Provides cloud services with protection from distributed denial of service (DDoS) attacks.

Storage

Object storage

Simple Storage Services (S3) | Azure Blob storage

Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.

Virtual server disks

Elastic Block Store (EBS) | Azure managed disks

SSD storage optimized for I/O intensive read/write operations. For use as high-performance Azure virtual machine storage.

Shared files

Elastic File System | Azure Files

Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.

Archiving and backup

S3 Infrequent Access (IA) | Azure Storage cool tier

Cool storage is a lower-cost tier for storing data that is infrequently accessed and long-lived.

S3 Glacier | Azure Storage archive access tier

Archive storage has the lowest storage cost and higher data retrieval costs compared to hot and cool storage.

AWS Backup | Azure Backup

Back up and recover files and folders from the cloud, and provide offsite protection against data loss.

Hybrid storage

Storage Gateway | StorSimple

Integrates on-premises IT environments with cloud storage. Automates data management and storage, plus supports disaster recovery.

Bulk data transfer

AWS Import/Export Disk | Import/Export

A data transport solution that uses secure disks and appliances to transfer large amounts of data. Also offers data protection during transit.

AWS Import/Export Snowball, Snowball Edge, Snowmobile | Azure Data Box

Petabyte- to exabyte-scale data transport solution that uses secure data storage devices to transfer large amounts of data to and from Azure.

Web applications

Elastic Beanstalk | App Service

Managed hosting platform providing easy to use services for deploying and scaling web applications and services.

API Gateway | API Management

A turnkey solution for publishing APIs to external and internal consumers.

CloudFront | Azure Content Delivery Network

A global content delivery network that delivers audio, video, applications, images, and other files.

Global Accelerator | Azure Front Door

Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scale-out with API-driven global actions, and independent fault-tolerance to your back end microservices in Azure—or anywhere.

Miscellaneous

Backend process logic

AWS Step Functions | Logic Apps

Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.

Enterprise application services

Amazon WorkMail, Amazon WorkDocs | Office 365

Fully integrated Cloud service providing communications, email, document management in the cloud and available on a wide variety of devices.

Gaming

GameLift, GameSparks | PlayFab

Managed services for hosting dedicated game servers.

Media transcoding

Elastic Transcoder | Media Services

Services that offer broadcast-quality video streaming services, including various transcoding technologies.

Workflow

Simple Workflow Service (SWF) | Logic Apps

Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.

Hybrid

Outposts | Azure Stack

Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.

How does a business decide between Microsoft Azure or AWS?

Basically, it all comes down to what your organizational needs are and whether there’s a particular area that’s especially important to your business (e.g., serverless, or integration with Microsoft applications).

The main factors it comes down to are compute options, pricing, and purchasing options.

Here’s a brief comparison of the compute option features across cloud providers:

Here’s an example of a few instances’ costs (all are Linux OS):

Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.

Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
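As a toy illustration of how a reservation discount plays out over a year (the hourly rates below are invented for the example, not real provider pricing):

```python
# Invented rates for illustration only -- not real provider pricing.
on_demand_rate = 0.10   # $/hour, pay-as-you-go
reserved_rate = 0.06    # $/hour with a 1-year commitment
hours_per_year = 24 * 365

on_demand_cost = on_demand_rate * hours_per_year
reserved_cost = reserved_rate * hours_per_year
savings_pct = 100 * (1 - reserved_cost / on_demand_cost)

print(f"On-demand: ${on_demand_cost:.2f}/year")  # $876.00/year
print(f"Reserved:  ${reserved_cost:.2f}/year")   # $525.60/year
print(f"Savings:   {savings_pct:.0f}%")          # 40%
```

The trade-off is commitment: the reservation is cheaper only if the instance actually runs for most of the term.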

Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.

With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.

You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post.

Source: AWS to Azure services comparison – Azure Architecture

Pros and Cons of Cloud Computing

Cloud User Insurance and Cloud Provider Insurance


What are the Pros and Cons of Cloud Computing?

Cloud computing is the new big thing in Information Technology. Sooner or later, every business will adopt it because of its hosting cost benefits, scalability, and more.

This blog outlines the pros and cons of cloud computing, along with FAQs, facts, and questions and answers about it.


What is cloud computing?

Cloud computing is an information technology paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.
Simply put, cloud computing is the delivery of computing services including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change.

What are the Pros of using cloud computing? What are characteristics of cloud computing?

  • Trade Capital expense for variable expense
  • Benefit from massive economies of scale
  • Stop guessing capacity
  • Increase speed and agility
  • Stop spending money on running and maintaining data centers
  • Go global in minutes




  • Cost effective & Time saving: Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters; the racks of servers, the round-the-clock electricity for power and cooling, and the IT experts for managing the infrastructure.
  • The ability to pay only for cloud services you use, helping you lower your operating costs.
  • Powerful server capabilities and Performance: The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
  • Powerful and scalable server capabilities: The ability to scale elastically; That means delivering the right amount of IT resources—for example, more or less computing power, storage, bandwidth—right when they’re needed, and from the right geographic location.
  • SaaS (Software as a Service): a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet, or PC.
  • PaaS (Platform as a Service): cloud computing services that supply an on-demand environment for developing, testing, delivering, and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network, and databases needed for development.
  • IaaS (Infrastructure as a Service): the most basic category of cloud computing services. With IaaS, you rent IT infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a cloud provider on a pay-as-you-go basis.
  • Serverless: Running complex Applications without a single server. Overlapping with PaaS, serverless computing focuses on building app functionality without spending time continually managing the servers and infrastructure required to do so. The cloud provider handles the setup, capacity planning, and server management for you. Serverless architectures are highly scalable and event-driven, only using resources when a specific function or trigger occurs.
  • Infrastructure provisioning as code: helps recreate the same infrastructure by re-running the same code in a few clicks.
  • Automatic and Reliable Data backup and storage of data: Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network.
  • Increase Productivity: On-site datacenters typically require a lot of “racking and stacking”—hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.
  • Security: Many cloud providers offer a broad set of policies, technologies, and controls that strengthen your security posture overall, helping protect your data, apps, and infrastructure from potential threats.
  • Speed: Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning. In a cloud computing environment, new IT resources are only a click away, which means the time it takes to make those resources available to your developers drops from weeks to minutes. As a result, the organization experiences a dramatic increase in agility, because the cost and time it takes to experiment and develop are lower.
  • Go global in minutes
    Easily deploy your application in multiple regions around the world with just a few clicks. This means that you can provide a lower latency and better experience for your customers simply and at minimal cost.
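The “infrastructure provisioning as code” point above is easiest to see in miniature: a provisioning routine declares the desired state as data and converges toward it, so re-running the same code yields the same infrastructure instead of duplicating it. Below is a minimal Python sketch; the in-memory `cloud` dictionary and the resource names are hypothetical stand-ins for a real provider API such as Terraform or boto3.

```python
# Minimal sketch of idempotent infrastructure-as-code:
# the desired state is declared as data, and "provisioning"
# converges a (hypothetical, in-memory) cloud toward it.

DESIRED = {
    "web-server": {"type": "vm", "size": "small"},
    "app-data":   {"type": "bucket", "versioning": True},
}

def provision(cloud: dict, desired: dict) -> dict:
    """Create or update resources so `cloud` matches `desired`."""
    for name, spec in desired.items():
        cloud[name] = dict(spec)  # create the resource, or overwrite it to spec
    return cloud

cloud = {}
provision(cloud, DESIRED)
provision(cloud, DESIRED)  # re-running changes nothing: same code, same infrastructure
print(cloud["web-server"]["size"])  # -> small
```

Real tools add diffing (only touch resources that drifted) and deletion of resources no longer declared, but the convergence idea is the same.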

What are the Cons of using cloud computing?

  • Privacy: Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time, and could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order, without a warrant. This is permitted in their privacy policies, which users must agree to before they start using cloud services.
  • Security: According to the Cloud Security Alliance, the top three threats in the cloud are insecure interfaces and APIs, data loss and leakage, and hardware failure, which accounted for 29%, 25%, and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities.
  • Ownership of Data: There is the problem of legal ownership of the data (If a user stores some data in the cloud, can the cloud provider profit from it?). Many Terms of Service agreements are silent on the question of ownership.
  • Limited Customization Options: Cloud computing is cheaper because of economies of scale, and—like any outsourced task—you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want.
  • Downtime: Technical outages are inevitable and occur sometimes when cloud service providers (CSPs) become overwhelmed in the process of serving their clients. This may result in temporary business suspension.
  • Insurance: It can be expensive to insure the customer and business data and infrastructure hosted in the cloud. Cyber insurance is necessary when using the cloud.
  • Other concerns of cloud computing.

      • Users with specific records-keeping requirements, such as public agencies that must retain electronic records according to statute, may encounter complications with using cloud computing and storage. For instance, the U.S. Department of Defense designated the Defense Information Systems Agency (DISA) to maintain a list of records management products that meet all of the records retention, personally identifiable information (PII), and security (Information Assurance; IA) requirements.
      • Cloud storage is a rich resource for both hackers and national security agencies. Because the cloud holds data from many different users and organizations, hackers see it as a very valuable target.
      • Piracy and copyright infringement may be enabled by sites that permit filesharing. For example, the CodexCloud ebook storage site has faced litigation from the owners of the intellectual property uploaded and shared there, as have the GrooveShark and YouTube sites it has been compared to.

What are the different types of cloud computing?



  • Public clouds: A cloud is called a “public cloud” when the services are rendered over a network that is open for public use. Public clouds are owned and operated by third-party cloud service providers, which deliver their computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser. For infrastructure as a service (IaaS) and platform as a service (PaaS), Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) hold a commanding position among the many cloud companies.
  • Private cloud: Cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. A private cloud refers to cloud computing resources used exclusively by a single business or organization. It can be physically located in the company’s on-site datacenter, and some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network.
  • Hybrid cloud: A composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed, and/or dedicated services with cloud resources. Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility, more deployment options, and helps optimize your existing infrastructure, security, and compliance.
  • Community cloud: A collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns, whether managed internally or by a third party and hosted internally or externally. It is controlled and used by a group of organizations that have a shared interest. The costs are spread over fewer users than a public cloud, so only some of the cost savings potential of cloud computing is realized.


Other AWS Facts and Summaries and Questions/Answers Dump

Reference

Cloud User insurance and Cloud Provider Insurance


In this blog, we are going to explore cloud insurance from both the cloud user’s and the cloud provider’s perspective, and provide some suggestions and recommendations.

As a cloud user, cloud customer, or company storing customer data in the cloud, you probably have a lot of personal or private data hosted in various cloud infrastructure. Losing that data, or having it accessed by hackers or an unauthorized third party, can be very harmful both financially and emotionally to you or your customers. Cloud user or customer insurance can protect you against lost or stolen data. Practically, cloud computing insurance is a cyber liability policy that covers web-based services. Before looking for customer insurance in the cloud, you need to clarify “What data should the insurance cover, and under which governing laws?” and “What data can be considered a loss?”. The good news is: as cloud adoption increases in the insurance industry, insurers have the opportunity to better understand their operating models and to implement tailored insurance solutions for the cloud.

Cloud Data loss can happen in the following forms:

First-party losses: losses where the cloud provider incurs damages directly. These types of losses include:


  • Destruction of Data
  • Denial of Service Attack (DOS)
  • Virus, Malware and Spyware
  • Human Error
  • Electrical Malfunctions and Power Surges in data centers
  • Natural Disasters
  • Network Failures
  • Cyber Extortion

Each of the above exposures to loss would result in direct damages to the insured, or first-party loss.

Third-party losses: damages that would occur to customers outside of the cloud provider. These types of losses include:

  • Breach of Privacy
  • Misuse of Private Personal Information
  • Defamation or Slander
  • Transmission of Malicious Content

The above exposures could result in a company being held liable for the damages caused to others (liability).



Cyber insurance is a form of insurance for businesses and individuals against internet-based risks. The most common risk that is insured against is data breaches. … It also covers losses from network security breaches, theft of intellectual property and loss of privacy.

Data Compromise coverage insures a commercial entity when there is a data breach, theft or unauthorized disclosure of personal information. … Thus Cyber Liability covers both the expenses to notify affected individuals of data breaches and the expenses to make the insured whole for their own damages incurred.

Some insurance companies that specialize in Cyber Insurance include:


Contact an Independent Insurance Agent near you that writes Cyber Insurance and ask them to get multiple quotes for your business.

However, a more effective risk management solution might be loss control rather than risk financing. If you encrypt your data at rest, set up and adopt a process of automatic regular backups, and geographically distribute those backups, then you have effectively minimized the potential costs of loss.

Cyber insurance is not yet as standardized as many other forms of commercial insurance. Therefore, breadth of coverage and pricing can vary widely.

Below is AWS commitment to data privacy:

  • Access: As a customer, you maintain full control of your content and responsibility for configuring access to AWS services and resources. We provide an advanced set of access, encryption, and logging features to help you do this effectively (e.g., AWS Identity and Access Management, AWS Organizations and AWS CloudTrail). We provide APIs for you to configure access control permissions for any of the services you develop or deploy in an AWS environment. We do not access or use your content for any purpose without your consent. We never use your content or derive information from it for marketing or advertising.
  • Storage: You choose the AWS Region(s) in which your content is stored and the type of storage. You can replicate and back up your content in more than one AWS Region. We will not move or replicate your content outside of your chosen AWS Region(s) without your consent, except as legally required and as necessary to maintain the AWS services.
  • Security: You choose how your content is secured. We offer you strong encryption for your content in transit and at rest, and we provide you with the option to manage your own encryption keys. These features include:
    • Data encryption capabilities available in AWS storage and database services, such as Amazon Elastic Block Store, Amazon Simple Storage Service, Amazon Relational Database Service, and Amazon Redshift.
    • Flexible key management options, including AWS Key Management Service (KMS), allow customers to choose whether to have AWS manage the encryption keys or enable customers to keep complete control over their keys.
    • AWS customers can employ Server-Side Encryption (SSE) with Amazon S3-Managed Keys (SSE-S3), SSE with AWS KMS-Managed Keys (SSE-KMS), or SSE with Customer-Provided Encryption Keys (SSE-C).
  • Disclosure of customer content: We do not disclose customer information unless we’re required to do so to comply with a legally valid and binding order. Unless prohibited from doing so or there is clear indication of illegal conduct in connection with the use of Amazon products or services, Amazon notifies customers before disclosing content information.
  • Security Assurance: We have developed a security assurance program that uses best practices for global privacy and data protection to help you operate securely within AWS, and to make the best use of our security control environment. These security protections and control processes are independently validated by multiple third-party independent assessments.
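The SSE-S3, SSE-KMS, and SSE-C options above boil down to a couple of extra request parameters on each S3 upload. As a sketch, the helper below only assembles the keyword arguments you would pass to a call such as boto3’s `s3.put_object`; the bucket and key names are hypothetical, SSE-C is omitted for brevity, and no AWS call is made here.

```python
def sse_upload_params(bucket, key, body, mode="SSE-S3", kms_key_id=None):
    """Build put_object-style keyword arguments that select S3
    server-side encryption.

    SSE-S3  -> ServerSideEncryption="AES256" (S3-managed keys)
    SSE-KMS -> ServerSideEncryption="aws:kms" (+ optional SSEKMSKeyId)
    """
    params = {"Bucket": bucket, "Key": key, "Body": body}
    if mode == "SSE-S3":
        params["ServerSideEncryption"] = "AES256"
    elif mode == "SSE-KMS":
        params["ServerSideEncryption"] = "aws:kms"
        if kms_key_id:
            params["SSEKMSKeyId"] = kms_key_id
    else:
        raise ValueError("unsupported mode: %s" % mode)
    return params

# Usage sketch (requires boto3 and AWS credentials, not shown here):
#   s3 = boto3.client("s3")
#   s3.put_object(**sse_upload_params("my-bucket", "data.bin", b"...",
#                                     mode="SSE-KMS", kms_key_id="alias/app-key"))
```

If no encryption parameter is sent at all, S3 has defaulted to SSE-S3 for new objects since early 2023, but passing it explicitly keeps the choice visible in code.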

Microsoft Azure Data Privacy and protection Commitment

Google Cloud commitment to data privacy and security:

What types of business insurance are available?

  • Property and Casualty Insurance: Property insurance covers the physical location of the business and its contents from things like fire, theft, flood, and earthquakes—although read the terms carefully to make sure they include everything you need. Casualty insurance, on the other hand, covers the operation of the business, but the two are usually grouped together in policies.
  • Auto Insurance: Auto insurance protects you against financial loss if you have an accident. It is a contract between you and the insurance company.
  • Liability Insurance: Liability insurance provides protection against claims resulting from injuries and damage to property.
  • Business Interruption Insurance: Business interruption insurance can make up for lost cash flow and profits incurred because of an event that has interrupted your normal business operations.
  • Health and Disability Insurance: Health insurance provides health coverage for you and your employees. Disability insurance covers your employees for the expenses and loss of income caused by non-work-related injuries, illnesses, disabilities, and death from any cause.
  • Life Insurance: Life and disability insurance covers your business in the event of the death or disability of key owners.
  • Cyber Insurance: Cover Data loss, destruction of data, privacy breach, Denial of Service Attack (DOS), Network failure, Transmission of Malicious Content, Misuse of personal or private information, etc.
  • Crime & Employee Dishonesty Insurance: To cover your business for fraudulent acts committed by your employees, e.g. theft or embezzlement of money, securities, and other business-owned property and for burglary, theft, and robbery of cash and other representations of money, e.g. money orders, postage stamps, travelers checks, and readily convertible securities, e.g. bearer bonds;
  • Mandatory Workers Compensation Insurance: To cover your employees for injuries and illnesses sustained during the course of employment. This would include medical expenses and loss of income due to a work-related disability;
  • Transportation/Inland & Ocean Marine Insurance: To pay for loss of damage to property you own or are responsible for while it is being transported or shipped to or from customers, manufacturers, processors, assemblers, warehouses, etc. by air, ship, or land vehicles either domestically or internationally.
  • Umbrella Liability Insurance: To provide an additional layer of liability insurance over your primary automobile liability, general liability, employers liability, and, if applicable, watercraft or aircraft liability policies;
  • Directors & Officers Liability Insurance: To defend your business and its directors or officers against allegations that they mismanaged the business in some way which caused financial loss to your clients (and/or others) and pay money damages in a court trial or settlement;
  • Condo Unit Owners Personal Insurance & Landlord / Rental Property Insurance: Covers expenses that come from having a loss within your property. Whether you are living in your unit or not, it is your responsibility to ensure that your personal assets and liabilities are adequately protected by your own personal insurance policy. This coverage includes all the content items that are brought into a unit or stored in a storage locker or on the premises, such as furnishings, electronics, clothing, etc. Most policies out there will also cover personal property while it is temporarily off premises, on vacation for example.
  • Landlord property coverage is to protect the property that you own within your rental unit, which includes, but is not limited to, appliances, window coverings, or, if you rent out your unit fully furnished, all of that property that is yours.
  • Rental property insurance coverage allows you to protect your revenue source. Your property is your responsibility, and if your property gets damaged by an insured peril and your tenant can’t live there for a month or two (or more), you can purchase insurance to replace that rental income for the period of time your property is uninhabitable.

Do online businesses need insurance?

All businesses need insurance. Here are some suggestions:

Property Insurance: To cover your owned, non-owned, and leased business property (contents, buildings if applicable, computers, office supplies, and any other property that you need to operate your business) for such perils as fire, windstorm, smoke damage, water damage, and theft.

EDP Insurance: To cover your computer hardware and software for such perils as mechanical breakdown and electrical injury;

Cyber Property and Liability Insurance: To cover your business for its activities on the Internet. Cyber property coverages apply to losses sustained by your company directly. An example is damage to your company’s electronic data files caused by a hacker/security breach. Cyber liability coverages apply to claims against your company by people who have been injured as a result of your actions or failure to act. For instance, a client sues you for negligence after his personal data, e.g. credit card numbers or confidential information, is stolen from your computer system and released online.


Loss of Income (Business Interruption) Insurance: To cover your business for the loss of income you would sustain because it was damaged by a covered peril under your property insurance, e.g. fire, windstorm, smoke damage, and theft;

Read this blog about insurance for E-commerce

Thinking of purchasing cyber insurance? Make sure the policy you choose covers more than ransomware payments. Paying cyber criminals should be a last resort. Your policy should include cleaning & rebuilding current systems, hiring experts, & purchasing new protections.

Resource:

1- Quora

2- AWS Data privacy

3- Does cyber insurance make sense?

4- What does cyber insurance do? What does it protect?

The purpose of cyber security is to protect all forms of digital data: personal information (SSN, credit card information, etc.), proprietary information (Facebook algorithms, Tesla vehicle designs, etc.), and other forms of digital data.

5- Cloud based Insurance Providers


* https://www.cloudinsurance.io

6- Understanding Cloud insurance

Cloud computing insurance is meant to protect a cloud provider. The implementation of a system and the preservation of important information comes with risks. If anything goes wrong, such as an outage at a critical time that results in business interruption, your client can hold you responsible and seek damages. Cloud insurance can not only provide compensation to your client as a result of a claim against you, but can also cover your legal defense and lost income.

7- Ransomware still dominates the cyber threat landscape in 2019: Europol report.
