AWS Azure Google Cloud Certifications Testimonials and Dumps

Register for AI-Driven Cloud Cert Prep Dumps


Do you want to become a professional DevOps engineer, a cloud solutions architect, a cloud engineer, a modern developer or IT professional, a versatile product manager, or a hip project manager? Cloud skills and certifications can be just the thing you need to make the move into cloud, or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

In this blog, we share AWS, Azure, and GCP cloud certification testimonials along with frequently asked questions and answers dumps.

#djamgatech #aws #azure #gcp #ccp #az900 #saac02 #saac03 #az104 #azai #dasc01 #mlsc01 #scsc01 #azurefundamentals #awscloudpractitioner #solutionsarchitect #datascience #machinelearning #azuredevops #awsdevops #az305 #ai900


Get it on Apple Books

  • Azure Solutions Architect Expert
    by /u/TrueAlbanian101 (Microsoft Azure Certifications) on September 25, 2022 at 1:03 pm

    Sorry if this is a dumb question, wanted to be sure before going for the next cert! I currently have the AZ-104 certification and am planning to take AZ-305. It looks like when you complete these two, you receive the Azure Solutions Architect Expert certification. Do you need to renew AZ-104 and AZ-305 each year once you're an Expert, or does it get combined into a single test? Thank you!

  • SC-900 certificate design changed?
    by /u/Puzzleheaded_Lime234 (Microsoft Azure Certifications) on September 25, 2022 at 4:20 am

    Has the certificate design of SC-900 changed (Sep 2022)? I passed my exam today; when will I receive the Credly badge?

  • Ignite 2022. Any free cert this time?
    by /u/ahmedranaa (Microsoft Azure Certifications) on September 24, 2022 at 4:37 pm

    Microsoft Ignite runs 12-14 October 2022. Does anyone know the list of free certs offered at it, if any?

  • DP-203 Attempt
    by /u/teja1394 (Microsoft Azure Certifications) on September 24, 2022 at 3:50 pm

    Exam scheduled: 24 Sep 2022, 18:00 IST. Number of questions: 43. Time: 100 min. Score: 648 (failed). Out of the 43 questions, one scenario was given with 5 questions based on it (these could not be reviewed at the end). In the remaining 38, there were 5 scenario-based questions where a few services were given and I had to answer yes or no on whether each was appropriate (these also could not be reviewed). My questions were mostly on Synapse and related services; I also got questions on Data Factory, triggers, Stream Analytics, and syntax questions on MySQL and Spark. I felt the exam was moderate in difficulty, with most questions based on scenarios. Resources I used: Eshant Garg's Udemy course (covers most topics, but I feel it is not sufficient on its own to clear the exam) and Whizlabs practice tests (decent, and covered the topics). Why I think I didn't clear it: I did not go through the MS learning paths much. I previously cleared AZ-900 and DP-900 on the first attempt, when I did complete the learning paths. I will look at my detailed exam report and give it another shot after more preparation!

  • Query Regarding Exam Voucher
    by /u/ahmedtm1 (Google Cloud Platform Certification) on September 24, 2022 at 9:55 am

    A few months back I got an exam voucher and registered for the exam. The voucher expires on 30th Sep, but I have already used it. Now I feel I'm not well prepared for the exam, so I want to reschedule it for next month. My question: will they charge me if I reschedule the exam for next month, given that the voucher's expiry date is 30th Sep?

  • Best trainer for GCP - Architect?
    by /u/uuufffff (Google Cloud Platform Certification) on September 23, 2022 at 4:07 pm

    Hi, I'm aiming at the Google PCA cert. For context, I'm a certified architect in both Azure and AWS. Who's the best trainer for Google Cloud? Looking for someone like John Savill for Azure. Thanks in advance.

  • Recertification Az-200
    by /u/ahmedtm1 (Microsoft Azure Certifications) on September 23, 2022 at 10:28 am

    Recently, I received an email from Microsoft that my Az-200 certification will expire in 6 months, and that to keep my certificate active for 1 more year I have to pass an assessment. It's my first time taking the recertification assessment. Can anyone tell me what it is like? How much time do we get and how many questions are there? Also, how complex are the questions, and what are the passing criteria? And when should I take the assessment?

  • Managing multiple employees Microsoft Learn efforts
    by /u/sinclairzx10 (Microsoft Azure Certifications) on September 22, 2022 at 8:47 pm

    Hey all, we're a direct CSP looking for a way to manage our team's efforts in Microsoft Learn. Is there a centralized portal where we can see the overall progress of our team? From what I can gather, I can only see our aligned certifications on the new solution designation competency page; there is no overarching portal to view everyone's progress. Does anyone have recommendations on how to navigate this challenge, or can you point me in the right direction?

  • Passed the AZ-140 today!!
    by /u/TrinsicX (Microsoft Azure Certifications) on September 22, 2022 at 8:18 pm

    I passed the (updated?) AZ-140, AVD specialty exam today with an 844. First MS certification in the bag! Edited to add: This video series from Azure Academy was a TON of help. https://youtube.com/playlist?list=PL-V4YVm6AmwW1DBM25pwWYd1Lxs84ILZT

  • New Twitch show! All Things AWS, a variety-style show for the cloud-curious
    by Deborah Strickland (AWS Training and Certification Blog) on September 22, 2022 at 5:02 pm

    We’re rolling out a brand-new Twitch series called All Things AWS! This new series helps you learn cloud concepts while having a little bit of goofy fun. All Things AWS features a mix of everything that makes learning great, with informative segments, Q&As with AWS leaders, skits, spoofs, and games. Join us on Thursday, September 29, at 4:00 p.m. PT.

  • How LG CNS is creating future AI leaders with immersive machine learning training
    by (Training & Certifications) on September 22, 2022 at 4:00 pm

    As a Korean technology leader specializing in digital transformation (DX), LG CNS partners with customers to help them achieve digital growth across fields including cloud, artificial intelligence (AI), big data, smart factory, and smart logistics. Towards the end of 2020, LG CNS was investigating how to drive digital transformation and growth within its own ranks to demonstrate and extend its core competitive advantage as a consulting and system integration service provider. By enhancing their internal teams' AI capabilities, they would ensure they could provide even more innovation and technical expertise for their customers' own transformation journeys, maintain their position in the market, and continue to provide creative insights and thought leadership both within Korea and overseas. After investigating several external training programs, LG CNS decided to partner with Google Cloud Learning Services to guarantee the right level of training and support to further elevate their world-class team. They recognized and respected Google's global leadership in AI and machine learning (ML), and believed Google was the only partner that could elevate the company to a global level, in line with its already prominent position within the Korean market. LG CNS's employees participated in five weeks of machine learning training and one week of machine learning operations (MLOps) training through the Advanced Solutions Lab (ASL) via ASL Virtual. This immersive learning program enabled participants to collaborate and learn directly from Google Cloud engineers and ML experts without having to attend a Google campus. LG CNS's participants for this transformative training program were selected based on strict criteria: all were considered high performers within LG CNS and held 1-2 years of practical experience in the AI/ML field.
    With continuous competency development and care, many will progress to become AI development leaders within the company, ensuring LG CNS can remain at the forefront of its field and collaborate with other businesses to fuel their own DX solutions. "I hope that all trainees can grow through the Google Cloud ASL program, which has the world's best AI technology, and I look forward to taking responsibility for leading AI in each division," said an SVP of LG CNS. Google Cloud Learning Services' early involvement in the organizational stages of this training process, and agile response to LG CNS's requirements, ensured LG CNS could add the extra week of MLOps training to its program as soon as it began the initial ASL ML course. This productive, collaborative experience demonstrated the strength and flexibility of Google Cloud Learning Services and its capacity to tailor virtual learning content to the needs of a specific client and their business objectives. Following the success of this first round in 2021, LG CNS ran ASL ML/MLOps training for another cohort in 2022 to cultivate even more AI and ML expert groups. This empowers LG CNS to build even more capacity within its workforce to continue advancing the breakthrough technologies that support its customers' own digital transformation and innovation. To learn more about how you can drive business innovation in your own organization through cloud education services, visit Google Cloud Training & Certification and get started on your own learning journey.

  • On-Prem Sys Admin Moving towards Cloud Administration - Need help on Material
    by /u/pratyathedon (Microsoft Azure Certifications) on September 22, 2022 at 6:09 am

    Hi all, I have 8 years of on-prem system admin experience for oil/gas and pharma companies, with a background in DCS systems. I also have application development experience for industry-specific requirements (VB, Python, Node, SQL/Mongo). I want to put more effort into Azure certifications. I cleared AZ-900 about 4 weeks ago and now wish to move ahead with AZ-104 and then AZ-204. I will be using the materials below for AZ-104. Study (theoretical): 1) John Savill's Technical Training, 2) freeCodeCamp.org AZ-104 / AZ-204 crams, 3) MS Learn. Labs: 1) https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/, 2) labs from Whizlabs. Test prep: 1) TD / MeasureUp. I need help in one more area: can someone recommend a few more links/websites with lab exercises? I learn faster and better when I do hands-on work. Also, I hope these resources are enough for clearing AZ-104.

  • AZ-700 — what jobs can you be considered for?
    by /u/Yzzaf_3 (Microsoft Azure Certifications) on September 22, 2022 at 12:37 am

    I'm currently in an Azure networking role and studying for AZ-700. Curious what jobs I can be considered for with AZ-700 plus overall experience working in an Azure network environment.

  • Fun ways to learn besides docs/acg/youtube
    by /u/Neverchoosered (Microsoft Azure Certifications) on September 21, 2022 at 10:57 pm

    Hello all. Just seeing if anyone has any cool resources to help study and practice for the AZ-104. I passed the AZ-900 with just the docs, but it seemed pretty practical. I have ACG and have watched some YouTube videos, along with the Microsoft docs. Is there anything else out there to make it more fun, like building something cool to learn along the way and then deleting it? I messed around with the AWS Cloud Practitioner game and it was pretty cool, but I haven't found anything similar or as enjoyable for Azure. Sure, videos and reading get it done, but I get bored too quickly and switch to something else. Thanks in advance!

  • What Azure certification(s) are best If I want to manage applications within an azure environment?
    by /u/TW3RKUL33Z (Microsoft Azure Certifications) on September 21, 2022 at 6:29 pm

    I recently got the AZ-900 and plan to test on the AZ-104 soon. I am a sysadmin at a company and manage an Azure environment along with application management, deployment, and automation for the applications we use. Specifically, we are going to take on app identity management, access management, automation, deployment, and security. I have looked into the Azure development paths and documentation, and it seems AZ-500 would be another good next step. Any other suggestions?

  • Google launches dedicated cloud training program for Ukrainians
    by (Training & Certifications) on September 21, 2022 at 4:00 pm

    Editor's note: This blog was originally published in Ukrainian on September 14, 2022. Around the world, organizations across multiple industries are in the midst of digitally transforming their businesses. Ukrainian businesses are no different: they are looking for new ways to survive, grow, and thrive digitally in an unstable and uncertain environment. The driving force behind these digital transformations will be people trained in the skills required to implement and maintain large-scale cloud deployments, particularly in areas like artificial intelligence, machine learning, data analytics, application development, security, and cloud architecture. We want to help the Ukrainian people working for, and running, these organizations to learn new cloud technology skills, empowering them to build and grow organizations that will support the future of their country. Today we are launching the "Grow your career with Google Cloud" program for Ukrainians looking to develop world-class, practical cloud skills to fuel the rapid digital transformation of Ukraine and expand job opportunities for IT professionals. Our aim is to train up to 10,000 Ukrainians in cloud technology by the end of 2023. The program is created for: IT specialists and developers who want to develop their cloud skills and career, and IT students who will graduate next year and need cloud skills for future jobs. For effective learning, at least a basic level of English is recommended.
    What participants will receive: two months of no-cost access to Google Cloud Skills Boost, the definitive destination for Google Cloud learning; the opportunity to gain real-world, hands-on experience by earning Google Cloud skill badges, which validate an individual's cloud skills in support of their career goals; individual learning paths aligned to job roles (curated collections of content combining on-demand courses with hands-on learning; for beginners, we recommend the Getting Started with Google Cloud learning path, as the content is localized in Ukrainian); prizes for earning three or more skill badges; and access to cloud experts at regular Q&A sessions. More details are here. Ready to get started? Click here to register. Participants will be organized into cohorts, with the first cohort starting October 4, 2022. Once you have completed registration, you will receive an email with next steps, including information about upcoming webinars and cloud-expert-led Q&A sessions. More information about the "Grow your career with Google Cloud" program is here.

  • A few administrative tips for new ESI users.
    by /u/IT_ISNT101 (Microsoft Azure Certifications) on September 21, 2022 at 12:17 pm

    Hey everyone, I know there are quite a few ESI users here, and after having done a *lot* of the ESI courses and exams I thought it might be useful to share some tips, and to see if anyone else has tips for making it the best experience possible. 1) Multi-day invites: potential to be marked as not attending. I am currently on a course, and the instructor explained that you should never reuse the same link in your calendar (i.e., don't just reuse the day 1 invite). The reason is that the link automatically updates your attendance on the course, and if the instructor isn't vigilant about making sure everyone attends "in the system," you may not get credit. I am currently trying to remediate exactly this situation. 2) Check your enrolments. Not everyone wants to take the exam, but it's good to have proof that you attended. The link to use is https://esi.learnondemand.net/. Course attendance is listed under "My Training" with a completion status (see the warning above). I was looking for this and didn't even realise it was there! 3) Check your courses before you click on them to make sure they are correct. It sounds obvious, but with that many languages and time zones it is easy to end up on one at an inconvenient time or, even worse, in a language you don't speak. It pays to check in advance so that a cancellation doesn't result in a black mark. 4) Don't be tempted to use your own Azure subscription for the labs. I know CloudSlice isn't great and it's slow, but the one advantage is that your instructor can actually see your lab and, if you get stuck, help figure out what's wrong. They can't do that if you use your own sub. That's it for now. Hope it helps.

  • Azure Conditional Access Policies - AZ 303
    by /u/BitAccomplished3461 (Microsoft Azure Certifications) on September 20, 2022 at 9:19 pm


  • DP-900 Exam Prep Question
    by /u/HavenHexed (Microsoft Azure Certifications) on September 20, 2022 at 5:54 pm

    I have been through the Microsoft Learn modules, watched John Savill's exam cram, and gone through Scott Duffy's videos on Udemy for the DP-900. I bought the exam reference with the intention of reading it through, but it seems far more in-depth than what the videos and Learn modules presented. I did sign up for the Microsoft Virtual Training Days for Data Fundamentals. Is all the extra information in the book needed? I am not a data admin or anything; I am more of a systems admin, with my boss wanting me to learn more about Azure. I have taken the AZ-900 and figured this would be a good one as well. Just trying to figure out if the book is worth reading for the exam.

  • New Twitch season of AWS Power Hour: Architecting starts September 27th!
    by Lauren Cutlip (AWS Training and Certification Blog) on September 20, 2022 at 3:47 pm

    AWS Power Hour: Architecting is back with six fun, engaging episodes that help you prepare for the newly updated AWS Certified Solutions Architect - Associate exam. Join us on Tuesdays at 7:30 a.m. PT from September 27 to November 1 for this live Twitch series.

  • Az 204 Renewal Question
    by /u/Pure-Question-6464 (Microsoft Azure Certifications) on September 20, 2022 at 12:02 pm

    I recently got an email informing me that my cert will expire in March 2023. I am wondering: if I take the renewal exam now instead of in February 2023, will the subsequent expiry date be counted from now or from March 2023?

  • struggling with az-104
    by /u/scootscootman1 (Microsoft Azure Certifications) on September 20, 2022 at 11:24 am

    Hi all, I've been studying for AZ-104 for coming up on 5 months, and also using Azure daily at my job (amongst a barrel of other random IT systems), and I am really struggling. I decided to buy the MeasureUp exam and got 33%. I already failed AZ-104 once with a 685. For the next attempt my work has given me a voucher, so I can't afford to fail it. I think I did so badly on MeasureUp because it was my first attempt and I'm really tired right now. I think I am just having problems remembering all the material. My memory isn't great, and I have anxiety and depression, which makes that much worse. I'm actually starting to question whether it's even possible for me to pass this. I've done so much study and was so enthusiastic, but now I'm just dead; my employer thinks I'm useless because they thought I should have passed within 3 months. Sometimes I try to study and my brain just can't tune in; I get distracted by any small thing. I really want to focus on it, but I think my passion for this is just gone. What should I do? Maybe I'm using the wrong training materials? So far: 80% complete on the A Cloud Guru AZ-104 course; some of the Microsoft Learn modules; Whizlabs practice exams, all complete, 4-5 attempts with 80%+; and the MeasureUp exam today, 33% on the first go. I've been watching some John Savill and also have my own tenant: spun up various VMs, played with policy and backups, Log Analytics workspaces, load balancers, NSGs. What am I doing wrong? I'm beginning to think I'm just dumb. Or it's not that I'm dumb, as it's not like I can't understand what I'm reading; I just forget it quite quickly for some reason.

  • AZ900 cert.
    by /u/jtect (Microsoft Azure Certifications) on September 19, 2022 at 6:06 pm

    Is anyone still doing the AZ-900 cert?

  • Curriculum for AZ 900
    by /u/Engineer2309 (Microsoft Azure Certifications) on September 19, 2022 at 5:03 pm

    Hi. Anyone planning to take AZ-900 this week? I feel like the syllabus has been cut short; I am not able to find modules on AI, IoT, or Big Data services on the official site. Can anyone confirm? https://learn.microsoft.com/en-us/certifications/exams/az-900

  • Which one should I go for AZ-305 or AZ-400
    by /u/SnooHedgehogs1443 (Microsoft Azure Certifications) on September 19, 2022 at 12:00 am

    Hi all, which one is the better one to have on your resume?

  • Passed AI-900! Tips & Resources Included!!
    by /u/_-readit-_ (Microsoft Azure Certifications) on September 18, 2022 at 10:33 pm

    Huge thanks to this subreddit for helping me kick-start my Azure journey. I have over 2 decades of experience in IT, and this is my 3rd Azure certification, as I already have AZ-900 and DP-900. Here's the order in which I passed my AWS and Azure certifications: SAA>DVA>SOA>DOP>SAP>CLF|AZ-900>DP-900>AI-900. I had no plans to take this certification now but had to, as the free voucher was expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can. Here's my study plan for the AZ-900 and DP-900 exams: finish a popular video course aimed at the cert; watch John Savill's study/exam cram; take multiple practice exams, scoring in the 90s. This is what I used for AI-900: Alan Rodrigues' video course (includes 2 practice exams) 👌; John Savill's study cram 💪; practice exams by Scott Duffy and in28Minutes Official 👍; knowledge checks in the AI modules on MS Learn 🙌. I also found the notes below extremely useful as a refresher; they can be replayed multiple times throughout your preparation, as the exam cram part is only around 20 minutes: https://youtu.be/utknpvV40L0 👏 Just be clear on the topics explained in that video and you'll pass AI-900. I advise you to watch it at the start, middle, and end of your preparation. All the best in your exam.

  • DP 100 or AI 100?
    by /u/Masoud_mirza (Microsoft Azure Certifications) on September 18, 2022 at 6:46 pm

    Hello, mates. Hope you are doing well. I am a senior student in a Master of Data Analytics for Business program, and I'm doing my master's final project, which should be done in Azure Machine Learning Studio. Considering my poor coding knowledge (which of course I should improve) and my desire to learn Azure ML Studio, which certificate do you suggest: DP-100 or AI-100? Thank you in advance for your time.

  • Are tutorial dojo's practice questions basically dumps?
    by /u/skan40 (Microsoft Azure Certifications) on September 18, 2022 at 5:15 pm

    I have observed some of their questions appearing in the real exam.

  • PL-300 (Power BI Data Analyst) vs DP-203 (Azure Data Engineer): Which is better, and why?
    by /u/Random-Helper-1 (Microsoft Azure Certifications) on September 18, 2022 at 5:54 am

    From the perspective of employability, future job prospects, barrier to entry, etc., which is better? Happy to read others' opinions, thanks.

  • GCP Cloud Engineer Cert practice exam choices
    by /u/phat1forever (Google Cloud Platform Certification) on September 18, 2022 at 12:52 am

    Hey, I have seen people mention Tutorials Dojo has good practice exams. Does that mean this: https://portal.tutorialsdojo.com/courses/google-certified-associate-cloud-engineer-practice-exams/#learndash-course-content or this: https://www.udemy.com/course/google-certified-associate-cloud-engineer-practice-exams-gcp/ ? Currently I am using ACG and bought some practice exams on Udemy that are difficult (https://www.udemy.com/course/google-certified-associate-cloud-engineer-practice-tests-x), but I am still studying. I am open to other suggestions as well.

  • AZ-900 Practice Exam -- is it out of date?
    by /u/cmh55264 (Microsoft Azure Certifications) on September 17, 2022 at 8:26 pm

    Hi all, I am taking my AZ-900 certification exam next week. I took a practice exam today and could have done better. As I went through the learning modules I took detailed notes and made flashcards for all the key concepts and resources available through Azure. But on the practice exam I was confused by a number of questions, because they related to terminology I had not taken notes on and don't remember reading about in the learning modules at all. I am aware that Microsoft changed the exam requirements; after looking at what changed, it seems most if not all of the unfamiliar terms are no longer tested. That made me wonder whether the practice tests offered now are out of date with the changes Microsoft made to the exam back in May. For example, I saw many questions about advanced security, and I know the newer course content greatly reduced coverage of that topic. Do any of you know if my assumption is correct, or am I just missing something? I know I still have work to do on my end for full preparation, but it struck me as odd how many topics on the practice test seem to have been reduced or outright removed in the May update. Any advice for the exam is appreciated too.

  • Just passed AZ-104
    by /u/Amazing_Prize_1988 (Microsoft Azure Certifications) on September 16, 2022 at 6:06 pm

    I recommend studying networking, as almost all of the questions relate to this topic. AAD is also a big one. Lots of load balancers, VNETs, and NSGs. I received very little on containers, storage, and monitoring. I passed with a 710, but a pass is a pass, haha. I used Tutorials Dojo, but the closest questions I found were in the Udemy practice exams. Regards,

  • Sign up for the Google Cloud Fly Cup Challenge
    by (Training & Certifications) on September 15, 2022 at 10:00 pm

    Are you ready to take your cloud skills to new heights? We're excited to announce the Google Cloud Fly Cup Challenge, created in partnership with The Drone Racing League (DRL) and taking place at Next '22 to usher in the new era of tech-driven sports. Using DRL race data and Google Cloud analytics tools, developers of any skill level can predict race outcomes and provide tips to DRL pilots to help enhance their season performance. Participants will compete for a chance to win an all-expenses-paid trip to the season finale of the DRL World Championship Race and be crowned champion on stage. How it works: register for Next 2022 and navigate to the Developer Zone challenges to unlock the game; complete each stage of the challenge to advance and climb the leaderboard; win prizes, boost skills, and have fun! There are three stages of the competition, each increasing in difficulty. The first stage kicks off on September 15th, where developers prepare data and become familiar with the tools for data-driven analysis and predictions with Google ML tools. There are over 500 prizes up for grabs; all participants will receive an exclusive custom digital badge and an opportunity to be celebrated for their achievements alongside DRL pilots. A single leaderboard accumulates scores throughout the competition, and prizes are awarded as each stage is released.
    Stage 1: DRL Recruit. Starting on September 15th, begin your journey by getting an understanding of DRL data: load and query race statistics, and build simple reports to find top participants and fastest race times. Once you pass this lab you will be officially crowned a DRL Recruit and can progress, for a chance to build on your machine learning skills, to two more challenge labs involving predictive ML models. Prize: the top 25 on the leaderboard win custom co-branded DRL + Google Cloud merchandise. Stage 2: DRL Pilot. Opening on the first day of Next 2022 (October 11), in this stage you will develop a model that predicts the winner of a head-to-head competition and a score for each participant, based on a pilot's profile and flight history. Build a "pilot profile card" that analyzes crash counts and lap times and compares them to other pilots; fill out their strengths and weaknesses against real-life performances; and predict the winner of the DRL Race in the Cloud at Next 2022 to be crowned top developer for this stage. Prize: the first 500 participants to complete stage 2 receive codes to download DRL's Simulator on Steam. Stage 3: DRL Champion. Continue this journey throughout the DRL championship season using the model developed in Stage 2: use data from past races to score participants and predict outcomes, and provide pilots with real-life tips and tricks to improve their performance. The developer at the top of the leaderboard at the end of December 2022 wins an expenses-paid VIP trip to DRL's final race in early 2023. Prize: finish in the top 3 for an opportunity to virtually present your tips and tricks to professional DRL pilots before the end of the 2022-2023 race season; top the leaderboard as Grand Champion to win an expenses-paid VIP experience at a DRL Championship Race in early 2023 and be celebrated on stage. For more information on prizes and terms, please visit the DRL and Google Cloud website. Ready to fly? The Google Cloud Fly Cup Challenge opens today and remains available on the Next '22 portal through December 31, 2022, when the winner will be announced. We look forward to seeing how you innovate and build together for the next era of tech-driven sports. Let's fly!
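
    The Stage 1 reporting task described above (load race statistics, then find top participants and fastest lap times) can be sketched in plain Python. Everything below is illustrative: the records and field names are invented for this example and do not reflect DRL's actual dataset or schema.

    ```python
    # Hypothetical race records; real DRL data would come from BigQuery or a CSV.
    races = [
        {"pilot": "A", "lap_time_s": 61.2, "won": True},
        {"pilot": "B", "lap_time_s": 59.8, "won": False},
        {"pilot": "A", "lap_time_s": 58.4, "won": True},
        {"pilot": "C", "lap_time_s": 63.0, "won": False},
    ]

    # Fastest single lap across all races.
    fastest = min(races, key=lambda r: r["lap_time_s"])

    # Win counts per pilot, sorted to find the top participants.
    wins = {}
    for r in races:
        wins[r["pilot"]] = wins.get(r["pilot"], 0) + (1 if r["won"] else 0)
    top_pilots = sorted(wins, key=wins.get, reverse=True)

    print(fastest["pilot"], fastest["lap_time_s"])  # A 58.4
    print(top_pilots[0])  # A
    ```

    The same two reports translate directly into a `MIN` aggregate and a `GROUP BY ... ORDER BY` query once the data lives in a warehouse table.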

  • The value of data and pursuing the AWS Certified Data Analytics – Specialty certification
    by Carole Suarez (AWS Training and Certification Blog) on September 15, 2022 at 6:50 pm

    Gain tips and guidance from four AWS Solutions Architects for how you can build your skills and expertise in data analytics and pursue the AWS Certified Data Analytics – Specialty certification.

  • Register for Google Cloud Next
    by (Training & Certifications) on September 14, 2022 at 4:00 pm

    Google Cloud Next ‘22 kicks off on October 11 at 9AM PDT with a 24-hour “follow the sun” global digital broadcast featuring live keynotes from five locations across the globe — New York, Sunnyvale, Tokyo, Bengaluru, and Munich. You’ll hear from the people shaping the future of computing and have the opportunity to learn from Google Cloud leaders and community influencers about ways they are solving the biggest challenges facing organizations today. You can experience Next ‘22 digitally and in person. Here’s how: Join us digitally through the Google Cloud Next website to learn about the latest news, products, and Google Cloud technology and to access technical and training content. Visit us locally at one of 200 physical events across six continents. In conjunction with our Partner and Developer Communities, we are excited to bring a series of small physical events around the world. Be sure to register for Next ‘22 so we can alert you about physical events in your area soon.
    At Next ‘22, you’ll find knowledge and expertise to help with whatever you’re working on, with content tracks personalized for application developers, data scientists, data engineers, system architects, and low/no-code developers. To make Google Cloud Next as inclusive as possible, it is free for all attendees. Here’s more about Next ‘22 to get you excited:
    Experience content in your preferred language. The Next ‘22 web experience will be translated into nine languages using the Cloud Translation API. For livestream and session content, you can turn on YouTube closed captions (CC), which support 180+ languages.
    Engineer your own playlist. Create, build, explore, and share your own custom playlists and discover playlists curated by Google Cloud.
    Hang with fellow developers. Gain access to dedicated developer zones through Innovators Hive livestreams, in-person event registration, a developer badging experience, challenges, curated resources, and more fun with drone racing.
    Engage with your community. Use session chats to engage with other participants and ask questions of presenters, so you can fully immerse yourself in the content.
    Register for Next ‘22: connect with experts, get inspired, and boost your skills. There’s no cost to join any of the Next ‘22 experiences. We can’t wait to see you, and we’ll be sure to keep you posted about ways to engage locally with the Google Cloud community in your area. Say hello to tomorrow. It’s here today, at Next. Register today.

  • Digital Cloud Leader practice tests
    by /u/FightForYourDreams (Google Cloud Platform Certification) on September 13, 2022 at 11:30 pm

    Hi everyone, Can you please recommend good practice tests that are similar to the ones you’d see on the real exam? I’ve tried one on Udemy and it was too hard and off-base compared to Google’s videos and Google’s sample tests. TIA submitted by /u/FightForYourDreams [link] [comments]

  • 10 examples of scenario-based learning from AWS Training and Certification
    by Saif Altalib (AWS Training and Certification Blog) on September 13, 2022 at 7:31 pm

    Are you just getting started with your cloud learning journey and looking for opportunities to learn the fundamentals of Amazon Web Services (AWS) using training that is scenario-based? Take a look at 10 examples of training from AWS Training and Certification that provides situational, human-centered, scenario-based learning to advance your cloud knowledge.

  • Drive digital transformation, get Cloud Digital Leader certified
    by (Training & Certifications) on September 13, 2022 at 4:00 pm

    As enterprises look to accelerate cloud adoption, it is critical not only to upskill your technical talent but also to focus on skilling your non-technical teams. Investing in your collective workforce’s cloud proficiency helps ensure you fully embrace everyone’s potential and make the most of your cloud investment. According to research shared in a recent IDC paper1, comprehensively trained organizations saw a bigger impact than narrowly trained organizations: 133% greater improvement in employee retention, a 47% reduction in business risk, and a 22% increase in innovation.
    This is where Cloud Digital Leader training and certification comes in. Most cloud training and certification is geared toward technical cloud practitioners, leaving non-technical (tech-adjacent) teams with little understanding of cloud technologies. Cloud Digital Leader bridges this gap, providing easy-to-understand training that enables everyone to understand the capabilities of cloud so that they can contribute to digital transformation in their organizations.
    In a recent fireside chat with Google Cloud Partner Kyndryl, which has achieved over 1,000 Cloud Digital Leader certifications across its organization, they shared how the Cloud Digital Leader training and certification has led to a significant time reduction in their pre-sales cycle: “Our sales teams who work with customers and learn about their challenges were able to apply the know-how from their Cloud Digital Leader education and certification. They can now guide the technical solution teams in the right direction, without having to pull them into the discovery phases of their customer interactions. As a result, we operated more quickly and efficiently, as the sales teams were able to speak to the Google Cloud solutions very early on in the sales cycle. This accelerated the sales process, as the sales teams were more confident in their Google Cloud knowledge, saving time and money for us and the customer.” — Christoph Schwaiger, Google Cloud Business Development Executive, Global Strategic Alliances, Kyndryl.
    Empower your team’s cloud fluency and discover your next phase of digital transformation. Invite your teams to jump-start their cloud journey with no-cost Cloud Digital Leader training on Google Cloud Skills Boost.
    Join our live webinar to access a time-limited certification offer: register for our upcoming webinar, “Getting started with Google Cloud Digital Leader training and certification,” to learn more. Those who register before the broadcast on September 15 at 9am PT will get access to a time-limited discount voucher for the Cloud Digital Leader certification exam. That’s an offer you won’t want to miss.
    1. IDC Paper, sponsored by Google Cloud Learning: “To Maximize Your Cloud Benefits, Maximize Training,” Doc #US48867222, March 2022

  • Thank you Partners for three years of growth and winning together
    by (Training & Certifications) on September 12, 2022 at 10:00 am

    Congratulations to our fast-growing ecosystem of global partners for three years of commitment to Partner Advantage, underscored by great collaboration, high energy, innovative ideas, and transformative impact. Together we’ve leveraged our program to drive growth and customer satisfaction. Year to date, there has been more than a 140% year-over-year increase in trained experts at our partner organizations (developers, technical staff, certifications, solutions) for 2022. This has translated into thousands of happy customers, many of whose stories are available to read in our Partner Directory. Each of you continues to inspire our shared customers and all of us at Google Cloud. And we are only getting started!
    We are hard at work making sure every aspect of your business with Google Cloud is smooth running, easy to navigate, and profitable. So what’s in store for 2023? Here’s a sneak peek: Expect to see more activity and focus around our Differentiation Journey as a vehicle for driving your growth and success. This includes encouraging partners to offer more in the area of high-value, repeatable services, where the opportunity is large and growing fast. You can learn more about the global economic impact our partners are having in this blog post. You’ll also see Partner Advantage focusing more on solutions and customer transformation, all of which will include corresponding incentives, new benefits, and more features.
    Thank you again for your commitment and hard work. It’s been a fantastic three years of amazing opportunity and growth. Not a partner yet? Start your journey today! The best is yet to come! -Nina Harding

  • Doing the Cloud Guru CDL course; are the Labs really necessary?
    by /u/ExNihilo_01 (Google Cloud Platform Certification) on September 11, 2022 at 5:32 pm

    I did the AWS CCP with little prep and no Cloud Guru course; I just took notes from this 4-hour video. I'm currently almost finished with the Cloud Guru CDL course, but I haven't done any of the labs. Are they useful or necessary? submitted by /u/ExNihilo_01 [link] [comments]

  • Find a role in the cloud—even if you’re not technical
    by Cristina Vargas (AWS Training and Certification Blog) on September 8, 2022 at 8:21 pm

    Curious about the options for a job in cloud? It's no longer limited to individuals in historically technical roles. Explore your options at our free Cloud Career Exploration Day event on September 14, 2022 at the AWS Skills Center in Seattle and virtually. We’ll explore what the cloud is, what you need to know, what job roles are available, and why so many employers are hiring people with cloud skills.

  • GCP Cloud Engineer - practice tests
    by /u/Monurmac (Google Cloud Platform Certification) on September 8, 2022 at 10:11 am

    Hi, I am preparing for the Associate Cloud Engineer exam and considering some practice tests on Udemy: https://www.udemy.com/course/google-certified-associate-cloud-engineer-practice-tests-x/ or https://www.udemy.com/course/google-cloud-associate-cloud-engineer-practice-examspractice-exams/ Which one should I choose, and would it be sufficient if I learn the concepts behind the questions in the linked courses? Thanks in advance 🙂 submitted by /u/Monurmac [link] [comments]

  • When cloud curiosity leads to a career transformation
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on September 6, 2022 at 7:17 pm

    Are you currently in a technical role but don’t have much experience with the cloud? Learn how one learner with prior app developer experience utilized instructor-led training courses through AWS Training and Certification to grow cloud expertise, earn three AWS Certifications, and transition into a role as a solutions architect.

  • Four non-traditional paths to a cloud career (and how to navigate them)
    by (Training & Certifications) on September 6, 2022 at 4:00 pm

    One thing I love about cloud is that it’s possible to succeed as a cloud engineer from all kinds of different starting points. It’s not necessarily easy; our industry remains biased toward hiring people who check a certain set of boxes, such as having a university computer science degree. But cloud in particular is new enough, and has such tremendous demand for qualified talent, that determined engineers can and do wind up in amazing cloud careers despite coming from all sorts of non-traditional backgrounds. But still, it’s scary to look at all the experienced engineers ahead of you and wonder, “How will I ever get from where I am to where they are?” A few months ago, I asked some experts at Google Cloud to help me answer common questions people ask as they consider making the career move to cloud. We recorded our answers in a video series called Cracking the Google Cloud Career that you can watch on the Google Cloud Tech YouTube channel. We tackled questions like:
    How do I go from a traditional IT background to a cloud job? You have a superpower if you want to move from an old-school IT job to the cloud: you already work in tech! That may give you access to colleagues and situations that can level up your cloud skills and network right in your current position. But even if that’s not happening, you don’t have to go back and start from square one. Your existing career will give you a solid foundation of professional experience that you can layer cloud skills on top of. Check out my video to see what skills I recommend polishing up before you make the jump to cloud interviews.
    How do I move from a help desk job to a cloud job? The help desk is the classic entry-level tech position, but moving up sometimes seems like an insurmountable challenge. Rishab Kumar graduated from a help desk role to a Technical Solutions Specialist position at Google Cloud. In his video, he shares his story and outlines some takeaways to help you plot your own path forward. Notably, Rishab calls out the importance of building a portfolio of cloud projects: cloud certifications helped him learn, but in the job interview he got more questions about the side projects he had implemented.
    How do I switch from a non-technical career to the cloud? There’s no law that says you have to start your tech career in your early twenties and do nothing else for the rest of your career. In fact, many of the strongest technologists I know came from previous backgrounds as disparate as plumbing, professional poker, and pest control. That’s no accident: those fields hone operational and people skills that are just as valuable in cloud as anywhere else. But you’ll still need a growth mindset and lots of learning to land a cloud job without traditional credentials or previous experience in the space. Google Cloud’s Stephanie Wong came to tech from the pageant world and has some great advice about how to build a professional network that will help you make the switch to a cloud job. In particular, she recommends joining the no-cost Google Cloud Innovators program, which gives you inside access to the latest updates on Google Cloud services alongside a community of fellow technologists from around the globe. Stephanie also points out that you don’t have to be a software engineer to work in the cloud; there are many other roles, like developer relations, sales engineering, and solutions architecture, that stay technical and hands-on without building software every day. You can check out her full suggestions for transitioning to a tech career in her video.
    How do I get a job in the cloud without a computer-related college degree? No matter your age or technical skill level, it can be frustrating and intimidating to see role after role that requires a bachelor’s degree in a field such as IT or computer science. I’m going to let you in on a little secret: once you get that first job and add some experience to your skills, hardly anybody cares about your educational background anymore. But some recruiters and hiring managers still use degrees as a shortcut when evaluating people for entry-level jobs. Without a degree, you’ll have to get a bit creative in assembling credentials. First, consider getting certified. Cloud certifications like the Google Cloud Associate Cloud Engineer can help you bypass degree filters and get you an interview. Not to mention, they’re a great way to get familiar with the workings of your cloud. Google Cloud’s Priyanka Vergadia suggests working toward skill badges on Google Cloud Skills Boost; each skill badge represents a curated grouping of hands-on labs within a particular technology that can help you build momentum and confidence toward certification. Second, make sure you are bringing hands-on skills to the interview. College students do all sorts of projects to bolster their education. You can do this too, at a fraction of the cost of a traditional degree. As Priyanka points out in her video, make sure you are up to speed on Linux, networking, and programming essentials before you apply.
    No matter your background, I’m confident you can have a fulfilling and rewarding career in cloud as long as you get serious about two things: own your credibility through certification and hands-on practice, and build strong connections with other members of the global cloud community. In the meantime, you can watch the full Cracking the Google Cloud Career playlist on the Google Cloud Tech YouTube channel. And feel free to start your networking journey by reaching out to me anytime on Twitter if you have cloud career questions; I’m happy to help however I can.

  • Earn new badges by building your cloud storage knowledge
    by Jennifer Ricciuti (AWS Training and Certification Blog) on September 1, 2022 at 5:58 pm

    AWS Training and Certification provides flexible Storage learning plans through AWS Skill Builder to help you build in-demand cloud storage knowledge by progressing from foundational to advanced concepts. AWS Storage digital learning badges are available to showcase your knowledge once you score 80% or higher on the associated online assessment for any of these learning plans.

  • Passed GCP Professional Cloud Architect
    by /u/electricninja911 (Google Cloud Platform Certification) on September 1, 2022 at 6:25 am

    First of all, I already have around one year of in-depth experience with GCP, working on GKE, IAM, storage, and so on. I also obtained the GCP Associate Cloud Engineer certification back in June, which helps with the preparation. I started with Dan Sullivan’s Udemy course for the Professional Cloud Architect and did a refresher on the topics I was not familiar with, such as Bigtable, BigQuery, and Dataflow. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture. To understand the services in depth, I also went through the GCP documentation for each service at least once; it’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information. As for practice exams, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak in and grasp the topics a lot faster than reading through the documentation, and it will also help you understand what kind of questions will appear on the exam. I used TutorialsDojo (Jon Bonso) to prepare for the Associate Cloud Engineer before, and compared to that, Whizlabs is not as good. However, Whizlabs still helps a lot in tackling the tough questions you will come across during the examination. One thing to note: there wasn’t a single question similar to the ones from the Whizlabs practice tests, in terms of the content of the questions. I got totally different scenarios for both case study and non-case-study questions. Many questions focused on App Engine, data analytics, and networking. There were some Kubernetes questions based on Anthos and cluster networking, and I got a tough question on storage as well. I initially thought I would fail, but I pushed on and started tackling the multiple choices by process of elimination using the keywords in the questions.
    50 questions in 2 hours is tough, especially with the lengthy questions and answer choices. I do not know how this compares to the AWS Solutions Architect Professional exam in toughness, but some people do say the GCP professional is tougher than AWS. All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years. submitted by /u/electricninja911 [link] [comments]

  • AWS Certified Solutions Architect – Associate exam updated to align with latest trends and innovations
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on August 31, 2022 at 8:48 pm

    The AWS Certified Solutions Architect - Associate exam is updated as of August 30, 2022. The updates reflect increased enterprise demand for optimizations in fast-changing areas such as security, resiliency, data volume, cost optimization, and the design of high-performance systems. Learn about the changes and how to prepare for the new exam.

  • New courses and updates from AWS Training and Certification in August 2022
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on August 30, 2022 at 6:14 pm

    Check out the latest courses, offerings, and updates for cloud learners, AWS customers, and AWS Partners in August 2022 from Amazon Web Services (AWS) Training and Certification.

  • Passed my GCP ACE Exam
    by /u/depictureboy2 (Google Cloud Platform Certification) on August 18, 2022 at 4:33 pm

    Glad I took it at a center; they kept losing connection, and I'm glad I didn't have to deal with that headache. Now just to wait for the official word. On to the Professional Data Engineer. submitted by /u/depictureboy2 [link] [comments]

  • Steps to implement cloud skilling programs to jump-start digital transformation
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on August 17, 2022 at 3:22 pm

    As more organizations are investing in digital transformation to accelerate business growth and competitive positioning, they’re quickly finding they lack the necessary cloud-skilled talent to help reach their cloud goals. This blog post shares actionable steps leaders can take now to stand up cloud skills training programs to support their cloud business goals, retain valuable talent, and become an organization where skills development is the key ingredient to innovation.

  • Job Opportunities from Associate to Cloud Architect
    by /u/chancemuse (Google Cloud Platform Certification) on August 16, 2022 at 11:25 pm

    Hello, I'm currently certified as an Associate Cloud Engineer and have been studying for the Cloud Architect exam. I wanted to ask to see what changed in terms of interview call backs or follow ups once you moved from the Associate level cert to the Professional level. submitted by /u/chancemuse [link] [comments]

  • A visual tour of Google Cloud certifications
    by (Training & Certifications) on August 16, 2022 at 4:00 pm

    Interested in becoming Google Cloud certified? Wondering which Google Cloud certification is right for you? We’ve got you covered. Check out the latest #GCPSketchnote illustration, a framework to help you determine which Google Cloud certification is best suited to validate your current skill set and propel you toward future cloud career goals. Follow the arrows to see where you land, and for tips on how to prepare for your certification on Google Cloud Skills Boost:
    Cloud Digital Leader - This certification is for anyone who wishes to demonstrate their knowledge of cloud computing basics and how Google Cloud products and services can be used to achieve an organization’s goals.
    Associate Cloud Engineer - This certification is for candidates who have a solid understanding of Google Cloud fundamentals and experience deploying cloud applications, monitoring operations, and managing cloud enterprise solutions.
    Professional Google Cloud certifications - These certifications are ideal for candidates with in-depth, hands-on experience setting up cloud environments for organizations based on their business needs, and experience deploying services and solutions. They include: Professional Cloud Architect, Professional Cloud Developer, Professional Data Engineer, Professional Cloud Database Engineer, Professional DevOps Engineer, Professional Machine Learning Engineer, Professional Network Engineer, Professional Security Engineer, and Professional Workspace Administrator.
    Continue along the arrows for tips on how to prepare for your certification, earning completion badges and skill badges through our on-demand learning platform, Google Cloud Skills Boost, along the way. Where will your certification journey take you? Get started preparing for your certification today. New users are eligible for a 30-day no-cost trial on Google Cloud Skills Boost.

  • Google Cloud - Certification Journey
    by /u/johnbulla (Google Cloud Platform Certification) on August 14, 2022 at 8:31 pm

    Dear community, From September, the new 7-week Certification Journey Program will give you all the resources and support you need to become fully certified as a Cloud Architect, Associate Cloud Engineer or Data Engineer. #Free 🔗More Info: http://wiki-cloud.co/en/2022/08/google-cloud-certification-journey-2022 #GoogleCloud #Certification https://preview.redd.it/7swwya0roqh91.png?width=780&format=png&auto=webp&s=ab6442a95c1490eb221e896fd490a4cd6e8e240a submitted by /u/johnbulla [link] [comments]

  • Passed GCP: Cloud Digital Leader
    by /u/Kalad1nBrood (Google Cloud Platform Certification) on August 12, 2022 at 11:38 pm

    Hi everyone, First, thanks for all the posts people share; they helped me prep for my own exam. I passed the GCP Cloud Digital Leader exam today and wanted to share a few things about my experience.
    Preparation: I have access to ACloudGuru (ACG) and Udemy through work. I started one of the Udemy courses first, but it was clear the course went beyond the scope of the Cloud Digital Leader certification. I switched over to ACG and enjoyed the content a lot more. The videos were short and the instructor hit all the topics on the Google exam requirements sheet. ACG also has three 50-question practice tests. The practice tests are harder than the actual exam (and the practice tests aren't that hard). I don't know if someone could pass the test just by watching the videos on Google Cloud's certification site, especially with no GCP experience. Overall, I spent about 20 hours preparing for the exam. I have my CISSP and I'm working on my CCSP; after taking the test, I realized I way over-prepared.
    Exam Center: It was my first time at this testing center and I wasn't happy with the experience. A few of the issues I had: my personal items (phone, keys) were placed in an unlocked filing cabinet; my desk area was dirty, with eraser shreds (or something similar) I had to brush out of my workspace after moving the keyboard and mouse; the laminated sheet they gave me looked like someone had spilled Kool-Aid on it; and they only offered earplugs instead of noise-cancelling headphones.
    Exam: My recommendation is to know the digital transformation piece as well as you know all the GCP services and what they do. I wish you all luck on your future exams. On to the GCP Associate Cloud Engineer. submitted by /u/Kalad1nBrood [link] [comments]

  • Official GCP study guide print quality?
    by /u/steffi8 (Google Cloud Platform Certification) on August 11, 2022 at 11:25 pm

    How is the print quality with the official GCP study guide series? Is it better than Microsoft’s Azure series as those are pretty bad. submitted by /u/steffi8 [link] [comments]

  • Resources for Google Cloud
    by /u/Khaotic_Kernel (Google Cloud Platform Certification) on August 11, 2022 at 7:22 pm

    Useful tools and learning resources for Google Cloud. Table of contents: Google Cloud Learning Resources, Developer Resources, GCP Training & Courses, GCP Books, Google Cloud Tools. submitted by /u/Khaotic_Kernel [link] [comments]

  • Please help me decide, new graduate deciding his cloud journey.
    by /u/Themotionalman (Google Cloud Platform Certification) on August 11, 2022 at 9:40 am

    Hey, so I would like to start my cloud journey. I just graduated in May and I’m lucky to have gotten a job in June. The company uses AWS; the problem is I don’t see myself staying here long term. I currently do some front-end and back-end work, but I’d like to move more into cloud later in life. I have used AWS and GCP on some personal projects, nothing too intense, so I’d consider myself a noob. Here are my questions: How much preparation do you think is required for the Cloud Practitioner? After the CP I’d like to get a solutions architect or developer cert, I think, and afterwards the security specialty. Is this plan feasible? What would you change? Last question, something that might trigger a few: I’m afraid of vendor lock-in. I know my company uses AWS and the best move would be to stick with them, but here in France I see that GCP is more lucrative. I am thinking of maybe doing just the CP from AWS and then pivoting. What do you think about this strategy? What advice would you give to help me pick a platform, or maybe you could tell me why you chose what you chose? Thanks in advance. submitted by /u/Themotionalman [link] [comments]

  • Passed my Cloud Digital Leader!
    by /u/Spirited_Chipmunk_46 (Google Cloud Platform Certification) on August 11, 2022 at 2:51 am

    …well according to the preliminary results. Now to keep up the momentum. submitted by /u/Spirited_Chipmunk_46 [link] [comments]

  • Pluralsight/Udemy or Cloud Academy?
    by /u/steffi8 (Google Cloud Platform Certification) on August 9, 2022 at 9:21 pm

    For the following certifications, what is recommended? Is one of the above suitable to pass the Data Engineer, Cloud Architect, and Cloud Engineer certifications? I have all three available to me, along with Dan Sullivan's courses on Udemy. Furthermore, are Dan's books worthwhile if you're also following the online content? submitted by /u/steffi8 [link] [comments]

  • Hey in your words why pick GCP over AWS to learn, or at least the certs.
    by /u/Themotionalman (Google Cloud Platform Certification) on August 9, 2022 at 12:24 am

    submitted by /u/Themotionalman [link] [comments]

  • hi
    by /u/Logical-Neck-1131 (Google Cloud Platform Certification) on August 8, 2022 at 5:37 pm

    submitted by /u/Logical-Neck-1131 [link] [comments]

  • which gcp storage can scale to higher database sizes?
    by /u/lindogamaton (Google Cloud Platform Certification) on August 7, 2022 at 3:12 pm

    The answer is SQL Spanner. why is that? submitted by /u/lindogamaton [link] [comments]

  • passed GCP ACE today
    by /u/pulse008 (Google Cloud Platform Certification) on August 6, 2022 at 9:29 am

    submitted by /u/pulse008 [link] [comments]

  • ML Engineers: Partners for Scaling AI in Enterprises
    by (Training & Certifications) on August 4, 2022 at 4:00 pm

    Enterprises across many industries are adopting artificial intelligence (AI) and machine learning (ML) at a rapid pace. Many factors fuel this accelerated adoption, including a need to realize value out of the massive amounts of data generated by multichannel customer interactions and the increasing stores of data from all facets of an enterprise's operations. This growth prompts a question: what knowledge and skill sets are needed to help organizations leverage and scale AI and ML? To answer this question, it’s important to understand what types of transformations enterprises are going through as they aim to make better use of their data.Growing AI/ML Maturity Many large organizations have moved beyond pilot or sample AI/ML use cases within a single team to figuring out how to solidify their data science projects and scale them to other areas of the business. As data changes or gets updated, organizations need ways to continually optimize the outcomes from their ML models. Mainstreaming Data Science Data science has moved into the mainstream of many organizations. People working in various line-of-business teams — such as product, marketing and supply chain — are eager to apply predictive analytics. With this growth, decentralized data science teams are popping up all over a single enterprise. But many people looking to apply predictive techniques have limited training in data science or limited knowledge of the infrastructure fundamentals for production-scale AI/ML. Additionally, enterprises are faced with a proliferation of ad hoc technologies, tools and processes.  Increasing Complexity of Data Having achieved some early wins, often with structured or tabular data use cases, organizations are eager to derive value out of the massive amounts of unstructured data, including from language, vision, natural language and other categories. One role that organizations are increasingly turning to is the ML engineer.  
    What is a Machine Learning Engineer?

    I have observed that as organizations mature in their AI/ML practices, they expand from hiring mainly data scientists toward hiring people with ML engineering skills. A review of hundreds of ML engineer job postings sheds light on why this role is one way to meet the transformative needs of the enterprise. Examining the frequency of certain terms in the free text of the job postings surfaces several themes:

    SOFTWARE ENGINEERING

    ML engineers are closely affiliated with the software engineering function. Organizations hiring ML engineers have typically achieved some wins in their initial AI/ML pilots and are moving up the ML adoption curve from implementing ML use cases to scaling, operationalizing and optimizing ML in their organizations. Many job postings emphasize the software engineering aspects of ML over the pure data science skills. ML engineers need to apply software engineering practices and write performant, production-quality code.

    DATA

    Enterprises are looking for people with the ability to create pipelines, or reusable processes, for various aspects of ML workflows. This involves both collaborating with data engineers (another in-demand role) and creating the infrastructure for robust data practices throughout the end-to-end ML process. In other words, ML engineers create processes and partnerships that help with cleaning, labeling and working with large-scale data from across the enterprise.

    PRODUCTION

    Many employers look for ML engineers who have experience with the end-to-end ML process, especially taking ML models to production. ML engineers work with data scientists to productionize their work, building pipelines for continuous training, automated validation and version control of the model.

    SYSTEMS

    Many ML engineers are hired to help organizations put the architecture, systems and best practices in place to take AI/ML models to production.
    ML engineers deploy ML models to production on either cloud environments or on-premises infrastructure. The emphasis on systems and best practices helps drive consistency as people with limited data science or infrastructure fundamentals learn to derive value from predictive analytics. This focus on systematizing AI/ML is also a critical prerequisite for developing an AI/ML governance strategy. This qualitative analysis of ML engineering jobs is not based on an assessment of a specific job posting, or even one specific to the enterprise I work in. Rather, it reflects a qualitative evaluation of general themes across the spectrum of publicly available job postings for ML engineers, a critical role for enterprises looking to scale AI/ML.

    In what teams do ML Engineers work?

    Within enterprises, ML engineers reside in a variety of teams, including data science, software engineering, research and development, product groups, process/operations and other business units.

    What industries seek talent to help productionize ML?

    While demand for ML engineers is at an all-time high, several industries are at the forefront of hiring for this role. The industries with the highest demand for ML engineers include computers and software, finance and banking, and professional services. As AI and ML continue to grow and mature as a practice in enterprises, ML engineers play a pivotal role in helping to scale AI/ML usage and outcomes. ML engineers enable data scientists to focus on what they do best by establishing the infrastructure, processes and best practices needed to realize business value from AI/ML models in production. This is especially the case as data volumes and complexity grow.

    Where to begin with building AI and ML skills? Google Cloud Skills Boost offers a number of courses that can help your teams build ML engineering skills on their path to achieving the Professional Machine Learning Engineer certification.
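The "continuous training, automated validation and version control" workflow described above can be made concrete with a small sketch. The function below is a hypothetical promotion gate (the metric name and threshold are illustrative assumptions, not from the article): a retraining pipeline would call it after each scheduled run and only deploy and version the candidate model when it beats the current production baseline.

```python
def should_promote(candidate_metrics, baseline_metrics,
                   metric="accuracy", min_gain=0.0):
    """Automated validation gate in a continuous-training pipeline.

    A CI/CD retraining job would call this after each scheduled retrain
    and only deploy (and version) the candidate model when it clears
    the gate against the production baseline.
    """
    gain = candidate_metrics[metric] - baseline_metrics[metric]
    return gain > min_gain

# Example: a retrained model edges out the production baseline.
baseline = {"accuracy": 0.91}
candidate = {"accuracy": 0.93}
print(should_promote(candidate, baseline))  # True
```

In a real pipeline this decision would typically be one step among many (data validation, bias checks, latency tests) rather than the sole criterion.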
    To learn more about how Google Cloud products and services empower enterprises to do more with AI and ML, visit our AI and ML products page, or read this blog post about some of our top resources for getting started with Google Cloud services like Vertex AI, our machine learning platform built for the needs of ML engineers. For the latest from Google Cloud ML experts and customers, check out on-demand sessions from our Applied ML Summit to get a firsthand look at additional learning events for you and your teams.

  • Azure Firewall Premium is now ICSA labs certified
    by Azure service updates on August 2, 2022 at 7:00 pm

    Azure Firewall Premium Intrusion Prevention System (IPS) certification from ICSA Labs is now available.

  • GCP ML Engineer Learning in a short time
    by /u/vrajjshah (Google Cloud Platform Certification) on July 29, 2022 at 3:34 pm

    I know dumps are not the right way to learn, but yesterday my workplace gave me a short deadline: complete the ML Engineer certificate by Monday. The ExamTopics dump is not available for this exam (gives a 404 error - 404 - Page not found (examtopics.com)). It is available through a VPN, but has only 66 questions! I did not find a course on Udemy either. Any other resources for learning it in a short time (a weekend)? Or any other popular (tried and tested) dumps website? Thanks in advance. submitted by /u/vrajjshah [link] [comments]

  • Meet the new Professional Cloud Database Engineer certification
    by (Training & Certifications) on July 28, 2022 at 4:30 pm

    After a successful certification beta, we’re excited to share that the Professional Cloud Database Engineer certification is now generally available. This new certification allows you to showcase your ability to manage databases that power the world’s most demanding workloads. Traditional data management roles have evolved and now call for elevated cloud data management expertise, making this certification especially timely: 80% of IT leaders note a lack of skills and knowledge among their employees. Google Cloud certifications have proven to be critical for employees and businesses looking to adopt cloud technologies. In fact, 76% of IT decision makers agree that certifications have increased their confidence in their staff’s knowledge and ability.

    Certification exam tips from a beta tester

    The new certification validates your ability to design, plan, test, implement, and monitor cloud databases. It also demonstrates your ability to lead database migration efforts and guide organizational decisions based on your company’s use cases. Kevin Slifer, Technical Delivery Director, Cloud Practice, EPAM Systems, shares his experience in becoming a Google Cloud certified Professional Cloud Database Engineer:

    “Preparing for the Professional Cloud Database Engineer certification improved my proficiency in database migration and management in the cloud. Passing the exam has enabled me to add immediate value to the organizations that I work with in navigating their database migration and modernization journeys, including my current project, which involves the adoption of Cloud SQL at scale. Candidates who are preparing for this exam should make an investment in understanding the key benefits of bringing legacy database platforms into Google-managed services like Cloud SQL and Bare Metal Solution, as well as the additional upside of going cloud-native with Google’s own database platforms like Spanner and Firestore.”

    Deepen your database knowledge

    Get started with our recommended content to enhance your database knowledge on your journey towards becoming a Google Cloud certified Professional Cloud Database Engineer. This is a Professional certification requiring both industry knowledge and hands-on experience working with Google Cloud databases. Start with the exam guide and familiarize yourself with the topics covered. Round out your skills by following the Database Engineer Learning Path, which covers many of the topics on the exam, including migrating databases to Google Cloud and managing Google Cloud databases. Gain hands-on practice by earning the skill badges in the learning path:

    - Create and Manage Cloud Spanner Databases
    - Manage Bigtable on Google Cloud
    - Migrate MySQL data to Cloud SQL using Database Migration Service
    - Manage PostgreSQL Databases on Cloud SQL

    Don’t skip the additional resources to help you prepare for the exam, such as: Your Google Cloud database options, explained; Database modernization solutions; Database migration solutions. Register for the exam!

    Mark Your Calendars

    Register for our upcoming Cloud OnAir webinar on August 4, 2022 at 9am PT featuring Mara Soss, Credentials and Certification Engagement Lead, and Priyanka Vergadia, Google Cloud Staff Developer Advocate, as they dive into the new certification and how best to prepare, and take your questions live.

  • developer in ERP system for 10 years, new to GCP, which Google Cloud Cert should I pursue?
    by /u/lindogamaton (Google Cloud Platform Certification) on July 27, 2022 at 10:52 pm

    Hi, I am new to GCP. I have 10 years of experience with Python, SSO, Windows, and SQL. I have built web applications on top of an ERP, built a tutorial app on AWS as a weekend project, built integrations, and drawn system diagrams; some of my clients use a data lake (all I do is provide data through integrations, with no data lake knowledge). In my ERP world, I am at architect level with hands-on coding experience. I'd like to pursue these GCP certs: Associate Cloud Engineer, Pro Data Engineer, Pro Cloud Developer, Pro Cloud Architect. My goal is Pro Cloud Architect; is that mission impossible as a first GCP cert? Should I take the Pro Cloud Developer path, build some web apps, gain experience, then consider Pro Cloud Architect next year? Thank you! submitted by /u/lindogamaton [link] [comments]

  • #GCP #GoogleCloud: need to read a delimited flat file and push data to Cloud SQL. Is Google Dataflow the right service?
    by /u/Electronic-Region834 (Google Cloud Platform Certification) on July 27, 2022 at 2:40 am

    submitted by /u/Electronic-Region834 [link] [comments]
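    Dataflow is indeed a common fit for this pattern: in an Apache Beam pipeline you would read the file with TextIO, parse each delimited line in a Map/ParDo step, and write rows to Cloud SQL (for example via a JDBC connector). The parse step itself needs nothing Beam-specific; here is a minimal sketch using the standard csv module (the pipe delimiter and column names are assumptions for illustration):

```python
import csv
import io

def parse_delimited(text, delimiter="|"):
    """Turn delimited flat-file content (header row first) into row dicts.

    In a Dataflow/Beam pipeline this logic would run inside the parse
    step, with each resulting dict then mapped to an INSERT into Cloud SQL.
    """
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [dict(row) for row in reader]

sample = "id|name|score\n1|alice|90\n2|bob|85\n"
print(parse_delimited(sample))
```

For a small one-off file, a simple script using the Cloud SQL connector may be cheaper and easier than a full Dataflow job; Dataflow pays off for large or recurring loads.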

  • ACE GCP EXAM
    by /u/Mell-O_5751 (Google Cloud Platform Certification) on July 25, 2022 at 9:30 pm

    Hey, I passed the ACE exam on 6th July but haven't received any confirmation email from Google yet. I've sent many emails but nothing… how many days can it be delayed? 20 days and counting. submitted by /u/Mell-O_5751 [link] [comments]

  • Whizlabs for GCP ACE?
    by /u/hpoddar2810 (Google Cloud Platform Certification) on July 25, 2022 at 6:33 am

    I am preparing for the GCP ACE. I already have the Whizlabs practice tests but haven't taken them yet. I want to ask whether they are enough, or whether I should also look for some other practice tests. submitted by /u/hpoddar2810 [link] [comments]

  • Prepare for Google Cloud certification with top tips and no-cost learning
    by (Training & Certifications) on July 11, 2022 at 4:30 pm

    Becoming Google Cloud certified has proven to improve individuals’ visibility within the job market and demonstrate their ability to drive meaningful change and transformation within organizations.

    - 1 in 4 Google Cloud certified individuals take on more responsibility or leadership roles at work, and 87% of Google Cloud certified users feel more confident in their cloud skills.¹
    - 75% of IT decision-makers are in need of technologically skilled personnel to meet their organizational goals and close skill gaps.²
    - 94% of those decision-makers agree that certified employees provide added value above and beyond the cost of certification.³

    Prepare for certification with a no-cost learning opportunity

    That's powerful stuff, right? That’s why we've teamed up with Coursera to support your journey to becoming Google Cloud certified. As a new learner, get one month of no-cost access to your selected Google Cloud Professional Certificate on Coursera to help you prepare for the relevant Google Cloud certification exam. Choose from Professional Certificates in data engineering, cloud engineering, cloud architecture, security, networking, machine learning, DevOps and, for business professionals, the Cloud Digital Leader.

    Become Google Cloud certified

    To help you on your way to becoming Google Cloud certified, you can earn a discount voucher on the cost of the Google Cloud certification exam by completing the Professional Certificate on Coursera by August 31, 2022. Simply visit our page on Coursera and start your one-month no-cost learning journey today.

    Top tips to prepare for your Google Cloud certification exam

    Get hands-on with Google Cloud. For those of you in a technical job role, we recommend leveraging Google Cloud projects to build your hands-on experience with the Google Cloud console. With 500+ Google Cloud projects now available on Coursera, you can gain hands-on experience working in the real Google Cloud console, with no download or configuration required.

    Review the exam guide. Exam guides provide the blueprint for developing exam questions and offer guidance to candidates studying for the exam. We'd encourage you to be prepared to answer questions on any topic in the exam guide, but it's not guaranteed that every topic within an exam guide will be assessed.

    Explore the sample questions. Taking a look at the sample questions on each certification page will help familiarize you with the format of exam questions and example content that may be covered.

    Start your certification preparation journey today with a one-month no-cost learning opportunity on Coursera. Want to know more about the value of Google Cloud certification? Find out why IT leaders choose Google Cloud certification for their teams.

    1. Google Cloud, Google Cloud certification impact report, 2020
    2. Skillsoft Global Knowledge, IT Skills and Salary Report, 2021
    3. Skillsoft Global Knowledge, IT Skills and Salary Report, 2021

  • Investing in Differentiation brings great customer experiences and repeatable business
    by (Training & Certifications) on July 7, 2022 at 4:00 pm

    “Customer success is the cornerstone of our partner ecosystem and ensures our joint customers experience the innovation, faster time to value, and top-notch skills from Google and Google Cloud Partners.” —Nina Harding, Global Chief, Partner Advantage Program

    Our ecosystem is a strong, validated ally to help you drive business growth and solve complex challenges. Differentiation achievements help you select a partner with confidence, knowing that Google Cloud has verified their skills and customer success across our products, horizontal solutions and key industries. In all cases, our partners have demonstrated their commitment to learning and ongoing training, demonstrated through earned certifications, Specialization and Expertise.

    To further refine the process of helping customers find the best partner fast, we recently introduced Net Promoter Score© within Partner Advantage. This industry-standard rating tool allows customers to provide feedback and insights on their successes with partners quickly and easily. We encourage you to work with your partners to share your success and provide feedback using Net Promoter Score.

    To find the most highly qualified, experienced partners, the Google Cloud Partner Directory puts you in the driver’s seat. This purpose-built tool helps customers like you leverage partner Differentiation achievements to move forward with confidence as you start your next project. The new “How to find the right Google Cloud Partner” video shows you how to create a shortlist of potential partners by region, based on 14 different strategic solution categories or 100+ Expertise designations. To find a partner that meets your specific needs, or complements your capable team, look no further than Partner Advantage’s Differentiation framework, and join us in congratulating some partners that have achieved Specialization over the past few quarters.

  • Show off your cloud skills by completing the #GoogleClout weekly challenge
    by (Training & Certifications) on July 6, 2022 at 4:00 pm

    Who’s up for a challenge? It’s time to show off your #GoogleClout! Starting today, check in every Wednesday to unlock a new cloud puzzle that will test your cloud skills against participants worldwide. Stephanie Wong’s previous record is 5 minutes; can you complete the new challenge in 4?

    #GoogleClout Challenge

    The #GoogleClout challenge is a no-cost, weekly, 20-minute hands-on challenge. Every Wednesday for the next 10 weeks, a new challenge will be posted on our website. Participants will race against the clock to see how quickly they can complete the challenge. Attempt the 20-minute challenge as many times as you want. The faster you go, the higher your score!

    How it works

    To participate, follow these four simple steps:

    1. Enroll - Go to our website, click the link to the weekly challenge, and enroll in the quest using your Google Cloud Skills Boost account.
    2. Play - Attempt the challenge as many times as you want. Remember, the faster you are, the higher your score!
    3. Share - Share your score card on Twitter/LinkedIn using #GoogleClout.
    4. Win - Complete all 10 weekly challenges to earn exclusive #GoogleClout badges.

    Ready to get started? Take the #GoogleClout challenge today!

  • Earn Google Cloud swag when you complete the #LearnToEarn challenge
    by (Training & Certifications) on June 27, 2022 at 4:00 pm

    The MLOps market is expected to grow to around $700M by 2025.¹ With the Google Cloud Professional Data Engineer certification topping the list of highest-paying IT certifications in 2021,² there has never been a better time to grow your data and ML skills with Google Cloud.

    Introducing the Google Cloud #LearnToEarn challenge

    Starting today, you’re invited to join the data and ML #LearnToEarn challenge - a high-intensity workout for your brain. Get the ML, data, and AI skills you need to drive speedy transformation in your current and future roles with no-cost access to over 50 hands-on labs on Google Cloud Skills Boost. Race the clock with players around the world, collect badges, and earn special swag!

    How to complete the #LearnToEarn challenge

    The challenge will begin with a core data analyst learning track. Then each week you’ll get new tracks designed to help you explore a variety of career paths and skill sets. Keep an eye out for trivia and flash challenges too! As you progress through the challenge and collect badges, you’ll qualify for rewards at each step of your journey. But time and supplies are limited - so join today and complete the challenge by July 19!

    What’s involved in the challenge?

    Labs range from introductory to expert level. You’ll get hands-on experience with cutting-edge tech like Vertex AI and Looker, plus data differentiators like BigQuery, TensorFlow, integrations with Workspace, and AutoML Vision. The challenge starts with the basics, then gets gradually more complex as you reach each milestone. One lab takes anywhere from ten minutes to about an hour to complete. You do not have to finish all the labs at once - but do keep an eye on start and end dates.

    Ready to take on the challenge? Join the #LearnToEarn challenge today!

    1. IDC, Market Analysis Perspective: Worldwide AI Life-Cycle Software, September 2021
    2. Skillsoft Global Knowledge, 15 top-paying IT certifications list 2021, August 2021

  • General availability: Edge Secured-Core for Windows IoT
    by Azure service updates on June 22, 2022 at 4:00 pm

    Edge Secured-Core is a certification program that extends the Secured-Core label into IoT and Edge devices.

  • Google helps Indonesia advance education on cloud, machine learning, and mobile development through Bangkit academy
    by (Training & Certifications) on June 16, 2022 at 4:00 pm

    Indonesia is leading the way for digital transformation in Southeast Asia. According to Google’s e-Conomy South East Asia report, the country’s 2030 Gross Merchandise Value - the value of online retailing to consumers - could be twice the value of the whole of Southeast Asia today. This growth means that fast-growing tech companies need more qualified IT graduates and employees with digital skills than they have today. According to the World Bank, Indonesia needs an additional nine million people with digital skills by 2030. The shortage of technical talent reiterates the need to invest in a reliable skills pipeline.

    Following years of digital talent development in Indonesia, Google has become a supporter of Bangkit, an academy designed to produce high-caliber technical talent for Indonesian technology companies and startups. Bangkit has facilitated a multi-stakeholder collaboration between Google, government, industry, and universities across Indonesia. Last year, the President of Indonesia and the Ministry of Education and Culture, Research, and Technology acknowledged Bangkit’s significant impact, with 3,000 students completing nearly 15,000 courses and specialisations. Building on last year’s success, Bangkit started its 2022 program in February, offering three learning paths to students:

    - Cloud computing with Google Cloud, preparing students for the Google Associate Cloud Engineer certification. Some of the course components are also available online.
    - Mobile development with Android, preparing students for the Google Associate Android Developer exam. An online version is available here.
    - Machine learning with TensorFlow, getting students ready to take the TensorFlow Developer certification. Some of the online courses are available here for others.

    Bangkit 2022 has enrolled 3,100 university students who will take a five-month study course, obtaining university study credit as well as industry certifications. The program accepts diverse cohorts of people who are passionate about preparing for a tech career in the near future, with support and encouragement for women, people with disabilities, and students from across Indonesia to apply. Since its pilot in 2019, Bangkit has been guided by three principles:

    - Industry-led: provides curriculum and instructors from industry experts, including Google, GoTo and Traveloka. Instructors include key figures such as Laurence Moroney (Google, Lead AI Advocate), Google Developer Experts, and other committed professionals.
    - Immersive: combines online learning methods conducted in both individual and group settings.
    - Interdisciplinary: covers knowledge and best practices in tech, soft skills, and English to provide complete career readiness.

    The program runs from February to July 2022, with a 900-hour curriculum across the 18-week learning experience. Benefits for students participating in Bangkit include:

    - Study credit conversion
    - Job opportunities at our career fair
    - Google Cloud, TensorFlow and AAD exam vouchers
    - Incubation funds and mentorship support from industry

    Towards the end of Bangkit 2022, students will team up for the Capstone Project challenge to propose solutions to some of the nation’s most pressing problems, such as environmentalism, accessibility, and more. The top 15 teams will be selected to receive funding to incubate their capstone projects. These education and career-preparedness offerings are provided at no cost. Google is partnering with industry, governments, universities, and employers to help meet the skill demands of today.
From supporting the State of Ohio to offer tech skills to residents, to working with the University of Minnesota-Rochester to create a customized health sciences degree program, Google is here to help our partners prepare those they serve for a cloud-first world.

  • Unveiling the 2021 Google Cloud Partner of the Year Award Winners
    by (Training & Certifications) on June 14, 2022 at 3:50 pm

    It’s time to celebrate! Join us in congratulating the 2021 Google Cloud Partner of the Year Award winners. As cloud computing and emerging technologies improve how we connect, share information, and conduct business, these partners helped customers turn challenges into opportunities. We’re proud to work alongside our partners and support customers as they innovate their businesses and accelerate their digital transformations. Congratulations to the winners for their creative spirit, collaborative drive, and customer-first approach; we are proud to recognize you and to call you our partners! Kudos to the 2021 winners: we're proud, grateful, and, above all, excited for what's next. As our network of partners continues to grow, we invite you to learn more about the Google Cloud Partner Advantage Program and how you can get involved by visiting our partner page.

  • Google Cloud supports higher education with Cloud Digital Leader program
    by (Training & Certifications) on June 8, 2022 at 4:00 pm

    College and university faculty can now easily teach cloud literacy and digital transformation with the Cloud Digital Leader track, part of the Google Cloud career readiness program. The new track is available for eligible faculty who are preparing their students for a cloud-first workforce. As part of the track, students will build their cloud literacy and learn the value of Google Cloud in driving digital transformation, while also preparing for the Cloud Digital Leader certification exam. Apply today!

    Cloud Digital Leader career readiness track

    The Cloud Digital Leader career readiness track is designed to equip eligible faculty with the resources needed to prepare their students for the Cloud Digital Leader certification. This Google Cloud certification requires no previous cloud computing knowledge or hands-on experience. The training path enables students to build cloud literacy and learn how to evaluate the capabilities of Google Cloud in preparation for future job roles.

    The curriculum

    Faculty members can access this curriculum as part of the Google Cloud career readiness program. Faculty from eligible institutions can apply to lead students through the no-cost program, which provides access to the four-course on-demand training, hands-on practice to supplement the learning, and additional exam prep resources. Students who complete the entire program are eligible to apply for a certification exam discount. The Cloud Digital Leader track is the third program available for classroom use, joining the Associate Cloud Engineer and Data Analyst tracks.

    Cloud resources for your classroom

    Ready to get started? Apply today to access the Cloud Digital Leader career readiness track for your classroom. Read the eligibility criteria for faculty. You can preview the course content at no cost.

  • Why IT leaders choose Google Cloud certification for their teams
    by (Training & Certifications) on May 27, 2022 at 4:00 pm

    As organizations worldwide move to the cloud, it’s become increasingly crucial to give teams the confidence and skills to get the most out of cloud technology. With demand for cloud expertise exceeding the supply of talent, many businesses are looking for new, cost-effective ways to keep up. When ongoing skills gaps stifle productivity, it can cost you money. In Global Knowledge’s 2021 report, 42% of IT decision-makers reported having “difficulty meeting quality objectives” as a result of skills gaps, and in an IDC survey cited in the same report, roughly 60% of organizations described a lack of skills as a cause of lost revenue. In today’s fast-paced environment, businesses with cloud knowledge are in a stronger position to achieve more. So what more could you be doing to develop and showcase cloud expertise in your organization? Google Cloud certification helps validate your teams’ technical capabilities, while demonstrating your organization’s commitment to the fast pace of the cloud.

    “What certification offers that experience doesn’t is peace of mind. I’m not only talking about self-confidence, but also for our customers. Having us certified, working on their projects, really gives them peace of mind that they’re working with a partner who knows what they’re doing.” - Niels Buekers, managing director at Fourcast BVBA

    Why get your team Google Cloud certified?

    When you invest in cloud, you also want to invest in your people. Google Cloud certification equips your teams with the skills they need to meet the needs of your growing business.

    Speed up technology implementation

    Organizations want to speed up transformation and make the most of their cloud investment. Nearly 70% of partner organizations recognize that certifications speed up technology implementation and lead to greater staff productivity, according to a May 2021 IDC Software Partner Survey. The same report also found that 85% of partner IT consultants agree that “certification represents validation of extensive product and process knowledge.”

    Improve client satisfaction and success

    Getting your teams certified can be the first step to improving client satisfaction and success. Research covering more than 600 IT consultants and resellers in a September 2021 IDC study found that “fully certified teams met 95% of their clients’ objectives, compared to a 36% lower average net promoter score for partially certified teams.”

    Motivate your team and retain talent

    In today’s age of the ongoing Great Resignation, IT leaders are rightly concerned about employee attrition, which can result in stalled projects, unmet business objectives, and new or overextended team members needing time to ramp up. In other words, attrition hurts. But when IT leaders invest in skills development for their teams, talent tends to stick around. According to a business value paper from IDC, comprehensive training leads to 133% greater employee retention compared to untrained teams. When organizations help people develop skills, people stay longer, morale improves, and productivity increases. Organizations wind up with a classic win-win situation as business value accelerates.

    Finish your projects ahead of schedule

    With your employees feeling supported and well equipped to handle workloads, they can stay engaged and innovate faster with Google Cloud certifications. “Fully certified teams are 35% more likely than partially certified teams to finish projects ahead of schedule, typically reaching their targets more than two weeks early,” according to research in an IDC InfoBrief.

    Certify your teams

    Google Cloud certification is more than a seal of approval; it can be your framework to increase staff tenure, improve productivity, satisfy your customers, and obtain other key advantages to launch your organization into the future. Once you get your teams certified, they’ll join a trusted network of IT professionals in the Google Cloud certified community, with access to resources and continuous learning opportunities. To discover more about the value of certification for your team, download the IDC paper today and invite your teams to join our upcoming webinar to get started on their certification journey.

  • Public preview: Azure Communication Services APIs in US Government cloud
    by Azure service updates on May 24, 2022 at 4:00 pm

    Use Azure Communication Services APIs for voice, video, and messaging in US Government cloud.

  • New Research shows Google Cloud Skill Badges build in-demand expertise
    by (Training & Certifications) on May 19, 2022 at 4:00 pm

We live in a digital world, and the future of work is in the cloud. In fact, 61% of HR professionals believe hiring developers will be their biggest challenge in the years ahead.¹

During your personal cloud journey, it’s critical to build and validate your skills in order to evolve with the rapidly changing technology and business landscape. That is why we created skill badges - a micro-credential issued by Google Cloud to demonstrate your cloud competencies and your commitment to staying on top of the latest Google Cloud solutions and products.

To better understand the value of skill badges to holders’ career goals, we commissioned a third-party research firm, Gallup, to conduct a global study on the impact of Google Cloud skill badges. Skill badge earners overwhelmingly gain value from and are satisfied with Google Cloud skill badges. Skill badge holders state that they feel well equipped with the variety of skills gained through skill badge attainment, that they are more confident in their cloud skills, are excited to promote their skills to their professional network, and are able to leverage skill badges to achieve future learning goals, including a Google Cloud certification.

  • 87% agree skill badges provided real-world, hands-on cloud experience²
  • 86% agree skill badges helped build their cloud competencies²
  • 82% agree skill badges helped showcase growing cloud skills²
  • 90% agree that skill badges helped them in their Google Cloud certification journey²
  • 74% plan to complete a Google Cloud certification in the next six months²

Join thousands of other learners and take your career to the next level with Google Cloud skill badges. To learn more, download the Google Cloud Skill Badge Impact Report at no cost.

1. McKinsey Digital, Tech Talent Tectonics: Ten new realities for finding, keeping, and developing talent, 2022
2.
Gallup Study, sponsored by Google Cloud Learning: “Google Cloud Skill Badge Impact Report”, May 2022

Related Article: How to prepare for — and ace — Google’s Associate Cloud Engineer exam. The Cloud Engineer Learning Path is an effective way to prepare for the exam.

  • If you are looking for a Job relating to azure try r/AzureJobs
    by /u/whooyeah (Microsoft Azure Certifications) on May 5, 2022 at 10:41 am

    submitted by /u/whooyeah [link] [comments]

  • How we’re keeping up with the increasing demand for the Google Workspace Administrator role
    by (Training & Certifications) on April 29, 2022 at 4:00 pm

We’ve rebranded the Professional Collaboration Engineer Certification to the Professional Google Workspace Administrator Certification and updated the learning path. To mark the moment, we sat down with Erik Geerdink from SADA to talk about how the Google Workspace Administrator role and the demand for this skill set have changed over the years. Erik is a Deployment Engineer and Pod Lead. He holds a Professional Google Workspace Administrator Certification and has worked with Google Workspace for more than six years.

What was it like starting out as a Google Workspace Administrator?

When I first started, I was doing Google Workspace Support as a Level 2 Administrator. At that time, there were fewer admin controls for Google Workspace. There were calendar issues, some mail routing issues, maybe a little bit of data loss prevention (DLP), but that was about it. About 5 years ago, I transferred into Google Deployment and really got to see all that went on with deploying Google Workspace and troubleshooting advanced issues. Since then, what you can accomplish in the admin console has really taken off. There’s still Gmail and Calendar configurations, but the security posture that Google offers now—they’ve really upped their game. The extent of DLP isn’t just Gmail and Drive anymore; it extends into Chat. And we’re doing a lot of Context-Aware Access to make sure users only have as much access as IT compliance allows in our deployments. Calendar Interop, which allows users in different systems to see availability, has been a big area of focus as well.

How has the Google Workspace Administrator role changed over the last few years?

It used to be that you were a systems admin who also took care of the Google portion as well. But with Google Workspace often being the entry point to Google Cloud, we’ve had to become more knowledgeable about the platform as a whole. Now, we not only do training with Google Workspace admins for our projects, we also talk to their Google Cloud counterparts as well. Google Workspace is changing all the time, and the weekly updates that Google sends out are great. As an engineering team, every week on Wednesday, we review each Google Workspace update that’s come out to understand how they affect us, our clients, and our upcoming projects. There’s a lot to it. It’s not just a little admin role anymore. It’s a strategic technology role.

What motivated you to get Google Cloud Certified?

I spent the first 15 years of my career doing cold server room roles, and I knew I had to get cloudy. I wanted to work with Google, and it was a no-brainer given the organization’s reputation for innovation. I knew this certification exam was the one to get me in the door. The Professional Google Workspace Administrator certification was required to level up as an administrator and to make sure our business kept getting the most out of Google Workspace.

How has the demand for certified Google Workspace Admins changed recently?

Demand has absolutely gone up. We are growing so much, and we need more professionals with this certification. It’s required for all of our new hires. When I see a candidate that already has the certification, they go to the top of the list. I’ll skip all the other resumes to find someone who has this experience. We’re searching globally—not just in North America—to find the right people to fill this strategic role.

Explore the new learning path

In order to keep up with the changing demands of this role, we’ve rebranded the Professional Collaboration Engineer Certification to the Professional Google Workspace Administrator Certification and updated the learning path. The learning path now aligns with the improved admin console. We’ve replaced the readings with videos for a better learning experience: in total, we added 17 new videos across 5 courses to match new features and functionality. Earn the Professional Google Workspace Administrator Certification to distinguish yourself among your peers and showcase your skills.

Related Article: Unlock collaboration with Google Workspace Essentials. Introducing Google Workspace Essentials Starter, a no-cost offering to bring modern collaboration to work.

  • General availability: Azure Database for PostgreSQL - Hyperscale (Citus) now FedRAMP High compliant
    by Azure service updates on March 30, 2022 at 4:01 pm

    Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure, is now compliant with FedRAMP High.

  • General availability: Asset certification in Azure Purview data catalog
    by Azure service updates on February 28, 2022 at 5:00 pm

    Data stewards can now certify assets that meet their organization's quality standards in the Azure Purview data catalog.

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 16, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • General availability: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 2, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus): New certifications
    by Azure service updates on January 19, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Azure Database for PostgreSQL – Hyperscale (Citus): New toolkit certifications generally available
    by Azure service updates on December 15, 2021 at 5:00 pm

    New Toolkit certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Azure VMware Solution achieves FedRAMP High Authorization
    by Azure service updates on September 15, 2021 at 11:53 pm

    With this certification, U.S. government and public sector customers can now use Azure VMware Solution as a compliant FedRAMP cloud computing environment, ensuring it meets the demanding standards for security and information protection.

  • Azure expands HITRUST certification across 51 Azure regions
    by Azure service updates on August 23, 2021 at 9:38 pm

    Azure expands offering and region coverage to Azure customers with its 2021 HITRUST validated assessment.

  • Azure Database for PostgreSQL - Hyperscale (Citus) now compliant with additional certifications
    by Azure service updates on June 9, 2021 at 4:00 pm

    New certifications are now available for Hyperscale (Citus) on Azure Database for PostgreSQL, a managed service running the open-source Postgres database on Azure.

  • Azure expands PCI DSS certification
    by Azure service updates on March 15, 2021 at 5:02 pm

    You can now leverage Azure’s Payment Card Industry Data Security Standard (PCI DSS) certification across all live Azure regions.

  • 172 Azure offerings achieve HITRUST certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure expands its depth of offerings to Azure customers with its latest independent HITRUST assessment.

  • Azure achieves its first PCI 3DS certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure’s PCI 3DS Attestation of Compliance, PCI 3DS Shared Responsibility Matrix, and PCI 3DS whitepaper are now available.

  • Azure Databricks Achieves FedRAMP High Authorization on Microsoft Azure Government
    by Azure service updates on November 25, 2020 at 5:00 pm

    With this certification, customers can now use Azure Databricks to process the U.S. government’s most sensitive, unclassified data in cloud computing environments, including data that involves the protection of life and financial assets.

  • New SAP HANA Certified Memory-Optimized Virtual Machines now available
    by Azure service updates on November 12, 2020 at 5:01 pm

    We are expanding our SAP HANA certifications, enabling you to run production SAP HANA workloads on the Edsv4 virtual machines sizes.

  • Azure achieves Service Organization Controls compliance for 14 additional services
    by Azure service updates on November 11, 2020 at 5:10 pm

    Azure gives you some of the industry’s broadest certifications for the critical SOC 1, 2, and 3 compliance offering, which is widely used around the world.

  • Announcing the unified Azure Certified Device program
    by Azure service updates on September 22, 2020 at 4:05 pm

    A unified and enhanced Azure Certified Device program was announced at Microsoft Ignite, expanding on previous Microsoft certification offerings that validate IoT devices meet specific capabilities and are built to run on Azure. This program offers a low-cost opportunity for device builders to increase visibility of their products while making it easy for solution builders and end customers to find the right device for their IoT solutions.

  • IoT Security updates for September 2020
    by Azure service updates on September 22, 2020 at 4:05 pm

    New Azure IoT Security product updates include improvements around monitoring, edge nesting and the availability of Azure Defender for IoT.

  • Azure Certified for Plug and Play is now available
    by Azure service updates on August 27, 2020 at 12:21 am

    IoT Plug and Play device certification is now available from Microsoft as part of the Azure Certified device program.

  • Azure France has achieved GSMA accreditation
    by Azure service updates on August 6, 2020 at 5:45 pm

    Azure has added an important compliance offering for telecommunications in France, the Global System for Mobile Communications Association (GSMA) Security Accreditation Scheme for Subscription Management (SAS-SM).

  • Azure Red Hat OpenShift is now ISO 27001 certified
    by Azure service updates on July 21, 2020 at 4:00 pm

    To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now ISO 27001 certified.

  • Azure Lighthouse updates—April 2020
    by Azure service updates on June 1, 2020 at 4:00 pm

    Several critical updates have been made to Azure Lighthouse, including FEDRAMP certification, delegation opt-out, and Azure Backup reports.

  • Azure NetApp Files—New certifications, increased SLA, expanded regional availability
    by Azure service updates on May 19, 2020 at 4:00 pm

    The SLA guarantee for Azure NetApp Files has increased to 99.99 percent. In addition, NetApp Files is now HIPAA and FedRAMP certified, and regional availability has been increased.

  • Kubernetes on Azure Stack Hub in GA
    by Azure service updates on February 25, 2020 at 5:00 pm

    We now support Kubernetes cluster deployment on Azure Stack Hub, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS Engine on Azure Stack Hub.

  • Azure Firewall Spring 2020 updates
    by Azure service updates on February 19, 2020 at 5:00 pm

    Excerpt: Azure Firewall is now ICSA Labs certified. In addition, several key Azure Firewall capabilities have recently been released into general availability (GA) and preview.

  • Azure IoT C# and Java SDKs release new long-term support (LTS) branches
    by Azure service updates on February 14, 2020 at 5:00 pm

    The Azure IoT Java and C# SDKs have each now released new long-term support (LTS) branches.

  • HPC Cache receives ISO certifications, adds stopping feature, and new region
    by Azure service updates on February 11, 2020 at 5:00 pm

    Azure HPC Cache has received new ISO 27001, 27018, and 27701 certifications, adds new features to manage storage caching in performance-driven workloads, and expands service access to Korea Central.

  • Azure Blueprint for FedRAMP High now available in new regions
    by Azure service updates on February 3, 2020 at 5:00 pm

    The Azure Blueprint for FedRAMP High is now available in both Azure Government and Azure Public regions. This is in addition to the Azure Blueprint for FedRAMP Moderate released in November, 2019.

  • Azure Databricks Is now HITRUST certified
    by Azure service updates on January 22, 2020 at 5:01 pm

    Azure Databricks is now certified for the HITRUST Common Security Framework (HITRUST CSF®), the most widely coveted security accreditation for the healthcare industry. With this certification, health care customers can now use volumes of clinical data to drive innovation using Azure Databricks, without any worry about security and risk.

  • Microsoft plans to establish new cloud datacenter region in Qatar
    by Azure service updates on December 11, 2019 at 8:00 pm

    Microsoft recently announced plans to establish a new cloud datacenter region in Qatar to deliver its intelligent, trusted cloud services and expand the Microsoft global cloud infrastructure to 55 cloud regions in 20 countries.

  • Azure NetApp Files HANA certification and new region availability
    by Azure service updates on November 4, 2019 at 5:00 pm

    Azure NetApp Files, one of the fastest-growing bare-metal Azure services, has achieved SAP HANA certification for both scale-up and scale-out deployments.

  • Azure achieves TruSight certification
    by Azure service updates on September 23, 2019 at 5:00 pm

    Azure achieved certification for TruSight, an industry-backed, best-practices third-party assessment utility.

  • IoT Plug and Play Preview is now available
    by Azure service updates on August 21, 2019 at 4:00 pm

    With IoT Plug and Play Preview, solution developers can start using Azure IoT Central to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play.

  • View linked GitHub activity from the Kanban board
    by Azure service updates on June 21, 2019 at 5:00 pm

    We continue to enhance the Azure Boards integration with GitHub. Now you can get information of your linked GitHub commits, pull requests and issues on your Kanban board. This information will give you a quick sense of where an item is at and allow you to directly navigate out to the GitHub commit, pull request, or issue for more details.

  • Video Indexer is now ISO, SOC, HiTRUST, FedRAMP, HIPAA, PCI certified
    by Azure service updates on April 2, 2019 at 9:08 pm

    Video Indexer has received new certifications to fit with enterprise certification requirements.

  • Azure South Africa regions are now available
    by Azure service updates on March 7, 2019 at 6:00 pm

    Azure services are available from new cloud regions in Johannesburg (South Africa North) and Cape Town (South Africa West), South Africa. The launch of these regions is a milestone for Microsoft.

  • Azure DevOps Roadmap update for 2019 Q1
    by Azure service updates on February 14, 2019 at 8:22 pm

    We updated the Features Timeline to provide visibility on our key investments for this quarter.

  • Azure Stack—FedRAMP High documentation now available
    by Azure service updates on November 1, 2018 at 7:00 pm

    FedRAMP High documentation is now available for Azure Stack customers.

  • Kubernetes on Azure Stack in preview
    by Azure service updates on November 1, 2018 at 7:00 pm

    We now support Kubernetes cluster deployment on Azure Stack, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS-Engine on Azure Stack.

  • Azure Stack Infrastructure—compliance certification guidance
    by Azure service updates on November 1, 2018 at 7:00 pm

    We have created documentation to describe how Azure Stack infrastructure satisfies regulatory technical controls for PCI-DSS and CSA-CCM.

  • Logic Apps is ISO, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant
    by Azure service updates on July 18, 2017 at 5:05 pm

    The Logic Apps feature of Azure App Service is now ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant.

  • Apache Kafka on HDInsight with Azure Managed Disks
    by Azure service updates on June 30, 2017 at 3:44 pm

    We're pleased to announce Apache Kafka with Azure Managed Disks Preview on the HDInsight platform. Users will now be able to deploy Kafka clusters with managed disks straight from the Azure portal, with no signup necessary.

  • Azure Backup for Windows Server system state
    by Azure service updates on June 14, 2017 at 10:54 pm

    Customers will now be able to perform comprehensive, secure, and reliable Windows Server recoveries. We will be extending the data backup capabilities of the Azure Backup agent so that it integrates with the Windows Server Backup feature, available natively on every Windows Server.

  • Azure Data Catalog is ISO, CSA STAR, HIPAA, EU Model Clauses compliant
    by Azure service updates on March 7, 2017 at 12:00 am

    Azure Data Catalog is ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, and EU Model Clauses compliant.

  • Azure compliance: Azure Cosmos DB certified for ISO 27001, HIPAA, and the EU Model Clauses
    by Azure service updates on March 25, 2016 at 10:00 am

    The Azure Cosmos DB team is excited to announce that Azure Cosmos DB is ISO 27001, HIPAA, and EU Model Clauses compliant.

  • Compliance updates for Azure public cloud
    by Azure service updates on March 16, 2016 at 9:24 pm

    We’re adding more certification coverage to our Azure portfolio, so regulated customers can take advantage of new services.

  • Protect and recover your production workloads in Azure
    by Azure service updates on October 2, 2014 at 5:00 pm

    With Azure Site Recovery, you can protect and recover your production workloads while saving on capital and operational expenditures.

  • ISO Certification expanded to include more Azure services
    by Azure service updates on January 17, 2014 at 1:00 am

    Azure ISO Certification expanded to include SQL Database, Active Directory, Traffic Manager, Web Sites, BizTalk Services, Media Services, Mobile Services, Service Bus, Multi-Factor Authentication, and HDInsight.


Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Azure/Microsoft Cloud Solution Architect — $141,748/year
Google Cloud Associate Engineer — $145,769/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year

Top 30 AWS Certified Developer Associate Exam Tips

AWS Certified Developer Associate Exam Prep

AWS Certified Developer Associate Exam Prep Urls

Get the free app at: android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen

iOs: https://apps.apple.com/ca/app/aws-certified-developer-assoc/id1511211095

PRO version with mock exam android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen

PRO version with mock exam ios: https://apps.apple.com/ca/app/aws-certified-dev-ass-dva-c01/id1506519319t

Top 30 AWS Certified Developer Associate Exam Tips

3

What to study: API Gateway [8-10% of the exam]: Lambda, IAM, and Cognito authorizers; cache invalidation; integration types (proxy vs custom, AWS vs HTTP); caching; import/export of OpenAPI (Swagger) specifications; stage variables; performance metrics
AWS topics for DVA-C01: API Gateway
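
For the authorizers topic, here is a minimal sketch of what a TOKEN Lambda authorizer returns to API Gateway, assuming Python for the Lambda runtime. The token check and principal ID below are placeholders, not a real authentication scheme; the event fields (`authorizationToken`, `methodArn`) and the policy shape follow the documented authorizer contract.

```python
# Minimal API Gateway TOKEN Lambda authorizer sketch (token check is a placeholder).

def build_policy(principal_id, effect, resource):
    """Return the IAM policy document API Gateway expects from an authorizer."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": resource,
            }],
        },
    }

def handler(event, context):
    token = event.get("authorizationToken", "")
    # Placeholder check: a real authorizer would validate a JWT or look up the token.
    effect = "Allow" if token == "allow-me" else "Deny"
    return build_policy("user|demo", effect, event["methodArn"])
```

API Gateway caches the returned policy for the configured TTL, which is why authorizer results pair naturally with the caching topics above.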

Invest in your future today by enrolling in this Azure Fundamentals - Microsoft Azure Certification and Training ebook below. This Azure Fundamentals Exam Prep Book will prepare you for the Azure Fundamentals AZ900 Certification Exam.


8

What to study: ELASTIC BEANSTALK: deployment policies and blue/green deployments; .ebextensions and config file usage; updating deployments; worker vs web tier; deployment packaging, files, and commands used; use cases
AWS topics for DVA-C01: AMAZON ELASTIC BEANSTALK
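
To make the .ebextensions bullet concrete, here is a sketch of a minimal config file carried inside the application source bundle. The option namespaces below are real Elastic Beanstalk namespaces, but the file name, values, and environment variable are illustrative examples only.

```python
# A minimal .ebextensions config, shown as a string for illustration.
# In a real bundle this text lives in a file such as .ebextensions/01-options.config.

EB_OPTIONS_CONFIG = """\
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_STAGE: production        # example environment variable
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 4
"""

def bundle_path(config_name):
    """Where a config file must live inside the source bundle for EB to read it."""
    return f".ebextensions/{config_name}"
```

The exam tends to probe where these files live (the bundle root's .ebextensions folder) and when they are applied (during environment creation and deployments).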

With average increases in salary of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate or Cloud Practitioner certification. Get the books below for real practice exams:

Use the promo codes: W6XM9XP4TWN9 or T6K9P4J9JPPR or 9LWMYKJ7TWPN or TN4NTERJYHY4 for AWS CCP eBook at Apple iBook store.


Use Promo Codes XKPHAATA6LRL or 4XJRP9XLT9XL or LTFFY6JA33EL or HKRMTMTHFMAM or 4XHAFTWT4FN6 for AWS SAA-C03 eBook at Apple iBook store.



Use Promo Codes EF46PT44LXPN or L6L9R9LKEFFR or TWELPA4JFJWM for Azure Fundamentals eBook at Apple iBook store.

11


We know you like your hobbies, and especially coding. We do too, but you should find time to build the skills that’ll drive your career into six figures. Cloud skills and certifications can be just the thing you need to make the move into cloud or to level up and advance your career. 85% of hiring managers say cloud certifications make a candidate more attractive. Start your cloud journey with these excellent books below:

What to study: CODECOMMIT, CODEBUILD, CODEDEPLOY, CODEPIPELINE, CODESTAR: know how each tool fits into the CI/CD pipeline; the various files used, such as appspec.yml and buildspec.yml; the process for packaging and deployment; deployment types with CodeDeploy, including the different destination services (e.g. Lambda, ECS, EC2); manual approvals with CodePipeline
AWS topics for DVA-C01: CODECOMMIT, CODEBUILD, CODEDEPLOY, CODEPIPELINE, CODESTAR
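
To keep the file roles straight, here is a hedged sketch of a minimal CodeBuild buildspec, expressed as a Python dict rather than the buildspec.yml it would normally be. The phase names and the artifacts key match the buildspec schema; the commands are placeholders.

```python
# Minimal buildspec sketch: CodeBuild runs the phases in order, and the
# declared artifacts are what CodePipeline hands to the next stage (e.g. CodeDeploy,
# which in turn reads its own file, appspec.yml, to drive the deployment).

BUILDSPEC = {
    "version": 0.2,
    "phases": {
        "install": {"commands": ["echo install deps"]},
        "build":   {"commands": ["echo compile and test"]},
    },
    "artifacts": {"files": ["**/*"]},
}

def phase_commands(spec, phase):
    """Pull the command list for one phase, or [] if the phase is absent."""
    return spec.get("phases", {}).get(phase, {}).get("commands", [])
```

A common exam trap is mixing the files up: buildspec.yml belongs to CodeBuild, appspec.yml to CodeDeploy.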

12

What to study: AMAZON CLOUDFRONT  
AWS topics for DVA-C01: AMAZON CLOUDFRONT

16

What to study: STEP FUNCTIONS: Step Functions state machines; using them to coordinate multiple Lambda function invocations
AWS topics for DVA-C01: STEP FUNCTIONS
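
As a minimal sketch of coordinating Lambda invocations, here is a two-state machine definition. The Amazon States Language keys (StartAt, States, Type, Next, End) are standard; the function names and ARNs are placeholders.

```python
import json

# Two-state Amazon States Language sketch chaining two Lambda invocations.
# The ARNs are placeholders; a real definition would reference deployed functions.

STATE_MACHINE = {
    "StartAt": "ExtractOrder",
    "States": {
        "ExtractOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract",  # placeholder
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:charge",  # placeholder
            "End": True,
        },
    },
}

# The JSON string form is what you would pass when creating the state machine.
definition_json = json.dumps(STATE_MACHINE)
```

Because the state machine, not the functions, owns the sequencing and retries, each Lambda stays small and stateless.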

18

Know what instance types can be launched from which types of AMIs, and which instance types require an HVM AMI
AWS HVM AMI

19

Have a good understanding of how Route53 supports all of the different DNS record types, and when you would use certain ones over others.
Route 53 supports all of the different DNS record types

20

Know which services have native encryption at rest within the region, and which do not.
AWS Services with native Encryption at rest

21

Kinesis Sharding:
#AWS Kinesis Sharding
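
A back-of-envelope sizing sketch helps here, based on the documented per-shard limits (1 MB/s or 1,000 records/s on writes, 2 MB/s on reads). The example numbers in the usage note are illustrative.

```python
import math

# Estimate the shard count for a Kinesis data stream from its expected
# throughput, using the per-shard capacity limits.

def shards_needed(write_mb_s, write_records_s, read_mb_s):
    """Return the minimum number of shards covering all three limits."""
    return max(
        math.ceil(write_mb_s / 1.0),        # 1 MB/s write per shard
        math.ceil(write_records_s / 1000.0),  # 1,000 records/s write per shard
        math.ceil(read_mb_s / 2.0),         # 2 MB/s read per shard
        1,                                   # a stream always has at least one shard
    )
```

For example, a stream ingesting 4.5 MB/s at 2,000 records/s with 6 MB/s of reads needs 5 shards, because the write bandwidth is the binding limit.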

22

Handling SSL Certificates in ELB ( Wildcard certificate vs SNI )
#AWS Handling SSL Certificates in ELB ( Wildcard certificate vs SNI )

23

Different types of Aurora Endpoints
#AWS Different types of Aurora Endpoints

24

The Default Termination Policy for Auto Scaling Group (Oldest launch configuration vs Instance Protection)
#AWS Default Termination Policy for Auto Scaling Group
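
The default policy can be sketched as a selection function. This is a simplified model that ignores the initial Availability Zone balancing step; the instance fields below are illustrative, not an AWS API shape, and instances with instance (scale-in) protection are never candidates.

```python
# Simplified sketch of the default Auto Scaling termination choice:
# prefer the instance with the oldest launch configuration, then the one
# closest to the next billing hour; protected instances are excluded.

def pick_instance_to_terminate(instances):
    """instances: dicts with 'id', 'launch_config_version' (lower = older),
    'minutes_to_billing_hour' (smaller = closer), optional 'protected'."""
    candidates = [i for i in instances if not i.get("protected", False)]
    if not candidates:
        return None  # nothing eligible for scale-in
    chosen = min(
        candidates,
        key=lambda i: (i["launch_config_version"], i["minutes_to_billing_hour"]),
    )
    return chosen["id"]
```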

25

Use AWS cheat sheets – I also found the cheat sheets provided by Tutorials Dojo very helpful. In my opinion, they are better than Jayendra Patil’s blog since they contain more updated information that complements your review notes.
#AWS Cheat Sheet

26

Watch this three-hour exam readiness video; it is a very recent webinar that covers what to expect in the exam.
#AWS Exam Prep Video

27

Start off by watching Ryan’s videos, and try to completely focus on the hands-on work. Take your time to understand what you are trying to learn and achieve in those lab sessions.
#AWS Exam Prep Video

28

Do not rush into completing the videos. Take your time and hone the basics. Focus and spend a lot of time on the backbone of AWS infrastructure – Compute/EC2, Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to ensure you pass the certification comfortably.
#AWS Exam Prep Video

29

Make sure you go through the resources section and also the AWS documentation for each component. Go over the FAQs. If you have a question, please post it in the community. Trust me, each answer here helps you understand more about AWS.
#AWS Faqs

30

Like any other product or service, each AWS offering comes in different flavors. Take EC2 as an example (Spot, Reserved, Dedicated, On-Demand, etc.). Make sure you understand what they are and the pros and cons of each flavor. The same applies to all other offerings.
#AWS Services

31

Follow Neal K. Davis on LinkedIn and read his updates about DVA-C01.
#AWS Services

What is the AWS Certified Developer Associate Exam?

The AWS Certified Developer – Associate examination is intended for individuals who perform a development role and have one or more years of hands-on experience developing and maintaining an AWS-based application. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS

There are two types of questions on the examination:

  • Multiple-choice: Has one correct response and three incorrect responses (distractors).
  • Multiple-response: Has two or more correct responses out of five or more options.

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer Associate info and details

The AWS Certified Developer Associate Exam is a multiple-choice, multiple-answer exam. Here is the exam overview:

Top

Other AWS Facts and Summaries and Questions/Answers Dump

Top

Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Practitioner Exam.

Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap

AWS Developer Associate Exam Whitepapers:

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers.

Top

Online Training and Labs for AWS Certified Developer Associate Exam

Top

AWS Certified Developer Associate Jobs

Top 60 AWS Solution Architect Associate Exam Tips

AWS Certified Solutions Architect Exam Prep

SAA Exam Prep App urls

Solution Architect FREE version:
Google Play Store (Android)
Apple Store (iOS)
Pwa: Web
Amazon android: Amazon App Store (Android)
Microsoft/Windows10:

0

In a nutshell, below are the resources and apps that you need for SAA-C02 exam prep:

Read the FAQs and learn more about the following topics in detail: Load Balancing, DynamoDB, EBS, Multi-AZ RDS, Aurora, EFS, NLB, ALB, Auto Scaling, DynamoDB (latency), Aurora (performance), Multi-AZ RDS (high availability), Throughput Optimized EBS (highly sequential). Read the Quizlet note cards about CloudWatch, CloudTrail, KMS, Elastic Beanstalk, and OpsWorks here. Read Dexter’s barely-passed AWS cram notes about RPO vs RTO, HA vs FT, Undifferentiated Heavy Lifting, Access Management Basics, Shared Responsibility Model, and Cloud Service Models.
AWS topics for SAA-C01 and SAA-C02

1

Know what instance types can be launched from which types of AMIs, and which instance types require an HVM AMI
AWS HVM AMI

2

Understand bastion hosts, and which subnet one might live on. Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it then acts as a ‘jump’ server, allowing you to use SSH or RDP to log in to other instances (within private subnets) deeper within your network. When properly configured through the use of security groups and network ACLs, the bastion essentially acts as a bridge to your private instances via the Internet.
Bastion Hosts
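
The security group side of this pattern can be sketched as two rule sets. The CIDR block and group ID below are placeholders; the rule shape mirrors the EC2 ingress-rule parameters, but this is an illustration, not a deployable template.

```python
# Sketch of the two security groups a bastion setup typically uses:
# the bastion accepts SSH only from a known admin range, and the private
# instances accept SSH only from the bastion's security group.

ADMIN_CIDR = "203.0.113.0/24"  # placeholder office IP range

bastion_sg_ingress = [
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": ADMIN_CIDR}]},            # SSH only from admins
]

private_sg_ingress = [
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "UserIdGroupPairs": [{"GroupId": "sg-bastion"}]},  # SSH only via the bastion SG
]
```

Referencing the bastion's security group (rather than an IP) in the private subnet's rules is what makes the bastion the only SSH path in.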

3

Know the difference between Directory Service’s AD Connector and Simple AD. Use Simple AD if you need an inexpensive Active Directory–compatible service with the common directory features. AD Connector lets you simply connect your existing on-premises Active Directory to AWS.
AD Connector and Simple AD

4

Know how to enable cross-account access with IAM: To delegate permission to access a resource, you create an IAM role that has two policies attached. The permissions policy grants the user of the role the needed permissions to carry out the desired tasks on the resource. The trust policy specifies which trusted accounts are allowed to grant its users permissions to assume the role. The trust policy on the role in the trusting account is one-half of the permissions. The other half is a permissions policy attached to the user in the trusted account that allows that user to switch to, or assume the role.
Enable cross-account access with IAM
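The two halves of the delegation described above can be written out as JSON policy documents. This is a minimal sketch; the account IDs, bucket name, and role name are hypothetical placeholders:

```python
import json

# Attached to the role in the trusting account (111111111111):
# what the role is allowed to do on the resource.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
    }],
}

# Also attached to the role: the trust policy naming which
# account's principals may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Attached to the user in the trusted account (222222222222):
# permission to switch to (assume) the role.
assume_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::111111111111:role/CrossAccountS3Read",
    }],
}

print(json.dumps(trust_policy, indent=2))
```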

If you are looking for an all-in-one solution to help you prepare for the AWS Cloud Practitioner Certification Exam, look no further than this AWS Cloud Practitioner CCP CLFC01 book below.


5

Have a good understanding of how Route53 supports all of the different DNS record types, and when you would use certain ones over others.
Route 53 supports all of the different DNS record types

Invest in your future today by enrolling in this Azure Fundamentals - Microsoft Azure Certification and Training ebook below. This Azure Fundamentals Exam Prep Book will prepare you for the Azure Fundamentals AZ900 Certification Exam.


6

Know which services have native encryption at rest within the region, and which do not.
AWS Services with native Encryption at rest

7

Know which services allow you to retain full admin privileges of the underlying EC2 instances
EC2 Full admin privilege

8

Know when Elastic IPs are free and when they are not: if you associate additional EIPs with an instance, you will be charged for each additional EIP associated with that instance per hour on a pro rata basis. Additional EIPs are only available in Amazon VPC. To ensure efficient use of Elastic IP addresses, AWS imposes a small hourly charge when these IP addresses are not associated with a running instance, or when they are associated with a stopped instance or unattached network interface.
When are AWS Elastic IPs Free or not?
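The billing rule above boils down to one condition. A toy calculation for illustration only (the hourly rate here is an assumed example value, not an official AWS price):

```python
# An EIP is free only while it is the first EIP on a RUNNING instance.
RATE_PER_HOUR = 0.005  # assumed example rate, not an official price

def eip_hourly_charge(on_running_instance, is_first_eip_on_instance):
    if on_running_instance and is_first_eip_on_instance:
        return 0.0
    return RATE_PER_HOUR  # unattached, stopped instance, or additional EIP

assert eip_hourly_charge(True, True) == 0.0              # first EIP, running: free
assert eip_hourly_charge(True, False) == RATE_PER_HOUR   # additional EIP: billed
assert eip_hourly_charge(False, True) == RATE_PER_HOUR   # stopped/unattached: billed
```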

9

Know the four high-level categories of information Trusted Advisor supplies.
#AWS Trusted advisor

10

Know how to troubleshoot a connection time out error when trying to connect to an instance in your VPC. You need a security group rule that allows inbound traffic from your public IP address on the proper port, you need a route that sends all traffic destined outside the VPC (0.0.0.0/0) to the Internet gateway for the VPC, the network ACLs must allow inbound and outbound traffic from your public IP address on the proper port, etc.
#AWS Connection time out error
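The checklist above can be turned into a simple diagnostic routine. This is a toy model for study purposes; the boolean inputs are simplified stand-ins for the real VPC configuration objects:

```python
def diagnose_timeout(sg_allows_port, nacl_allows_inbound,
                     nacl_allows_outbound, route_to_igw):
    """Return the list of likely causes of a connection timeout."""
    problems = []
    if not sg_allows_port:
        problems.append("security group: no inbound rule from your IP on the port")
    if not (nacl_allows_inbound and nacl_allows_outbound):
        problems.append("network ACL: must allow BOTH inbound and outbound "
                        "(NACLs are stateless, unlike security groups)")
    if not route_to_igw:
        problems.append("route table: missing 0.0.0.0/0 -> internet gateway")
    return problems

# Everything configured correctly: no findings.
assert diagnose_timeout(True, True, True, True) == []
# A stateless NACL blocking return traffic still times the connection out:
assert "network ACL" in diagnose_timeout(True, True, False, True)[0]
```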

With average salary increases of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate or Cloud Practitioner certification. Get the books below for real practice exams:

Use the promo codes: W6XM9XP4TWN9 or T6K9P4J9JPPR or 9LWMYKJ7TWPN or TN4NTERJYHY4 for AWS CCP eBook at Apple iBook store.


Use Promo Codes XKPHAATA6LRL 4XJRP9XLT9XL or LTFFY6JA33EL or HKRMTMTHFMAM or 4XHAFTWT4FN6 for AWS SAA-C03 eBook at Apple iBook store



Use Promo Codes EF46PT44LXPN or L6L9R9LKEFFR or TWELPA4JFJWM for Azure Fundamentals eBook at Apple iBook store.

11

Be able to identify multiple possible use cases and eliminate non-use cases for SWF.
#AWS

12

Understand how you might set up consolidated billing and cross-account access such that individual divisions’ resources are isolated from each other, but corporate IT can oversee all of it.
#AWS Set up consolidated billing

13


We know you like your hobbies and especially coding, We do too, but you should find time to build the skills that’ll drive your career into Six Figures. Cloud skills and certifications can be just the thing you need to make the move into cloud or to level up and advance your career. 85% of hiring managers say cloud certifications make a candidate more attractive. Start your cloud journey with these excellent books below:

Know how you would go about making changes to an Auto Scaling group, fully understanding what you can and can’t change. “You can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.”
#AWS Make Change to Auto Scaling group
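A simplified in-memory model of the rule above (this mimics the workflow, it is not the real AWS API): launch configurations are immutable, so “changing” one means creating a new launch configuration and pointing the Auto Scaling group at it.

```python
launch_configs = {}

def create_launch_config(name, ami, instance_type):
    launch_configs[name] = {"ami": ami, "type": instance_type}

def update_asg_launch_config(asg, lc_name):
    asg["launch_config"] = lc_name  # existing instances are NOT replaced

create_launch_config("lc-v1", "ami-111", "t3.micro")
asg = {"launch_config": "lc-v1", "instances": [{"lc": "lc-v1"}]}

# To change instance type, create a new LC and switch the ASG over:
create_launch_config("lc-v2", "ami-111", "t3.large")
update_asg_launch_config(asg, "lc-v2")

assert asg["launch_config"] == "lc-v2"
# Instances launched earlier keep their old configuration until replaced:
assert asg["instances"][0]["lc"] == "lc-v1"
```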


15

Know which field you use to run a script upon launching your instance.
#AWS User data script
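That field is EC2 user data, which runs once at first boot (via cloud-init on Amazon Linux). The EC2 API expects it base64-encoded; a minimal sketch of preparing such a script (the package choices are illustrative):

```python
import base64

user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# SDKs and the console usually encode this for you; the raw API wants base64.
encoded = base64.b64encode(user_data.encode()).decode()

# Round-trips cleanly; the shebang line is what makes it run as a script.
assert base64.b64decode(encoded).decode().startswith("#!/bin/bash")
```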

16

Know how DynamoDB (durable, and you can pay for strong consistency), Elasticache (great for speed, not so durable), and S3 (eventual consistency results in lower latency) compare to each other in terms of durability and low latency.
#AWS DynamoDB consistency

17

Know the difference between bucket policies, IAM policies, and ACLs for use with S3, and examples of when you would use each. “With IAM policies, companies can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do. With bucket policies, companies can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address. With ACLs, customers can grant specific permissions (e.g. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.”
#AWS Difference between bucket policies
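As an example of the “restrict access based on an aspect of the request” point, here is a sketch of a bucket policy with an IP condition. The bucket name and CIDR range are placeholders:

```python
import json

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowWriteFromOfficeOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/uploads/*",
        # Condition keys let bucket policies filter on request attributes
        # (source IP here; referer, TLS, etc. are also possible).
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

print(json.dumps(bucket_policy, indent=2))
```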

18

Know when and how you can encrypt snapshots.
#AWS EBS Encryption

19

Understand how you can use ELB cross-zone load balancing to ensure even distribution of traffic to EC2 instances in multiple AZs registered with a load balancer.
#AWS ELB cross-zone load balancing

20

Know how you would allow users to log into the AWS console using Active Directory integration. Here is a link to some good reference material.
#AWS Log into the AWS console using Active Directory integration

21

Spot instances are good for cost optimization, even if it seems you might need to fall back to On-Demand instances if you wind up getting kicked off them and the timeline grows tighter. The primary (but not only) factor is whether you can gracefully handle instances that die on you, which is pretty much how you should always design everything anyway!
#AWS Spot instances

22

The term “use case” is not the same as “function” or “capability”. A use case is something that your app/system will need to accomplish, not just behaviour that you will get from that service. In particular, a use case doesn’t require that the service be a 100% turnkey solution for that situation, just that the service plays a valuable role in enabling it.
#AWS use case

23

There might be extra, unnecessary information in some of the questions (red herrings), so try not to get thrown off by them. Understand what services can and can’t do, but don’t ignore “obvious”-but-still-correct answers in favour of super-tricky ones.
#AWS Exam Answers: Distractors

24

If you don’t know what they’re trying to ask, in a question, just move on and come back to it later (by using the helpful “mark this question” feature in the exam tool). You could easily spend way more time than you should on a single confusing question if you don’t triage and move on.
#AWS Exam: Skip questions that are vague and come back to them later

25

Some exam questions required you to understand features and use cases of: VPC peering, cross-account access, DirectConnect, snapshotting EBS RAID arrays, DynamoDB, spot instances, Glacier, AWS/user security responsibilities, etc.
#AWS

26

The 30 Day constraint in the S3 Lifecycle Policy before transitioning to S3-IA and S3-One Zone IA storage classes
#AWS S3 lifecycle policy
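The 30-day constraint can be checked against a lifecycle rule written as data. A minimal sketch (rule ID and day counts are illustrative):

```python
# Objects must sit in S3 Standard for at least 30 days before
# transitioning to STANDARD_IA or ONEZONE_IA.
MIN_DAYS_BEFORE_IA = 30

lifecycle_rule = {
    "ID": "archive-old-logs",
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"},
    ],
}

def valid_ia_transition(rule):
    return all(t["Days"] >= MIN_DAYS_BEFORE_IA
               for t in rule["Transitions"]
               if t["StorageClass"] in ("STANDARD_IA", "ONEZONE_IA"))

assert valid_ia_transition(lifecycle_rule)
# A 7-day transition to IA violates the minimum and would be rejected:
assert not valid_ia_transition(
    {"Transitions": [{"Days": 7, "StorageClass": "STANDARD_IA"}]})
```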

27

Enabling Cross-region snapshot copy for an AWS KMS-encrypted cluster
Redis Auth / Amazon MQ / IAM DB Authentication

#AWS Cross-region snapshot copy for an AWS KMS-encrypted cluster

28

Know that FTP is using TCP and not UDP (Helpful for questions where you are asked to troubleshoot the network flow)
TCP and UDP

29

Know the Difference between S3, EBS and EFS
#AWS Difference between S3, EBS and EFS

30

Kinesis Sharding:
#AWS Kinesis Sharding
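For sharding questions, it helps to remember the per-shard limits: roughly 1 MB/s (or 1,000 records/s) ingest and 2 MB/s egress per shard. A back-of-envelope sizing calculation:

```python
import math

def shards_needed(write_mb_per_s, write_records_per_s, read_mb_per_s):
    """Minimum shard count given per-shard limits of 1 MB/s or
    1,000 records/s for writes, and 2 MB/s for reads."""
    return max(
        math.ceil(write_mb_per_s / 1.0),
        math.ceil(write_records_per_s / 1000.0),
        math.ceil(read_mb_per_s / 2.0),
    )

# e.g. 5 MB/s in, 3,500 records/s, 8 MB/s out
# -> limited by write throughput: 5 shards
assert shards_needed(5, 3500, 8) == 5
```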

31

Handling SSL Certificates in ELB ( Wildcard certificate vs SNI )
#AWS Handling SSL Certificates in ELB ( Wildcard certificate vs SNI )

32

Difference between OAI, Signed URL (CloudFront) and Pre-signed URL (S3)
#AWS Difference between OAI, Signed URL (CloudFront) and Pre-signed URL (S3)

33

Different types of Aurora Endpoints
#AWS Different types of Aurora Endpoints

34

The Default Termination Policy for Auto Scaling Group (Oldest launch configuration vs Instance Protection)
#AWS Default Termination Policy for Auto Scaling Group

35

Watch Acloud Guru Videos Lectures while commuting / lunch break – Reschedule the exam if you are not yet ready
#AWS ACloud Guru

36

Watch Linux Academy Videos Lectures while commuting / lunch break – Reschedule the exam if you are not yet ready
#AWS Linux Academy

37

Watch Udemy Videos Lectures while commuting / lunch break – Reschedule the exam if you are not yet ready
#AWS Udemy

38

The Udemy practice test interface is good in that it pinpoints your weak areas, so what I did was re-watch all the videos for the questions I got wrong. Since I was able to gauge my exam readiness, I decided to push my exam back two more weeks to focus on completing the practice tests.
#AWS Udemy

39

Use AWS Cheatsheets – I also found the cheatsheets provided by Tutorials Dojo very helpful. In my opinion, it is better than Jayendrapatil Patil’s blog since it contains more updated information that complements your review notes.
#AWS Cheat Sheet

40

Watch this 3-hour exam readiness video; it is a recent webinar that covers what is expected in the exam.
#AWS Exam Prep Video

41

Start off watching Ryan’s videos. Try and completely focus on the hands on. Take your time to understand what you are trying to learn and achieve in those LAB Sessions.
#AWS Exam Prep Video

42

Do not rush into completing the videos. Take your time and hone the basics. Focus and spend a lot of time on the backbone of AWS infrastructure: Compute/EC2, Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to pass the certification comfortably.
#AWS Exam Prep Video

43

Make sure you go through resources section and also AWS documentation for each components. Go over FAQs. If you have a question, please post it in the community. Trust me, each answer here helps you understand more about AWS.
#AWS Faqs

44

Like any other product/service, each AWS offering has a different flavor. I will take an example of EC2 (Spot/Reserved/Dedicated/On Demand etc.). Make sure you understand what they are, what are the pros/cons of each of these flavors. Applies for all other offerings too.
#AWS Services

45

Make sure you attempt all quizzes after each section, but do not treat these quizzes as practice exams. They are designed mostly to test your knowledge of the section you just finished. The exam itself is designed to test you with scenarios and questions where you will need to recall and apply your knowledge of the different AWS technologies/services you learn over multiple lectures.
#AWS Services

46

I personally do not recommend attempting a practice or simulator exam until you have done all of the above; it was a little overwhelming for me. I had thoroughly gone over the videos and understood the concepts pretty well, but once I opened the exam simulator the questions felt pretty difficult. I also felt that the videos did not cover a lot of topics, but later I realized that, given the vastness of AWS services and offerings, it is really difficult to encompass all of them and their details in the course content. The fact that these services keep changing so often does not help.
#AWS Services

47

Go back and make a note of all topics that felt unfamiliar to you. Go through the resources section and find the links to the AWS documentation. After going over them, you should gain at least 5-10% more knowledge of AWS. Treat the online courses as a way to get a thorough understanding of the basics and a strong foundation for your AWS knowledge, but once you are done with the videos, make sure you spend a lot of time on the AWS documentation and FAQs. There are many topics and sub-topics that may not be covered in the course, and you will need to know at least their basic functionality to do well in the exam.
#AWS Services

48

Once you start taking practice exams, they may seem really difficult at the beginning, so please do not panic if you find the questions complicated or difficult. In my opinion they are worded to sound complicated, but they are not. Be calm and read each question very carefully; many questions contain information that is not relevant to the solution you are expected to provide. Read the question slowly, and read it again until you understand what is expected of you.
#AWS Services

49

With each practice exam you will come across topics that you may need to scale your knowledge on or learn them from scratch.
#AWS Services

50

With each test and the subsequent revision, you will surely feel more confident.
You have 130 minutes for the questions: 2 minutes per question, which is plenty of time.
Take at least 8-10 practice tests. The ones on Udemy/Tutorials Dojo are really good, and if you are an A Cloud Guru member, the exam simulator is really good too.
Manage your time well and keep your patience. Someone mentioned in one of the discussions not to underestimate the mental focus and stamina needed to sit through 130 minutes solving these questions, and it is really true.
Do not waste any of those precious 130 minutes. While answering, flag/mark questions you are not completely sure about. Even if you finish early, spend the remaining time reviewing your answers; I was able to review 40 of my answers at the end of the test and corrected at least 3 of them (which is 4-5% of the total score, I think).
So in short: put a lot of focus on making your foundations strong, go through the AWS documentation and FAQs, try to envision how all of the AWS components can fit together to provide an optimal solution, and keep calm.
This video gives an outline of the exam; it is a must-watch before or after Ryan’s course. #AWS Services

51

Walking you through how to best prepare for the AWS Certified Solutions Architect Associate SAA-C02 exam in 5 steps:
1. Understand the exam blueprint
2. Learn about the new topics included in the SAA-C02 version of the exam
3. Use the many FREE resources available to gain and deepen your knowledge
4. Enroll in our hands-on video course to learn AWS in depth
5. Use practice tests to fully prepare yourself for the exam and assess your exam readiness
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS

52

Storage:
1. Know your different Amazon S3 storage tiers! You need to know the use cases, features and limitations, and relative costs; e.g. retrieval costs.
2. Amazon S3 lifecycle policies is also required knowledge — there are minimum storage times in certain tiers that you need to know.
3. For Glacier, you need to understand what it is, what it’s used for, and what the options are for retrieval times and fees.
4. For the Amazon Elastic File System (EFS), make sure you’re clear which operating systems you can use with it (just Linux).
5. For the Amazon Elastic Block Store (EBS), make sure you know when to use the different tiers including instance stores; e.g. what would you use for a datastore that requires the highest IO and the data is distributed across multiple instances? (Good instance store use case)
6. Learn about Amazon FSx. You’ll need to know about FSx for Windows and Lustre.
7. Know how to improve Amazon S3 performance including using CloudFront, and byte-range fetches — check out this whitepaper.
8. Make sure you understand about Amazon S3 object deletion protection options including versioning and MFA delete.
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
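On point 7 above, a byte-range fetch splits one large S3 GET into several parallel ranged requests using the standard HTTP `Range` header. A small sketch of how the ranges are computed (object and chunk sizes are illustrative):

```python
def byte_ranges(object_size, chunk_size):
    """Yield (start, end) inclusive byte ranges covering the object."""
    for start in range(0, object_size, chunk_size):
        yield start, min(start + chunk_size, object_size) - 1

ranges = list(byte_ranges(object_size=25_000_000, chunk_size=8_000_000))
headers = [f"Range: bytes={s}-{e}" for s, e in ranges]

assert ranges[0] == (0, 7_999_999)
assert ranges[-1] == (24_000_000, 24_999_999)
assert len(ranges) == 4  # four parallel GETs instead of one serial download
```

Each header would accompany its own GET request, letting the downloads run concurrently and improving aggregate throughput.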

53

Compute:
1. You need to have a good understanding of the options for how to scale an Auto Scaling Group using metrics such as SQS queue depth, or numbers of SNS messages.
2. Know your different Auto Scaling policies including Target Tracking Policies.
3. Read up on High Performance Computing (HPC) with AWS. You’ll need to know about Amazon FSx with HPC use cases.
4. Know your placement groups. Make sure you can differentiate between spread, cluster and partition; e.g. what would you use for lowest latency? What about if you need to support an app that’s tightly coupled? Within an AZ or cross AZ?
5. Make sure you know the difference between Elastic Network Adapters (ENAs), Elastic Network Interfaces (ENIs) and Elastic Fabric Adapters (EFAs).
6. For the Amazon Elastic Container Service (ECS), make sure you understand how to assign IAM policies to ECS for providing S3 access. How can you decouple an ECS data processing process — Kinesis Firehose or SQS?
7. Make sure you’re clear on the different EC2 pricing models including Reserved Instances (RI) and the different RI options such as scheduled RIs.
8. Make sure you know the maximum execution time for AWS Lambda (it’s currently 900 seconds or 15 minutes).
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS

54

Network
1. Understand what AWS Global Accelerator is and its use cases.
2. Understand when to use CloudFront and when to use AWS Global Accelerator.
3. Make sure you understand the different types of VPC endpoint and which require an Elastic Network Interface (ENI) and which require a route table entry.
4. You need to know how to connect multiple accounts; e.g. should you use VPC peering or a VPC endpoint?
5. Know the difference between PrivateLink and ClassicLink.
6. Know the patterns for extending a secure on-premises environment into AWS.
7. Know how to encrypt AWS Direct Connect (you can use a Virtual Private Gateway / AWS VPN).
8. Understand when to use Direct Connect vs Snowball to migrate data — lead time can be an issue with Direct Connect if you’re in a hurry.
9. Know how to prevent circumvention of Amazon CloudFront; e.g. Origin Access Identity (OAI) or signed URLs / signed cookies.
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS

55

Databases
1. Make sure you understand Amazon Aurora and Amazon Aurora Serverless.
2. Know which RDS databases can have Read Replicas and whether you can read from a Multi-AZ standby.
3. Know the options for encrypting an existing RDS database; e.g. only at creation time otherwise you must encrypt a snapshot and create a new instance from the snapshot.
4. Know which databases are key-value stores; e.g. Amazon DynamoDB.
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS

56

Application Integration
1. Make sure you know the use cases for the Amazon Simple Queue Service (SQS), and Simple Notification Service (SNS).
2. Understand the differences between Amazon Kinesis Firehose and SQS and when you would use each service.
3. Know how to use Amazon S3 event notifications to publish events to SQS — here’s a good “How To” article.
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS

57

Management and Governance
1. You’ll need to know about AWS Organizations; e.g. how to migrate an account between organizations.
2. For AWS Organizations, you also need to know how to restrict actions using service control policies attached to OUs.
3. Understand what AWS Resource Access Manager is.
AWS CERTIFIED SOLUTIONS ARCHITECT SAA-C02 : HOW TO BEST PREPARE IN 5 STEPS
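On point 2 above, a service control policy (SCP) attached to an OU restricts what member accounts can do. A minimal illustrative sketch (the action chosen here is just one common example):

```python
import json

# Deny member accounts the ability to leave the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyLeavingOrg",
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*",
    }],
}

# Note: SCPs never GRANT permissions; they only bound what IAM can allow.
print(json.dumps(scp, indent=2))
```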

About this App

The AWS Certified Solutions Architect Associate Exam Preparation and Readiness Quiz App (SAA-C01, SAA-C02, SAA) helps you prepare and train for the AWS Certified Solutions Architect Associate exam with various questions and answers dumps.

This app provides updated questions and answers and an intuitive, responsive interface, allowing you to browse questions horizontally and browse tips and resources vertically after completing a quiz.

Features:

  • 100+ Questions and Answers updated frequently to get you AWS certified.
  • Quiz with score tracker, countdown timer, and highest-score saving. View answers after completing the quiz for each category.
  • Can only see answers after completing the quiz.
  • Show/Hide button option for answers. Link to PRO Version to see all answers for each category
  • Ability to navigate through questions for each category using next and previous button.
  • Resource info page about the answer for each category and Top 60 Tips to succeed in the exam.
  • Prominent Cloud Evangelist latest tweets and Technology Latest News Feed
  • The app helps you study and practice from your mobile device with an intuitive interface.
  • SAA-C01 and SAA-C02 compatible

The questions and Answers are divided in 4 categories:

  • Design High Performing Architectures,
  • Design Cost Optimized Architectures,
  • Design Secure Applications And Architectures,
  • Design Resilient Architecture,

The questions and answers cover the following topics: AWS VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, CodePipeline, CodeDeploy, TCO Calculator, SES, Amazon Lex, EBS, ELB, Auto Scaling, RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Braket, AWS Billing and Pricing, AWS Simple Monthly Calculator, AWS cost calculator, EC2 on-demand pricing, AWS Pay As You Go, AWS No Upfront Cost, Cost Explorer, AWS Organizations, consolidated billing, Instance Scheduler, on-demand instances, Reserved Instances, Spot Instances, CloudFront, web hosting on S3, S3 storage classes, AWS Regions, AWS Availability Zones, Trusted Advisor, AWS SDK, EBS volumes, containers, KMS, read replicas, snapshots, auto shutdown of EC2 instances, high availability, elasticity, virtual machines, caching, security, bastion hosts, S3 lifecycle policies, Kinesis sharding, NLB, ALB, Multi-AZ RDS (high availability), DynamoDB (latency), Aurora (performance), Throughput Optimized EBS (highly sequential), SAA-C01, SAA-C02, Elastic Beanstalk, OpsWorks, RPO vs RTO, HA vs FT, Undifferentiated Heavy Lifting, Access Management Basics, Shared Responsibility Model, Cloud Service Models, Design High Performing Architectures, Design Cost Optimized Architectures, Design Secure Applications and Architectures, Design Resilient Architectures, AWS vs Azure vs Google Cloud, etc.

The resources section covers the following areas: certification, AWS training, mock exam preparation tips, cloud architect training, cloud architect knowledge, cloud technology, cloud certification, cloud exam preparation tips, cloud solutions architect associate exam, certification practice exams, learning AWS for free, question dumps, A Cloud Guru links, Tutorials Dojo links, Linux Academy links, the latest AWS certification tweets, posts from Reddit, Quora, LinkedIn and Medium, AWS certification dumps, Google Cloud, Azure Cloud, cloud comparisons, etc.

Abilities Validated by the Certification:

  • Effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies
  • Define a solution using architectural design principles based on customer requirements
  • Provide implementation guidance based on best practices to the organization throughout the life cycle of the project

Recommended Knowledge for the Certification:

  • One year of hands-on experience designing available, cost-effective, fault-tolerant, and scalable distributed systems on AWS.
  • Hands-on experience using compute, networking, storage, and database AWS services.
  • Hands-on experience with AWS deployment and management services.
  • Ability to identify and define technical requirements for an AWS-based application.
  • Ability to identify which AWS services meet a given technical requirement.
  • Knowledge of recommended best practices for building secure and reliable applications on the AWS platform.
  • An understanding of the basic architectural principles of building in the AWS Cloud.
  • An understanding of the AWS global infrastructure.
  • An understanding of network technologies as they relate to AWS.
  • An understanding of security features and tools that AWS provides and how they relate to traditional services.

Note and disclaimer: We are not affiliated with AWS or Amazon or Microsoft or Google. The questions are put together based on the certification study guide and materials available online. We also receive questions and answers from anonymous users and we vet to make sure they are legitimate. The questions in this app should help you pass the exam but it is not guaranteed. We are not responsible for any exam you did not pass.

Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

What is the AWS Certified Solution Architect Associate Exam?

This exam validates an examinee’s ability to effectively demonstrate knowledge of how to architect and deploy secure and robust applications on AWS technologies. It validates an examinee’s ability to:

  • Define a solution using architectural design principles based on customer requirements.
  • Provide implementation guidance based on best practices to the organization throughout the lifecycle of the project.

There are two types of questions on the examination:

  • Multiple-choice: Has one correct response and three incorrect responses (distractors).
  • Multiple-response: Has two correct responses out of five options.

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Solution Architect Associate info and details

The AWS Certified Solution Architect Associate Exam is a multiple choice, multiple answer exam. Here is the Exam Overview:

Top

Other AWS Facts and Summaries and Questions/Answers Dump

Top

Additional Information for reference

Below are some useful reference links that would help you to learn about AWS Practitioner Exam.

Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap

AWS Solution Architect Associate Exam Whitepapers:

AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers.

Top

Online Training and Labs for AWS Certified Solution Architect Associate Exam

Top

AWS Certified Solution Architect Associate Jobs

AWS Certification and Training Apps for all platforms:

AWS Cloud practitioner FREE version:

AWS Certified Cloud practitioner for the Web: PWA

AWS Certified Cloud practitioner Exam Prep App for iOS

AWS Certified Cloud practitioner Exam Prep App for Microsoft/Windows10

AWS Certified Cloud practitioner Exam Prep App for Android (Google Play Store)

AWS Certified Cloud practitioner Exam Prep App for Android (Amazon App Store)

AWS Certified Cloud practitioner Exam Prep App for Android (Huawei App Gallery)

AWS Solution Architect FREE version:

AWS Certified Solution Architect Associate Exam Prep App for iOS: https://apps.apple.com/ca/app/solution-architect-assoc-quiz/id1501225766

Solution Architect Associate for Android Google Play

AWS Certified Solution Architect Associate Exam Prep App for the Web: PWA

AWS Certified Solution Architect Associate Exam Prep App for Amazon android


AWS Certified Cloud practitioner Exam Prep App for Microsoft/Windows10

AWS Certified Cloud practitioner Exam Prep App for Huawei App Gallery

AWS Cloud Practitioner PRO Versions:

AWS Certified Cloud practitioner PRO Exam Prep App for iOS

AWS Certified Cloud Practitioner PRO Associate Exam Prep App for android google

AWS Certified Cloud practitioner Exam Prep App for Amazon android

AWS Certified Cloud practitioner Exam Prep App for Windows 10

AWS Certified Cloud practitioner Exam Prep PRO App for Android (Huawei App Gallery) Coming soon

AWS Solution Architect PRO

AWS Certified Solution Architect Associate PRO versions for iOS

AWS Certified Solution Architect Associate PRO Exam Prep App for Android google

AWS Certified Solution Architect Associate PRO Exam Prep App for Windows10

AWS Certified Solution Architect Associate PRO Exam Prep App for Amazon android

Huawei App Gallery: Coming soon

AWS Certified Developer Associates Free version:

AWS Certified Developer Associates for Android (Google Play)

AWS Certified Developer Associates Web/PWA

AWS Certified Developer Associates for iOs

AWS Certified Developer Associates for Android (Huawei App Gallery)

AWS Certified Developer Associates for windows 10 (Microsoft App store)

Amazon App Store: Coming soon

AWS Developer Associate PRO version:

PRO version with mock exam for Android (Google Play)

PRO version with mock exam for iOS

AWS Certified Developer Associate PRO for Android (Amazon App Store): Coming soon

AWS Certified Developer Associate PRO for Android (Huawei App Gallery): Coming soon

AWS certification exam quiz apps for all platforms


Below is a listing of AWS certification exam quiz apps for all platforms:

AWS Certified Cloud practitioner Exam Prep FREE version: CCP, CLF-C01

IOS: https://apps.apple.com/ca/app/aws-certified-cloud-pract-prep/id1488832117

Microsoft/Windows10: https://www.microsoft.com/en-ca/p/aws-certified-cloud-practitioner-exam-preparation/9ns1xttj1d5s

Google play: https://play.google.com/store/apps/details?id=com.awscloudpractitonerexamprep.enoumen

Amazon App Store (Android): https://www.amazon.com/dp/B085MFT53J/ref=mp_s_a_1_2?keywords=cloud+practitioner&qid=1583633225&s=mobile-apps&sr=1-2

Web/PWA: https://aws-cloud-practitioner-exam.firebaseapp.com

Cloud Practitioner PRO Versions:

ios: https://apps.apple.com/ca/app/aws-certified-cloud-pract-pro/id1501104845

android google : https://play.google.com/store/apps/details?id=com.awscloudpractitonerexampreppro.enoumen

Amazon android: https://www.amazon.com/dp/B085HGKRMG/ref=pe_385040_118058080_TE_M1DP

Windows 10: https://www.microsoft.com/en-ca/p/aws-certified-cloud-practitioner-exam-preparation-quiz-pro/9phhz236gh4d

AWS Certified Solution Architect Associate Exam Prep FREE version: SAA, SAA-C01, SAA-C02

Google: https://play.google.com/store/apps/details?id=com.awssolutionarchitectassociateexamprep.app

If you are looking for an all-in-one solution to help you prepare for the AWS Cloud Practitioner Certification Exam, look no further than this AWS Cloud Practitioner CCP CLFC01 book below.


iOS: https://apps.apple.com/ca/app/solution-architect-assoc-quiz/id1501225766

Web(All platforms): https://awscertifiedsolutionarchitectexamprep.com/

Invest in your future today by enrolling in this Azure Fundamentals - Microsoft Azure Certification and Training ebook below. This Azure Fundamentals Exam Prep Book will prepare you for the Azure Fundamentals AZ900 Certification Exam.


Amazon android: ‪http://www.amazon.com/dp/B085MG99H9/ref=cm_sw_r_tw_awdm_xs_pqfzEb4HSYJV1

Microsoft/Windows10: https://www.microsoft.com/en-ca/p/aws-certified-solution-architect-associate-exam-prep/9ncch3cgskmp

Solution Architect PRO versions:

Ios: https://apps.apple.com/ca/app/solution-architect-assoc-pro/id1501465417

Android google: https://play.google.com/store/apps/details?id=com.awssolutionarchitectassociateexampreppro.app

Windows10: not available yet

Amazon android: https://www.amazon.com/dp/B085HR898X/ref=pe_385040_118058080_TE_M1DP

AWS Certified Developer Associate Exam Prep: DVA-C01

android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen

iOS: https://apps.apple.com/ca/app/aws-certified-developer-assoc/id1511211095

PRO version with mock exams android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen

With average salary increases of over 25% for certified individuals, you’re going to be in a much better position to secure your dream job or promotion if you earn your AWS Certified Solutions Architect Associate or Cloud Practitioner certification. Get the books below for real practice exams:

Use the promo codes: W6XM9XP4TWN9 or T6K9P4J9JPPR or 9LWMYKJ7TWPN or TN4NTERJYHY4 for AWS CCP eBook at Apple iBook store.


Use Promo Codes XKPHAATA6LRL 4XJRP9XLT9XL or LTFFY6JA33EL or HKRMTMTHFMAM or 4XHAFTWT4FN6 for AWS SAA-C03 eBook at Apple iBook store



Use Promo Codes EF46PT44LXPN or L6L9R9LKEFFR or TWELPA4JFJWM for Azure Fundamentals eBook at Apple iBook store.

PRO version with mock exam iOS: https://apps.apple.com/ca/app/aws-certified-dev-ass-dva-c01/id1506519319



We know you like your hobbies, especially coding. We do too, but you should find time to build the skills that’ll drive your career into six figures. Cloud skills and certifications can be just the thing you need to make the move into the cloud or to level up and advance your career. 85% of hiring managers say cloud certifications make a candidate more attractive. Start your cloud journey with these excellent books below:


2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump

Welcome to AWS Certified Developer Associate Exam Preparation:

Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Other AWS Certificates

#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
  • Design architectures (for example, distributed system, microservices)
  • Design and implement CI/CD pipelines

  • Administer IAM users and groups
  • Administer Amazon Elastic Container Service (Amazon ECS)
  • Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
  • Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS
CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)


1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)


2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit
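
For the encryption-at-rest objective, server-side encryption is typically requested per call. As a sketch, these are the keyword arguments one might pass to an S3 PutObject call (for example via boto3’s `put_object`) to have S3 encrypt the object with a KMS key; the bucket name, object key, and key alias are placeholders:

```python
# Illustrative keyword arguments for an S3 PutObject call requesting
# server-side encryption with a KMS key (the shape used by boto3's
# s3.put_object); bucket name, object key, and key alias are made up.
put_object_kwargs = {
    "Bucket": "my-example-bucket",
    "Key": "reports/2022/summary.csv",
    "Body": b"column1,column2\n",
    # Ask S3 to encrypt the object at rest with the given KMS key.
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/my-app-key",
}
print(put_object_kwargs["ServerSideEncryption"])  # aws:kms
```

Encryption in transit, by contrast, comes from using the HTTPS endpoints that the SDKs use by default.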

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend
authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless models (e.g., microservices, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
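
The capacity-unit arithmetic behind the DynamoDB objective above can be sketched in a few lines of Python. The rules assumed here are the published provisioned-throughput definitions — one RCU covers one strongly consistent read per second of an item up to 4 KB (eventually consistent reads cost half), and one WCU covers one write per second of an item up to 1 KB; the function names are ours:

```python
import math

def read_capacity_units(item_size_kb, reads_per_second, strongly_consistent=True):
    # One RCU = one strongly consistent read/sec of an item up to 4 KB.
    # Eventually consistent reads cost half as much.
    units_per_read = math.ceil(item_size_kb / 4)
    rcu = units_per_read * reads_per_second
    return rcu if strongly_consistent else math.ceil(rcu / 2)

def write_capacity_units(item_size_kb, writes_per_second):
    # One WCU = one write/sec of an item up to 1 KB.
    return math.ceil(item_size_kb / 1) * writes_per_second

# Example: 6 KB items, 10 strongly consistent reads/sec, 5 writes/sec.
print(read_capacity_units(6, 10))   # 2 units per read * 10 reads = 20 RCUs
print(write_capacity_units(6, 5))   # 6 units per write * 5 writes = 30 WCUs
```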

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)


Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.

C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
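
A dead-letter queue (answer C) is wired to the source queue through its RedrivePolicy attribute. Here is a sketch of the attribute payload you might pass to SQS’s SetQueueAttributes action (for example via boto3); the queue ARN is a placeholder:

```python
import json

# Sketch of the queue attributes that attach a dead-letter queue to a
# source queue (the shape used by SQS SetQueueAttributes, e.g. via
# boto3's sqs.set_queue_attributes); the ARN below is a placeholder.
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq"
attributes = {
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        # After 3 failed receives, SQS moves the message to the DLQ
        # instead of returning it to the source queue again.
        "maxReceiveCount": "3",
    })
}
print(attributes["RedrivePolicy"])
```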


Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table, which is also why defining a GSI requires you to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput:

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations: one for the table and another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput. When you query the index, the operation consumes read capacity from the index; when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations: one for the table and another for the index.
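
The index properties described above show up directly in the shape of a CreateTable request. This is a minimal sketch of the parameter dictionary one might pass to boto3’s `create_table`, reusing the Q2 scenario; the table, index, and attribute names are illustrative:

```python
# Illustrative parameters for DynamoDB's CreateTable call (e.g. via a
# boto3 dynamodb client); table, index, and attribute names are made up.
# The GSI gets its own partition/sort key and its own provisioned
# throughput, matching the Q2 answer above.
create_table_params = {
    "TableName": "PurchaseOrders",
    "AttributeDefinitions": [
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "CustomerPurchases",
            "KeySchema": [
                {"AttributeName": "CustomerID", "KeyType": "HASH"},
                {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
            ],
            # Project only what the query needs (TotalPurchaseValue).
            "Projection": {
                "ProjectionType": "INCLUDE",
                "NonKeyAttributes": ["TotalPurchaseValue"],
            },
            # GSIs have their own throughput, separate from the table's.
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
print(create_table_params["GlobalSecondaryIndexes"][0]["IndexName"])
```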


The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:


Q3: When referencing the remaining time left for a Lambda function to run within the function’s code, you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python
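
The two-argument signature can be exercised locally with a stub standing in for the context object Lambda normally supplies (tying Q3 and Q4 together); `FakeContext` and the handler body are ours for illustration:

```python
# Minimal Lambda-style handler, runnable locally. In Lambda the event
# and context arguments are supplied by the service; here we fake a
# context object to show the remaining-time call from Q3.
class FakeContext:
    # Stand-in for the real context object Lambda passes to the handler.
    def get_remaining_time_in_millis(self):
        return 30000

def handler_name(event, context):
    # Real handlers read input from `event` and metadata from `context`.
    remaining = context.get_remaining_time_in_millis()
    return {"name": event.get("name", "world"), "remaining_ms": remaining}

print(handler_name({"name": "dev"}, FakeContext()))
# {'name': 'dev', 'remaining_ms': 30000}
```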

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell
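
A deployment package is just a zip archive of your function code plus any libraries not already included in the runtime. As a sketch, Python’s standard zipfile module can build one in memory; the file names here are illustrative:

```python
import io
import zipfile

# Sketch of building a Lambda deployment package in memory: the zip
# holds your function code plus any libraries not already in the
# runtime (per Q6). File names here are illustrative.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("lambda_function.py",
                "def handler(event, context):\n    return 'ok'\n")
    zf.writestr("mylib/__init__.py", "")  # bundled helper/third-party code

# Re-open the archive to confirm its contents.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    print(zf.namelist())  # ['lambda_function.py', 'mylib/__init__.py']
```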

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Q11: You’re writing a script with an AWS SDK that uses the AWS API actions to create AMIs for non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. It is RegisterImage. AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. The best practice is to keep session state out of the instances: the ELB distributes traffic to all EC2 instances, and each instance looks up session information in a shared cache such as ElastiCache running Redis or Memcached. Sticky sessions (options C and D) tie users to specific instances and lose sessions when an instance fails.

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 
 

C. Instance-store backed images use “ephemeral” (temporary) storage that is only available during the life of the instance. Rebooting an instance allows ephemeral data to persist, but stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After creating a new Linux instance on Amazon EC2 and downloading the key file (called my_key.pem), you try to SSH into the instance’s IP address (52.2.222.22) using the following command.
ssh -i my_key.pem ec2-user@52.2.222.22
However you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 my_key.pem

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region where they are created. Even the AWS-provided AMIs have simply been copied by AWS into each region for you. You cannot use an AMI from one region in another region; however, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are optional, not required, fields in IAM policies.

Reference: AWS IAM Access Policies
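For illustration, here is a minimal policy written as a Python dict in the standard IAM JSON shape (the bucket name in the ARN is a placeholder):

```python
import json

# Minimal identity-based policy: Effect, Action, and Resource are required
# in each statement; Id and Sid are optional.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],  # placeholder ARN
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```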

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: creating custom permission policies is a benefit of IAM (specifically IAM policies) in general. IAM groups don’t create policies; they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. AWS STS issues temporary, limited-privilege credentials, so you can grant access to AWS resources without creating an IAM identity for the caller, and because the credentials expire automatically there is nothing to rotate or revoke. The credentials cannot be extended indefinitely, and they are not restricted to a specific region.

Top

Q22: What should the Developer enable on a read-heavy DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1.micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1.micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved Configurations
    Settings for any options that are not applied directly to the
    environment are loaded from a saved configuration, if specified.
  • Configuration Files (.ebextensions)– Settings for any options that are not applied directly to the
    environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle.

     

    Configuration files are executed in alphabetical order. For example, .ebextensions/01run.config is executed before .ebextensions/02do.config.

  • Default Values– If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment’s configuration; such settings can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting is loaded again from the configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
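The precedence walk above can be sketched as a simple lookup from highest to lowest priority (the option names and values here are illustrative, not real Beanstalk keys):

```python
# Resolve a configuration option across Beanstalk's four sources.
# Sources are checked from highest to lowest precedence; first hit wins.
defaults        = {"InstanceType": "t1.micro", "MinSize": "1"}  # option defaults
ebextensions    = {"MinSize": "2"}              # .ebextensions files in the bundle
saved_config    = {"InstanceType": "m4.large"}  # saved configuration in S3
direct_settings = {}                            # nothing applied directly here

def resolve(option):
    """Return the effective value of a configuration option."""
    for source in (direct_settings, saved_config, ebextensions, defaults):
        if option in source:
            return source[option]
    return None
```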

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS Global Infrastructure

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. This is stated very clearly in the AWS documentation on DynamoDB read consistency: only with strongly consistent reads are you guaranteed to get the most recent value, reflecting all writes that completed before the read.

Reference: https://aws.amazon.com/dynamodb/faqs/


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers, and you need to minimize the effort of migration. Which of the following steps would you consider for this requirement?

  • A. Create an OpsWorks stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. Elastic Beanstalk is the ideal service for quickly provisioning development environments, and it can also create environments that host Docker-based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
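The three-step flow can be illustrated with a toy in-memory sketch (byte slicing stands in for the real S3 API calls; the 5 MB constant is S3’s actual minimum part size for all parts except the last):

```python
PART_SIZE = 5 * 1024 * 1024  # S3's minimum part size (except the last part)

def multipart_upload(data: bytes, part_size: int = PART_SIZE) -> bytes:
    # Step 1: initiate (a real upload would receive an UploadId here)
    parts = {}
    # Step 2: upload each part, tagged with its part number
    for number, start in enumerate(range(0, len(data), part_size), start=1):
        parts[number] = data[start:start + part_size]
    # Step 3: complete - S3 assembles the parts in part-number order
    return b"".join(parts[n] for n in sorted(parts))

reassembled = multipart_upload(b"abcdefghij", part_size=4)  # 3 parts: 4 + 4 + 2
```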


Top

Q29: A security system monitors 600 cameras, saving image metadata to an Amazon DynamoDB table every minute. Each sample is 1 KB of data, and the writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. DynamoDB write capacity is specified as the number of 1 KB writes per second. Since each write happens once per minute, divide 600 by 60 to get the number of 1 KB writes per second: 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
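The arithmetic can be written out as a small helper (a sketch, assuming the standard 1 KB-per-WCU rule described above):

```python
import math

def write_capacity_units(items_per_minute: int, item_size_kb: float) -> int:
    """One WCU = one write of up to 1 KB per second."""
    writes_per_second = items_per_minute / 60
    units_per_write = math.ceil(item_size_kb / 1.0)  # round up to whole 1 KB units
    return math.ceil(writes_per_second * units_per_write)

# 600 cameras, one 1 KB sample per camera per minute
required_wcu = write_capacity_units(items_per_minute=600, item_size_kb=1)
```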

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

    def handler_name(event, context):
        return some_value

Reference: AWS Lambda Function Handler in Python
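A minimal runnable handler, invoked locally with a stand-in context (the event payload here is made up for illustration):

```python
# Minimal handler: `event` carries the invocation payload, `context`
# carries runtime metadata (function name, remaining time, and so on).
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, context can be anything - Lambda supplies the real object.
result = handler({"name": "Alice"}, None)
```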

Top


Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell
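As a sketch, a deployment package is just a zip of the function code plus any vendored libraries the runtime does not already provide; here one is built in memory (mylib is a hypothetical dependency):

```python
import io
import zipfile

# Build a deployment package in memory: function code plus a vendored library.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as pkg:
    pkg.writestr("lambda_function.py",
                 "def handler(event, context):\n    return 'ok'\n")
    # a vendored dependency (hypothetical module, for illustration)
    pkg.writestr("mylib/__init__.py", "VERSION = '1.0'\n")

buf.seek(0)
package_contents = zipfile.ZipFile(buf).namelist()
```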

Top




Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff

Answer:


Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A. Long polling ensures the application makes fewer requests for messages over a given period, which is more cost-effective. Since the messages only become available after 15 seconds or more and we don't know exactly when, long polling is the better choice.
Reference: Amazon SQS Long Polling

Top

Q33: You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new one in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted after 5 minutes.
Reference: Gradual Code Deployment
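A rough way to compare the options is to compute minutes until all traffic is shifted (this treats the final shift as landing at the last interval; CodeDeploy's exact timing may differ slightly):

```python
def minutes_to_full_shift(preference: str) -> int:
    if preference.startswith("Canary"):
        # Canary<X>Percent<N>Minutes: X% immediately, the rest after N minutes
        return int(preference.split("Percent")[1].rstrip("Minutes"))
    # Linear<X>PercentEvery<N>Minutes: X% every N minutes until 100%
    pct, interval = preference[len("Linear"):].split("PercentEvery")
    return (100 // int(pct)) * int(interval.rstrip("Minutes"))

times = {p: minutes_to_full_shift(p) for p in [
    "Canary10Percent5Minutes", "Linear10PercentEvery10Minutes",
    "Canary10Percent15Minutes", "Linear10PercentEvery1Minute",
]}
```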

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS Envelope Encryption for encrypting all sensitive data. Which of the followings is True with regards to Envelope Encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With Envelope Encryption, unencrypted data is encrypted using a plaintext Data key. That Data key is then itself encrypted using a plaintext Master key. The plaintext Master key is securely stored in AWS KMS and is known as a Customer Master Key.
Reference: AWS Key Management Service Concepts
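A toy sketch of the envelope pattern, using XOR purely for illustration (real envelope encryption uses AES keys managed by KMS):

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only; never use in production
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = os.urandom(16)  # in KMS, this key never leaves the service
data_key = os.urandom(16)    # plaintext data key, generated per object

# Encrypt the data with the plaintext data key, then encrypt the data key
# itself under the master key; store both, discard the plaintext data key.
ciphertext = xor_cipher(b"sensitive record", data_key)
encrypted_data_key = xor_cipher(data_key, master_key)

# Decrypt: recover the data key with the master key, then decrypt the data.
recovered_key = xor_cipher(encrypted_data_key, master_key)
plaintext = xor_cipher(ciphertext, recovered_key)
```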

Top

 

Q36: You are developing an application that will be comprised of the following architecture –

  1. A set of EC2 instances to process the videos.
  2. These EC2 instances will be spun up by an autoscaling group.
  3. SQS Queues to maintain the processing messages.
  4. There will be 2 pricing tiers.

How will you ensure that the premium customers videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 sets of EC2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option is to create 2 SQS queues; the application can then process messages from the high-priority queue first. The other options are not ideal, as they would add extra cost and maintenance.
Reference: SQS
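The two-queue preference can be sketched as a worker that always drains the premium queue first (the job names are made up):

```python
from collections import deque

# Two queues: the worker drains the premium queue before the standard one.
premium = deque(["video-p1", "video-p2"])
standard = deque(["video-s1"])
processed = []

def poll_next():
    """Return the next video to process, premium first."""
    if premium:
        return premium.popleft()
    if standard:
        return standard.popleft()
    return None

while (job := poll_next()) is not None:
    processed.append(job)
```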

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes, that is, attributes with distinct values for each item, such as e-mail ID, employee number, customer ID, session ID, or order ID.
You can also use composite attributes: combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key
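The cardinality argument can be checked directly on sample items (the data here is made up):

```python
# Compare candidate partition keys by cardinality over sample items:
# the more distinct values, the more evenly DynamoDB spreads the load.
items = [
    {"CustomerID": "c1", "Location": "NYC", "Age": 30},
    {"CustomerID": "c2", "Location": "NYC", "Age": 30},
    {"CustomerID": "c3", "Location": "SFO", "Age": 41},
    {"CustomerID": "c4", "Location": "NYC", "Age": 30},
]

def cardinality(attribute: str) -> int:
    """Number of distinct values the attribute takes across the items."""
    return len({item[attribute] for item in items})

best_key = max(["CustomerID", "Location", "Age"], key=cardinality)
```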

Top

Q38: A developer is making use of AWS services to develop an application. He has been asked to develop the application in a manner that compensates for network delays. Which two of the following mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval as well as a maximum number of retries. Neither is necessarily a fixed value; both should be set based on the operation being performed and other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
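A sketch of exponential backoff with full jitter, a capped delay, and a bounded retry count (the base and cap values are arbitrary):

```python
import random

def backoff_delays(max_retries: int, base: float = 0.1, cap: float = 5.0):
    """Exponential backoff with full jitter and a maximum delay interval."""
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * 2 ** attempt)   # double the wait, up to the cap
        delays.append(random.uniform(0, ceiling))  # jitter spreads out retries
    return delays

delays = backoff_delays(max_retries=6)
```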

Top

 

Q39: An application is being developed that is going to write data to a DynamoDB table. You have to setup the read and write throughput for the table. Data is going to be read at the rate of 300 items every 30 seconds. Each item is of size 6KB. The reads can be eventual consistent reads. What should be the read capacity that needs to be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, (300/30) = 10 items are read every second.
Since each item is 6 KB, each item requires 2 read units (a read unit covers up to 4 KB).
That gives a total of 2 * 10 = 20 reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
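The same arithmetic as a helper (a sketch, assuming the 4 KB read-unit and half-cost eventual-consistency rules above):

```python
import math

def read_capacity_units(items_per_second: float, item_size_kb: float,
                        eventually_consistent: bool = True) -> int:
    """One RCU = one strongly consistent read of up to 4 KB per second;
    eventually consistent reads cost half as much."""
    units_per_item = math.ceil(item_size_kb / 4)  # round up to whole 4 KB units
    rcu = items_per_second * units_per_item
    return math.ceil(rcu / 2) if eventually_consistent else math.ceil(rcu)

# 300 items per 30 seconds = 10 items/s, each 6 KB, eventual consistency
required_rcu = read_capacity_units(300 / 30, 6)
```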


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static web site is hosted in an S3 bucket and is being accessed by users. The JavaScript on one of the web pages has been changed to access data hosted in another S3 bucket, and that page no longer loads in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
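For illustration, here are CORS rules for Scenario 1 in the dict shape that boto3's put_bucket_cors accepts (the bucket name and origin come from the scenario's example):

```python
# CORS configuration allowing the website origin to make GET and PUT
# requests against the bucket via the S3 API endpoint.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
        }
    ]
}
# Applied with: s3.put_bucket_cors(Bucket="website",
#                                  CORSConfiguration=cors_configuration)
```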


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated via the user id.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Autoscale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application which is configured to pull down your log files stored in Amazon S3
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.

Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis
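A producer for this design can be sketched as follows. The stream name, event shape, and partition-key choice are illustrative assumptions; partitioning on the user ID keeps one user's events together on a shard, which suits a "top 10 users" aggregation.

```python
import json

def build_put_record(stream_name, log_event, user_id):
    """Build the parameter dict for a Kinesis PutRecord call (names are
    illustrative). Data must be bytes; PartitionKey controls shard routing."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(log_event).encode("utf-8"),
        "PartitionKey": user_id,
    }

# With boto3 this would be sent as:
#   kinesis = boto3.client("kinesis")
#   kinesis.put_record(**build_put_record("web-logs", {"path": "/home"}, "user-1"))
```

A consumer application (or Kinesis Data Analytics) subscribed to the stream can then maintain the running per-user counts in real time.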

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift


Answer – B
DynamoDB is an ideal data store for session management: it provides consistent, low-latency access to data at any scale, so session state can be read and written quickly on every request.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB
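A session item for such a table can be sketched as below. The table layout, attribute names, and one-hour lifetime are illustrative assumptions; the `expires_at` attribute can drive DynamoDB's TTL feature so stale sessions are removed automatically.

```python
import time
import uuid

SESSION_TTL_SECONDS = 3600  # assumed one-hour session lifetime

def build_session_item(user_id, data, now=None):
    """Build a DynamoDB session item (illustrative schema). 'expires_at' is an
    epoch timestamp suitable for a DynamoDB TTL attribute."""
    now = int(now if now is not None else time.time())
    return {
        "session_id": str(uuid.uuid4()),  # partition key
        "user_id": user_id,
        "data": data,
        "expires_at": now + SESSION_TTL_SECONDS,
    }

# With boto3 this would be stored as:
#   boto3.resource("dynamodb").Table("sessions").put_item(
#       Item=build_session_item("u1", {"cart": []}))
```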

Q45: Your application currently interacts with a DynamoDB table. Records are inserted into the table via the application. There is now a requirement to ensure that whenever items are updated in the DynamoDB primary table, another record is inserted into a secondary table. Which of the following features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. It considers how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block. A transaction can have only two states: success or failure. In other words, there is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic the transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation. This is partly because the library holds metadata to manage the transactions to ensure that they are consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint. To read and process DynamoDB Streams records, your application must access a DynamoDB Streams endpoint in the same Region.

All of the other options are incorrect since none of them would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
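The usual wiring is a Lambda function subscribed to the stream that copies changed items into the secondary table. The record-handling part can be sketched as a pure function (the event shape follows the DynamoDB Streams record format; the Lambda and secondary-table wiring are omitted here as they require live AWS resources):

```python
def extract_updates(stream_event):
    """Pull the new item images out of a DynamoDB Streams event so they can be
    written to a secondary table. Only INSERT/MODIFY records carry a NewImage;
    REMOVE records are skipped."""
    items = []
    for record in stream_event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY"):
            items.append(record["dynamodb"]["NewImage"])
    return items

# Inside a Lambda handler, each returned image would then be put into the
# secondary table, e.g. secondary_table.put_item(Item=deserialized_image).
```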



Q46: An application has been making use of AWS DynamoDB for its back-end data store. The size of the table has now grown to 20 GB, and the scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table's provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan operation by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request that has a smaller page size uses fewer read operations and creates a "pause" between each request.

For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations would allow your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
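A paginated scan with a reduced page size can be sketched as a generator. Here `table` stands for any object with a boto3-style `Table.scan()` method, so the same loop works against a real table or a test stub; the page size of 40 is an illustrative choice.

```python
def scan_in_pages(table, page_size=40):
    """Scan a table in small pages via the Limit parameter, yielding items one
    at a time. Passing LastEvaluatedKey back as ExclusiveStartKey resumes the
    scan where the previous page stopped."""
    kwargs = {"Limit": page_size}
    while True:
        page = table.scan(**kwargs)
        yield from page.get("Items", [])
        last_key = page.get("LastEvaluatedKey")
        if not last_key:
            break  # no more pages
        kwargs["ExclusiveStartKey"] = last_key
```

Because each request reads at most `page_size` items, read activity is spread out over time instead of arriving as one large burst.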



Q47: Which of the following are correct ways of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases:

  • A full URI without protocol
  • A full domain
  • A subdomain
  • A path
  • A query string

In this case, option B uses a stage variable as a path, and option C uses one as a subdomain.
Reference: Amazon API Gateway Stage Variables Reference

Q48: Your company is planning on creating new development environments in AWS. It wants to make use of the existing Chef recipes it uses for its on-premises server configuration. Which of the following services would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks


Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should never be exposed to client code.
Reference: About Web Identity Federation
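The token exchange in this answer corresponds to the STS AssumeRoleWithWebIdentity API. A minimal sketch of the request parameters follows; the role ARN and session name are illustrative, and `id_token` stands for the token returned by Google/Facebook after sign-in.

```python
def build_web_identity_request(role_arn, id_token, session_name="app-user"):
    """Parameters for STS AssumeRoleWithWebIdentity: the browser app trades
    the IdP token for temporary AWS credentials, so no long-term keys ever
    appear in the client-side JavaScript."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": id_token,
        "DurationSeconds": 900,
    }

# With boto3: boto3.client("sts").assume_role_with_web_identity(
#     **build_web_identity_request(role_arn, id_token))
```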

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since Cognito Streams is the feature designed for this purpose.
Reference: Amazon Cognito Streams


Q51: You've developed a set of scripts using AWS Lambda. These scripts need to access EC2 instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC? Choose 2 answers from the options given below.

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC


Q52: You've currently been tasked to migrate an existing on-premises environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can't see any relevant environments in the Beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using CloudFormation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a custom image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn't provide a platform out of the box.

Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform's software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance.

To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems (Ubuntu, RHEL, or Amazon Linux; see the flavor entry in Platform.yaml File Format for the exact version numbers) and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Q53: Company B is writing 10 items to the Dynamo DB table every second. Each item is 15.5Kb in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
One write capacity unit (WCU) supports one write per second for an item up to 1 KB in size; larger items consume additional WCUs, with the item size rounded up to the next whole kilobyte. Each 15.5 KB item therefore needs 16 WCUs per write, and 10 writes per second require 10 × 16 = 160 WCUs.
Reference: Read/Write Capacity Mode
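The calculation above can be written out as a small helper, which makes it easy to check other item sizes and write rates from the same rule:

```python
import math

def required_wcu(items_per_second, item_size_kb):
    """One write capacity unit covers one write per second of an item up to
    1 KB; larger items consume ceil(size_kb) units per write."""
    return items_per_second * math.ceil(item_size_kb)

# 10 items per second at 15.5 KB each:
# required_wcu(10, 15.5) -> 160
```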


Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy



Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the --disable-rollback flag with the AWS CLI
  • D. Use the --enable-termination-protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted
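The same options exist in the CreateStack API, so the CLI flags above map onto request parameters. A minimal sketch (stack name and template URL are illustrative):

```python
def build_create_stack_request(stack_name, template_url, keep_failed=True):
    """Parameters for CloudFormation CreateStack: DisableRollback keeps failed
    resources around for debugging (the API equivalent of --disable-rollback),
    and EnableTerminationProtection guards the stack against deletion."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "DisableRollback": keep_failed,
        "EnableTerminationProtection": True,
    }

# With boto3: boto3.client("cloudformation").create_stack(
#     **build_create_stack_request("my-stack", "https://s3.amazonaws.com/bucket/template.yml"))
```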

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A

Reference: What is Continuous Integration?

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML. (The JSON format is supported only for AWS Lambda and Amazon ECS deployments.)

Reference: CodeDeploy AppSpec File Reference

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B

Reference: Troubleshooting AWS CloudFormation


Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit



Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to return user-defined data relating to the resources you have built, and those values can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

Reference: AWS CodePipeline



Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to return user-defined data relating to the resources you have built, and those values can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs


Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and to specify the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.

Reference: CodeDeploy AppSpec File Reference


Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a CloudFormation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q70: Which command can you use to encrypt a plain text file using CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Q72: Which of the following is an encrypted key used by KMS to encrypt your data?

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer Master Key is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts
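The envelope-encryption flow described above starts with the KMS GenerateDataKey API. A minimal sketch of the request and the surrounding flow follows; the key alias is illustrative, and the KMS calls themselves are shown only in comments since they require live credentials.

```python
def build_generate_data_key_request(cmk_id):
    """Parameters for KMS GenerateDataKey: KMS returns the data key both in
    plaintext (used to encrypt your data locally, then discarded) and
    encrypted under the CMK (stored alongside your ciphertext)."""
    return {"KeyId": cmk_id, "KeySpec": "AES_256"}

# Envelope encryption flow (sketch):
# 1. resp = kms.generate_data_key(**build_generate_data_key_request("alias/my-key"))
# 2. Encrypt the file locally with resp["Plaintext"], then discard that key.
# 3. Store resp["CiphertextBlob"] (the encrypted data key) next to the file.
# 4. To decrypt later: kms.decrypt(CiphertextBlob=...) returns the plaintext
#    data key, which is then used to decrypt the file.
```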

Q74: Which of the following statements are correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can initiate the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the newest published version.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment
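The alias setup in the answer above maps onto the Lambda UpdateAlias API. A minimal sketch of the two requests (function name and version are illustrative; with boto3 each dict would be passed to `lambda_client.update_alias(**kwargs)`):

```python
def build_alias_updates(function_name, new_version):
    """Requests for the Lambda UpdateAlias API: Production points at the newly
    published, tested version; Development tracks $LATEST so developers always
    hit the newest code."""
    return [
        {"FunctionName": function_name, "Name": "Production",
         "FunctionVersion": new_version},
        {"FunctionName": function_name, "Name": "Development",
         "FunctionVersion": "$LATEST"},
    ]
```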

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security
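With SSE-C, each upload must carry the customer key and its MD5 digest; the sketch below builds the values as defined by the S3 REST API (bucket and key names are illustrative, and note that SDKs such as boto3 may perform the base64 encoding for you when given raw key bytes).

```python
import base64
import hashlib
import os

def build_sse_c_put(bucket, key, body, customer_key=None):
    """Parameters for an S3 PutObject request using SSE-C: the caller supplies
    a 256-bit key; S3 encrypts the object with it and never stores the key.
    The key and its MD5 digest travel base64-encoded with the request."""
    customer_key = customer_key or os.urandom(32)  # 256-bit key
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "SSECustomerAlgorithm": "AES256",
        "SSECustomerKey": base64.b64encode(customer_key).decode(),
        "SSECustomerKeyMD5": base64.b64encode(
            hashlib.md5(customer_key).digest()).decode(),
    }
```

The same key must be supplied again on every GetObject call, which is how the company retains full control of the key material while S3 performs the encryption.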

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

Question: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define an AWS Step Functions task for each Lambda function.
B. Define an AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.
Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow: it expresses a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions: https://aws.amazon.com/step-functions/getting-started/
Category: Development
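
The one-state-machine-per-workflow, one-task-per-function pattern can be sketched as an Amazon States Language definition (the workflow, state names, and Lambda ARNs are illustrative):

```python
import json

# A minimal Amazon States Language (ASL) workflow: one state machine per
# workflow, one Task state per Lambda function. The ARNs are placeholders.
order_workflow = {
    "Comment": "Order processing workflow (illustrative)",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargePayment",
        },
        "ChargePayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-payment",
            "End": True,   # terminal state ends the workflow
        },
    },
}

assert order_workflow["StartAt"] == "ValidateOrder"
print(json.dumps(order_workflow, indent=2))
```

Step Functions coordinates the Lambda invocations itself, which is why option C (writing polling code) is unnecessary.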

Welcome to AWS Certified Developer Associate Exam Preparation: Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates

AWS Developer Associate DVA-C01 Exam Prep

#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines

– Administer IAM users and groups
– Administer Amazon Elastic Container Service (Amazon ECS)
– Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
– Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
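
The capacity-unit computation in objective 3.1 follows fixed rules: one read capacity unit is one strongly consistent read per second of an item up to 4 KB (eventually consistent reads use half), and one write capacity unit is one write per second of an item up to 1 KB. A minimal sketch of that arithmetic:

```python
import math

def read_capacity_units(item_size_kb: float, reads_per_second: int,
                        strongly_consistent: bool = True) -> int:
    # One RCU = one strongly consistent read/second of an item up to 4 KB;
    # eventually consistent reads consume half an RCU.
    units_per_read = math.ceil(item_size_kb / 4)
    total = units_per_read * reads_per_second
    return total if strongly_consistent else math.ceil(total / 2)

def write_capacity_units(item_size_kb: float, writes_per_second: int) -> int:
    # One WCU = one write/second of an item up to 1 KB (rounded up).
    return math.ceil(item_size_kb) * writes_per_second

# 6 KB items: ceil(6/4) = 2 RCUs per strongly consistent read.
assert read_capacity_units(6, 10) == 20
assert read_capacity_units(6, 10, strongly_consistent=False) == 10
# 1.5 KB items round up to 2 KB per write.
assert write_capacity_units(1.5, 10) == 20
```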

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
–  Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
–  Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Service (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.

C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
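
The dead-letter queue behavior is driven by the queue's redrive policy; a sketch of its shape (the DLQ ARN is a placeholder, and SetQueueAttributes expects the policy serialized as a JSON string):

```python
import json

# SQS redrive policy: after maxReceiveCount failed receives, a message is
# moved to the dead-letter queue instead of returning to the source queue.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:commands-dlq",
    "maxReceiveCount": 3,   # matches "Maximum Receives to 3" in option C
}

# The SQS SetQueueAttributes API takes the policy as a JSON string.
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}

assert json.loads(attributes["RedrivePolicy"])["maxReceiveCount"] == 3
```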


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes
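
The winning index from answer C has a definite shape when passed to DynamoDB's CreateTable/UpdateTable APIs; a sketch (the index name is illustrative):

```python
# Global secondary index for the purchase-order table: partition key
# CustomerID, sort key PurchaseDate, projecting TotalPurchaseValue.
gsi = {
    "IndexName": "CustomerID-PurchaseDate-index",   # illustrative name
    "KeySchema": [
        {"AttributeName": "CustomerID", "KeyType": "HASH"},    # partition key
        {"AttributeName": "PurchaseDate", "KeyType": "RANGE"}, # sort key
    ],
    "Projection": {
        "ProjectionType": "INCLUDE",
        "NonKeyAttributes": ["TotalPurchaseValue"],  # projected attribute
    },
}

assert gsi["KeySchema"][0]["AttributeName"] == "CustomerID"
```

Querying this index by CustomerID with a range condition on PurchaseDate returns exactly the projected values needed to sum TotalPurchaseValue.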

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes define a new paradigm: different hash/range keys per index.
      This breaks the original usage of one hash key per table, which is also why, when defining a GSI, you are required to add provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput, when you query the index the operation will consume read capacity from the index, when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table another for the index*.


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

Q3: To reference the time remaining for a Lambda function to run, from within the function’s code, you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context
def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python
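
The handler signature from Q4 and the context object from Q3 can be exercised locally with a stub context (FakeContext is a test double, not part of any SDK; the real Lambda runtime supplies a context whose get_remaining_time_in_millis() counts down):

```python
# A Python Lambda handler takes (event, context). The context object exposes
# get_remaining_time_in_millis(); here we mimic it with a stub for local runs.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "greeting": f"hello {name}",
        "ms_left": context.get_remaining_time_in_millis(),
    }

class FakeContext:
    def get_remaining_time_in_millis(self):
        return 3000  # stubbed value; Lambda supplies the real countdown

result = handler({"name": "dev"}, FakeContext())
assert result == {"greeting": "hello dev", "ms_left": 3000}
```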

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell
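
A deployment package is just a zip of the function code plus any libraries not already in the runtime; a minimal in-memory sketch (file names are illustrative):

```python
import io
import zipfile

# Build a Lambda-style deployment package in memory: the function code plus
# a bundled dependency not included in the runtime (names are illustrative).
source = "def handler(event, context):\n    return 'ok'\n"

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("lambda_function.py", source)  # function code
    zf.writestr("mylib/__init__.py", "")       # bundled library (placeholder)

# Verify the archive contains the handler module.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    assert "lambda_function.py" in zf.namelist()
```

The resulting zip can then be uploaded directly or placed in S3, per Q5's answer D.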

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless, so you MUST have an explicit outbound rule configured for return requests.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top

Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Top

Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs from non-EBS-backed instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. RegisterImage. AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage
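
The SDKs expose these PascalCase API actions as snake_case method names (for example, boto3's register_image). A simplified converter illustrating the mapping (it ignores the acronym handling real SDK generators perform, so treat it as a sketch):

```python
import re

def action_to_method(action: str) -> str:
    """Convert a PascalCase API action name to a snake_case SDK method name.

    Simplified sketch: inserts '_' before each interior capital letter;
    real SDK code generators also special-case acronyms.
    """
    return re.sub(r"(?<!^)(?=[A-Z])", "_", action).lower()

assert action_to_method("RegisterImage") == "register_image"
assert action_to_method("DescribeImages") == "describe_images"
```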

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permenantly assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance


A. Session affinity (sticky sessions, as in B, C, and D) ties users to specific instances, which undermines elasticity and fault tolerance. Keeping session state in a shared cache such as ElastiCache running Redis or Memcached lets the instances stay stateless, so the ELB can distribute traffic to any of them.

Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance store-backed instances use “ephemeral” (temporary) storage that is only available during the life of the instance. Rebooting an instance allows ephemeral data to persist, but stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After creating a new Linux instance on Amazon EC2 and downloading the .pem file (called Toto.pem), you try to SSH into your instance’s IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However, you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem
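
The permissions fix can be demonstrated locally: chmod 400 leaves the key readable only by its owner, which is what SSH requires. A small sketch using a temporary file in place of the real .pem:

```python
import os
import stat
import tempfile

# SSH refuses private keys that other users can read. chmod 400 makes the
# key read-only for the owner; shown here on a throwaway temp file.
fd, key_path = tempfile.mkstemp(suffix=".pem")
os.close(fd)

os.chmod(key_path, 0o400)  # equivalent of: chmod 400 Toto.pem
mode = stat.S_IMODE(os.stat(key_path).st_mode)
assert mode == 0o400

# Clean up (restore write permission first for portability).
os.chmod(key_path, 0o600)
os.remove(key_path)
```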

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region they are created. Even in the case of the AWS-provided AMIs, AWS has actually copied the AMIs for you to different regions. You cannot access an AMI from one region in another region. However, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes


A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principal
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are optional, not required, fields in IAM policies.

Reference: AWS IAM Access Policies

Top


Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: This is a benefit of IAM generally or a benefit of IAM policies. But IAM groups don’t create policies, they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B.

Temporary credentials issued by STS expire on their own, so they don’t have to be rotated or revoked, and they let you grant access to AWS resources without creating an IAM identity for each user. They cannot be extended indefinitely, and they are not restricted to a specific region.

Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

 

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1 micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence.
Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment’s configuration and can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting is loaded from the configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
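The precedence order above can be sketched as a simple lookup that checks each source from highest to lowest precedence. This is a conceptual model only, not the Beanstalk API; the option values are illustrative.

```python
# Conceptual model of Elastic Beanstalk option precedence:
# direct settings > saved configuration > .ebextensions files > defaults.
def resolve_option(name, direct, saved, ebextensions, defaults):
    for source in (direct, saved, ebextensions, defaults):
        if name in source:
            return source[name]
    return None

direct = {}                                 # nothing set directly on the environment
saved = {"InstanceType": "m4.large"}        # saved configuration (as in option B)
ebextensions = {"InstanceType": "t1.micro"}
defaults = {"InstanceType": "t2.micro"}

# The saved configuration wins because nothing is set directly.
print(resolve_option("InstanceType", direct, saved, ebextensions, defaults))  # m4.large
```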

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region


B and D.

Reference: AWS Global Infrastructure

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. The AWS documentation is clear about read consistency in DynamoDB: only with strongly consistent reads are you guaranteed to get the most up-to-date value, reflecting all prior successful writes.

Reference: https://aws.amazon.com/dynamodb/faqs/


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. The environment consists mainly of Docker-based containers, and the migration must require minimal effort. Which of the following steps would you consider for this requirement?

  • A. Create an Opswork stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. The Elastic Beanstalk service is the ideal service to quickly provision development environments. You can also create environments that host Docker-based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects to an S3 bucket. The objects vary between 200 and 500 MB in size. You’ve seen that the application sometimes takes longer than expected to upload an object, and you want to improve its performance. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket



C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
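A quick sketch of how many parts a multipart upload would use for a given part size. The 5 MB minimum part size (except for the last part) and the 10,000-part maximum are documented S3 limits; the sizes below are illustrative.

```python
import math

MIN_PART_SIZE = 5 * 1024 * 1024   # S3 minimum part size (except the last part)
MAX_PARTS = 10_000                # S3 maximum number of parts per upload

def part_count(object_size, part_size):
    """Number of parts a multipart upload would use for this object."""
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below the 5 MB S3 minimum")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts

# A 500 MB object uploaded in 50 MB parts needs 10 parts, which can be
# uploaded in parallel to improve throughput.
print(part_count(500 * 1024 * 1024, 50 * 1024 * 1024))  # 10
```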


Top

Q29: A security system monitors 600 cameras, saving image metadata to an Amazon DynamoDB table every minute. Each sample is 1 KB of data, and the writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600


B. The write capacity of a DynamoDB table is specified as the number of 1 KB writes per second. Since each camera writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second, which gives a value of 10.

You can specify the Write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
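The calculation above can be sketched as a small helper. One write capacity unit covers one write of up to 1 KB per second, per the DynamoDB capacity model.

```python
import math

def write_capacity_units(writes_per_period, period_seconds, item_size_kb):
    """Provisioned WCUs: one unit covers one write of up to 1 KB per second."""
    writes_per_second = writes_per_period / period_seconds
    units_per_write = math.ceil(item_size_kb / 1)  # 1 KB per write capacity unit
    return math.ceil(writes_per_second * units_per_write)

# 600 cameras writing 1 KB once per minute -> 10 WCUs.
print(write_capacity_units(600, 60, 1))  # 10
```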

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

    def handler_name(event, context):
        return some_value

Reference: AWS Lambda Function Handler in Python
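A slightly fuller sketch of a Python handler. The event field (`name`) is a hypothetical payload key; locally you can invoke the handler with a plain dict for the event and `None` for the context.

```python
# A Lambda handler receives the invocation payload as `event` (usually a
# dict) and runtime metadata as `context`. The "name" field is hypothetical.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation for testing: any dict for event, None for context.
print(handler({"name": "Djamgatech"}, None))  # -> {'statusCode': 200, 'body': 'Hello, Djamgatech!'}
```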

Top

Q31: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere
Reference: AWS Lambda Deployment Package

Top

Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell

Top

Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Reference: AWS Network Address Translation Gateway

Top

Q34: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.
Reference: AWS Autoscalling

Top


Q31: An organization is using an Amazon ElastiCache cluster in front of their Amazon RDS instance. The organization would like the Developer to implement logic into the code so that the cluster only retrieves data from RDS when there is a cache miss. What strategy can the Developer implement to achieve this?

  • A. Lazy loading
  • B. Write-through
  • C. Error retries
  • D. Exponential backoff


Answer – A
Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. All other options are incorrect.
Reference: Caching Strategies

Top

Q32: A developer is writing an application that will run on EC2 instances and read messages from an SQS queue. The messages will arrive every 15–60 seconds. How should the Developer efficiently query the queue for new messages?

  • A. Use long polling
  • B. Set a custom visibility timeout
  • C. Use short polling
  • D. Implement exponential backoff


Answer – A
Long polling ensures the application makes fewer receive requests over a given period, which is more cost-effective. Since messages only become available every 15–60 seconds and we don’t know exactly when they will arrive, long polling is the better choice.
Reference: Amazon SQS Long Polling
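With boto3 this comes down to the `WaitTimeSeconds` parameter on `receive_message`. The sketch below only builds the request parameters (the queue URL and account number are placeholders, and no call is actually made).

```python
# Long polling with SQS: setting WaitTimeSeconds (up to 20) makes
# receive_message wait for a message to arrive instead of returning
# immediately, reducing the number of empty responses.
def long_poll_params(queue_url, wait_seconds=20, max_messages=10):
    if not 0 <= wait_seconds <= 20:
        raise ValueError("WaitTimeSeconds must be between 0 and 20")
    return {
        "QueueUrl": queue_url,
        "WaitTimeSeconds": wait_seconds,   # > 0 enables long polling
        "MaxNumberOfMessages": max_messages,
    }

# With boto3 this would be used as: sqs.receive_message(**long_poll_params(url))
params = long_poll_params("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue")
print(params["WaitTimeSeconds"])  # 20
```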

Top

Q33: You are using AWS SAM to define a Lambda function and CodeDeploy to manage deployment patterns. Assuming the new Lambda function works as expected, which of the following deployment preferences will shift traffic from the original Lambda function to the new one in the shortest time frame?

  • A. Canary10Percent5Minutes
  • B. Linear10PercentEvery10Minutes
  • C. Canary10Percent15Minutes
  • D. Linear10PercentEvery1Minute


Answer – A
With the Canary deployment preference type, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent after 5 minutes.
Reference: Gradual Code Deployment
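A simplified model of the total shift time for each preference, under the assumptions that a canary completes after one interval and a linear deployment adds its step every interval until 100 percent is shifted.

```python
# Simplified model of CodeDeploy traffic-shifting durations, in minutes.
def total_shift_minutes(style, percent, interval_minutes):
    if style == "Canary":   # two increments: percent now, the rest after one interval
        return interval_minutes
    if style == "Linear":   # percent added every interval until 100% is shifted
        return interval_minutes * (100 // percent)
    raise ValueError("unknown style")

options = {
    "Canary10Percent5Minutes":       total_shift_minutes("Canary", 10, 5),
    "Linear10PercentEvery10Minutes": total_shift_minutes("Linear", 10, 10),
    "Canary10Percent15Minutes":      total_shift_minutes("Canary", 10, 15),
    "Linear10PercentEvery1Minute":   total_shift_minutes("Linear", 10, 1),
}
print(min(options, key=options.get))  # Canary10Percent5Minutes
```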

Top

Q34: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer – B
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from an Amazon S3 bucket.
Reference: Declaring Serverless Resources

Top

Q35: You are using AWS envelope encryption for encrypting all sensitive data. Which of the following is true regarding envelope encryption?

  • A. Data is encrypted by an encrypted Data key which is further encrypted using an encrypted Master Key.
  • B. Data is encrypted by plaintext Data key which is further encrypted using encrypted Master Key.
  • C. Data is encrypted by encrypted Data key which is further encrypted using plaintext Master Key.
  • D. Data is encrypted by plaintext Data key which is further encrypted using plaintext Master Key.


Answer – D
With envelope encryption, data is encrypted using a plaintext data key. The data key itself is then encrypted using a plaintext master key. The master key is stored securely in AWS KMS and is known as a Customer Master Key (CMK).
Reference: AWS Key Management Service Concepts
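A toy illustration of the envelope pattern, using XOR in place of a real cipher (do not use XOR for actual encryption; in practice KMS holds the master key and performs the data-key encryption).

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher' for illustration only -- NOT real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = os.urandom(16)   # in practice, stored in AWS KMS as a CMK
data_key = os.urandom(16)     # plaintext data key, generated per object

plaintext = b"sensitive data"
ciphertext = xor(plaintext, data_key)           # data encrypted with plaintext data key
encrypted_data_key = xor(data_key, master_key)  # data key encrypted with the master key
# Store ciphertext + encrypted_data_key together; discard the plaintext data key.

# Decryption: recover the data key with the master key, then decrypt the data.
recovered_key = xor(encrypted_data_key, master_key)
print(xor(ciphertext, recovered_key) == plaintext)  # True
```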

Top

 

Q36: You are developing an application comprised of the following architecture:

  1. A set of EC2 instances that process the videos.
  2. The EC2 instances are spun up by an Auto Scaling group.
  3. SQS queues to hold the processing messages.
  4. There are two pricing tiers.

How will you ensure that the premium customers’ videos are given more preference?

  • A. Create 2 Autoscaling Groups, one for normal and one for premium customers
  • B. Create 2 set of Ec2 Instances, one for normal and one for premium customers
  • C. Create 2 SQS queues, one for normal and one for premium customers
  • D. Create 2 Elastic Load Balancers, one for normal and one for premium customers.


Answer – C
The ideal option is to create two SQS queues; the application can then process messages from the high-priority queue first. The other options are not ideal, as they would add extra cost and maintenance.
Reference: SQS
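The two-queue approach can be sketched with in-memory deques standing in for the two SQS queues: the application always drains the premium queue before looking at the standard one.

```python
from collections import deque

# Two queues: premium messages are always processed before standard ones.
premium, standard = deque(), deque()

def next_message():
    for queue in (premium, standard):  # premium queue is checked first
        if queue:
            return queue.popleft()
    return None                        # both queues empty

standard.extend(["std-1", "std-2"])
premium.append("prem-1")
print([next_message() for _ in range(3)])  # ['prem-1', 'std-1', 'std-2']
```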

Top

Q37: You are developing an application that will interact with a DynamoDB table. The table is going to take in a lot of read and write operations. Which of the following would be the ideal partition key for the DynamoDB table to ensure ideal performance?

  • A. CustomerID
  • B. CustomerName
  • C. Location
  • D. Age


Answer- A
Use high-cardinality attributes. These are attributes that have distinct values for each item, such as email_id, employee_no, customer_id, session_id, order_id, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key.
Reference: Choosing the right DynamoDB Partition Key

Top

Q38: A developer is making use of AWS services to develop an application and has been asked to build the application in a manner that compensates for network delays. Which of the following two mechanisms should he implement in the application?

  • A. Multiple SQS queues
  • B. Exponential backoff algorithm
  • C. Retries in your application code
  • D. Consider using the Java SDK.


Answer- B. and C.
In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values, and should be set based on the operation being performed, as well as other local factors, such as network latency.
Reference: Error Retries and Exponential Backoff in AWS
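The delay schedule described above can be sketched as follows; the base delay, cap, and retry count are illustrative values, and production code would usually add random jitter on top.

```python
# Exponential backoff: the delay doubles on every consecutive failure,
# capped at a maximum, with a bounded number of retries.
def backoff_delays(base=1.0, cap=10.0, max_retries=6):
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 10.0, 10.0]
```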

Top

 

Q39: An application being developed is going to write data to a DynamoDB table, and you have to set up the read and write throughput for the table. Data will be read at a rate of 300 items every 30 seconds, each item 6 KB in size, and the reads can be eventually consistent. What read capacity should be set on the table?

  • A. 10
  • B. 20
  • C. 6
  • D. 30


Answer – A

Since 300 items are read every 30 seconds, (300/30) = 10 items are read every second.
Since each item is 6 KB and one read capacity unit covers up to 4 KB, each item requires 2 read units.
So we have a total of 2 × 10 = 20 strongly consistent reads per second.
Since eventual consistency is sufficient, we can divide the number of reads (20) by 2, giving a read capacity of 10.

Reference: Read/Write Capacity Mode
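The steps above can be sketched as a helper mirroring the DynamoDB capacity model: one RCU covers a strongly consistent read of up to 4 KB per second, and eventually consistent reads cost half.

```python
import math

def read_capacity_units(items_per_period, period_seconds, item_size_kb,
                        eventually_consistent=True):
    """RCUs needed: 4 KB per strongly consistent read unit; eventual costs half."""
    items_per_second = items_per_period / period_seconds
    units_per_item = math.ceil(item_size_kb / 4)  # 4 KB per read capacity unit
    units = items_per_second * units_per_item
    if eventually_consistent:
        units /= 2
    return math.ceil(units)

# 300 items / 30 s, 6 KB each, eventually consistent -> 10 RCUs.
print(read_capacity_units(300, 30, 6))  # 10
```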


Top

Q40: You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load balancer. You have been requested to monitor the incoming connections to the Elastic Load Balancer. Which of the below options can suffice this requirement?

  • A. Use AWS CloudTrail with your load balancer
  • B. Enable access logs on the load balancer
  • C. Use a CloudWatch Logs Agent
  • D. Create a custom metric CloudWatch filter on your load balancer


Answer – B
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Reference: Access Logs for Your Application Load Balancer

Top

Q41: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data hosted in another S3 bucket. Now that web page no longer loads in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.

Reference: Cross-Origin Resource Sharing (CORS)
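The corresponding S3 CORS configuration for Scenario 1 might look like the sketch below. The shape follows the configuration boto3's `put_bucket_cors` accepts; the origin and bucket name are placeholders.

```python
import json

# CORS rule allowing the static-site origin to make GET/PUT requests
# against the bucket's REST endpoint. The origin below is a placeholder.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# With boto3 this would be applied as:
#   s3.put_bucket_cors(Bucket="website", CORSConfiguration=cors_configuration)
print(json.dumps(cors_configuration, indent=2))
```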


Top

 

Q42: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer- C
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.

Reference: The AWS Security Token Service (STS)


Top

Q43: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

  • A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
  • B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
  • C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
  • D. Create a multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a map reduce job to retrieve the required information on user counts.


Answer – C
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data.
Reference: Amazon Kinesis

Top

 

Q44: You’ve been instructed to develop a mobile application that will make use of AWS services. You need to decide on a data store to store the user sessions. Which of the following would be an ideal data store for session management?

  • A. AWS Simple Storage Service
  • B. AWS DynamoDB
  • C. AWS RDS
  • D. AWS Redshift


Answer – B
DynamoDB is a good alternative for session storage: data-access latency is low, which makes it well suited as a session-management data store.
Reference: Scalable Session Handling in PHP Using Amazon DynamoDB

Top

Q45: Your application currently interacts with a DynamoDB table, and records are inserted into the table via the application. There is now a requirement that whenever items are updated in the primary DynamoDB table, another record is inserted into a secondary table. Which of the below features should be used when developing such a solution?

  • A. AWS DynamoDB Encryption
  • B. AWS DynamoDB Streams
  • C. AWS DynamoDB Accelerator
  • D. AWS Table Accelerator


Answer – B
DynamoDB Streams Use Cases and Design Patterns: this post describes some common use cases you might encounter, along with their design options and solutions, when migrating data from relational data stores to Amazon DynamoDB. It considers how to manage the following scenarios:

  • How do you set up a relationship across multiple tables in which, based on the value of an item from one table, you update the item in a second table?
  • How do you trigger an event based on a particular transaction?
  • How do you audit or archive transactions?
  • How do you replicate data across multiple tables (similar to that of materialized views/streams/replication in relational data stores)?

Relational databases provide native support for transactions, triggers, auditing, and replication. Typically, a transaction in a database refers to performing create, read, update, and delete (CRUD) operations against multiple tables in a block, and a transaction can have only two states: success or failure. There is no partial completion.

As a NoSQL database, DynamoDB is not designed to support transactions. Although client-side libraries are available to mimic transaction capabilities, they are not scalable and cost-effective. For example, the Java Transaction Library for DynamoDB creates 7N+4 additional writes for every write operation, partly because the library holds metadata to manage the transactions so that they are consistent and can be rolled back before commit.

You can use DynamoDB Streams to address all these use cases. DynamoDB Streams is a powerful service that you can combine with other AWS services to solve many similar problems. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, each containing an item change, from a DynamoDB stream in near real time.

AWS maintains separate endpoints for DynamoDB and DynamoDB Streams. To work with database tables and indexes, your application must access a DynamoDB endpoint; to read and process DynamoDB Streams records, it must access a DynamoDB Streams endpoint in the same Region. All of the other options are incorrect since none of them would meet the core requirement.
Reference: DynamoDB Streams Use Cases and Design Patterns
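Streams records are commonly processed by a Lambda function. The sketch below copies updated items into a secondary table; the in-memory dict stands in for the secondary DynamoDB table, the event shape follows the DynamoDB Streams record format, and the "id" key name is an assumption.

```python
# Sketch of a Lambda handler consuming a DynamoDB Streams event and
# copying updated items into a secondary table (an in-memory dict here).
secondary_table = {}

def handler(event, context):
    copied = 0
    for record in event.get("Records", []):
        if record["eventName"] == "MODIFY":   # an existing item was updated
            new_image = record["dynamodb"]["NewImage"]
            key = new_image["id"]["S"]        # assumes a string key named "id"
            secondary_table[key] = new_image
            copied += 1
    return copied

event = {"Records": [
    {"eventName": "INSERT",
     "dynamodb": {"NewImage": {"id": {"S": "a"}, "v": {"N": "1"}}}},
    {"eventName": "MODIFY",
     "dynamodb": {"NewImage": {"id": {"S": "a"}, "v": {"N": "2"}}}},
]}
print(handler(event, None), sorted(secondary_table))  # 1 ['a']
```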


Top

 

Q46: An application has been using AWS DynamoDB as its back-end data store. The table has now grown to 20 GB, and scans on the table are causing throttling errors. Which of the following should now be implemented to avoid such errors?

  • A. Large Page size
  • B. Reduced page size
  • C. Parallel Scans
  • D. Sequential scans

Answer – B
When you scan your table in Amazon DynamoDB, you should follow the DynamoDB best practices for avoiding sudden bursts of read activity. You can use the following technique to minimize the impact of a scan on a table’s provisioned throughput.

Reduce page size: because a Scan operation reads an entire page (by default, 1 MB), you can reduce the impact of the scan by setting a smaller page size. The Scan operation provides a Limit parameter that you can use to set the page size for your request. Each Query or Scan request with a smaller page size uses fewer read operations and creates a “pause” between requests. For example, suppose that each item is 4 KB and you set the page size to 40 items. A Query request would then consume only 20 eventually consistent read operations or 40 strongly consistent read operations. A larger number of smaller Query or Scan operations allows your other critical requests to succeed without throttling.
Reference1: Rate-Limited Scans in Amazon DynamoDB

Reference2: Best Practices for Querying and Scanning Data
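The paging behavior can be simulated in a few lines; the list stands in for a DynamoDB table, and the page size mirrors the Scan `Limit` parameter.

```python
# Simulating a rate-limited Scan: read the table in small pages (the
# Limit parameter) instead of one large burst.
def paged_scan(items, limit):
    """Yield pages of at most `limit` items, like Scan with a Limit."""
    for start in range(0, len(items), limit):
        yield items[start:start + limit]

table = list(range(100))  # 100 fake items standing in for table rows
pages = list(paged_scan(table, limit=40))
print([len(page) for page in pages])  # [40, 40, 20]
```

In a real application you would also pause between pages (or use a rate limiter) so other requests against the table can proceed without throttling.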


Top

 

Q47: Which of the following is a correct way of passing a stage variable to an HTTP URL? (Select TWO.)

  • A. http://example.com/${}/prod
  • B. http://example.com/${stageVariables.}/prod
  • C. http://${stageVariables.}.example.com/dev/operation
  • D. http://${stageVariables}.example.com/dev/operation
  • E. http://${}.example.com/dev/operation
  • F. http://example.com/${stageVariables}/prod


Answer – B. and C.
A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In this case, options B and C use the stage variable as a path and as a subdomain, respectively.
Reference: Amazon API Gateway Stage Variables Reference

Top

Q48: Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premise configuration for servers in AWS. Which of the following service would be ideal to use in this regard?

  • A. AWS Elastic Beanstalk
  • B. AWS OpsWorks
  • C. AWS Cloudformation
  • D. AWS SQS


Answer – B
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. All other options are invalid since they cannot be used to work with Chef recipes for configuration management.
Reference: AWS OpsWorks

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q49: Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure?

  • A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website
  • B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access.
  • C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials
  • D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.


Answer – C
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be used.
Reference: About Web Identity Federation

Top

Q50: Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose?

  • A. Cognito Data
  • B. Cognito Events
  • C. Cognito Streams
  • D. Cognito Callbacks


Answer – C
Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams
Reference:

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q51: You’ve developed a set of scripts using AWS Lambda. These scripts need to access EC2 Instances in a VPC. Which of the following needs to be done to ensure that the AWS Lambda function can access the resources in the VPC. Choose 2 answers from the options given below

  • A. Ensure that the subnet IDs are mentioned when configuring the Lambda function
  • B. Ensure that the NACL IDs are mentioned when configuring the Lambda function
  • C. Ensure that the Security Group IDs are mentioned when configuring the Lambda function
  • D. Ensure that the VPC Flow Log IDs are mentioned when configuring the Lambda function


Answer: A and C.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Reference: Configuring a Lambda Function to Access Resources in an Amazon VPC

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q52: You’ve currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can’t see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case?

  • A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment.
  • B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk
  • C. Consider using Packer to create a custom platform
  • D. Consider deploying your application using the Elastic Container Service


Answer – C
Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn’t provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform’s software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems—Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers)—and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings.
Reference: AWS Elastic Beanstalk Custom Platforms

Top

Q53: Company B is writing 10 items to the Dynamo DB table every second. Each item is 15.5Kb in size. What would be the required provisioned write throughput for best performance? Choose the correct answer from the options below.

  • A. 10
  • B. 160
  • C. 155
  • D. 16


Answer – B.
Each write capacity unit supports one write per second for an item up to 1 KB in size. Item sizes are rounded up to the next 1 KB, so each 15.5 KB item consumes 16 write capacity units. At 10 items per second, the required provisioned write throughput is 10 × 16 = 160 write capacity units.
Reference: Read/Write Capacity Mode
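The same sizing rule generalizes: round each item up to the next 1 KB, then multiply by the write rate. A minimal sketch (`required_wcu` is a name chosen here for illustration):

```python
import math

def required_wcu(item_size_kb: float, writes_per_second: int) -> int:
    """Provisioned write capacity units: one WCU writes one item of up
    to 1 KB per second; item sizes round up to the next whole 1 KB."""
    return math.ceil(item_size_kb) * writes_per_second

# 10 items/second at 15.5 KB each -> 16 WCU per item -> 160 WCU total
print(required_wcu(15.5, 10))  # 160
```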

Top


Q54: Which AWS Service can be used to automatically install your application code onto EC2, on premises systems and Lambda?

  • A. CodeCommit
  • B. X-Ray
  • C. CodeBuild
  • D. CodeDeploy


Answer: D

Reference: AWS CodeDeploy


Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q55: Which AWS service can be used to compile source code, run tests and package code?

  • A. CodePipeline
  • B. CodeCommit
  • C. CodeBuild
  • D. CodeDeploy


Answer: C.

Reference: AWS CodeBuild


Top

Q56: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

  • A. Set the Rollback on failure radio button to No in the CloudFormation console
  • B. Set Termination Protection to Enabled in the CloudFormation console
  • C. Use the --disable-rollback flag with the AWS CLI
  • D. Use the --enable-termination-protection flag with the AWS CLI

Answer: A. and C.

Reference: Protecting a Stack From Being Deleted

Top

Q57: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

  • A. Continuous Integration
  • B. Continuous Deployment
  • C. Continuous Delivery
  • D. Continuous Development

Answer: A

Reference: What is Continuous Integration?

Top

Q58: When deploying application code to EC2, the AppSpec file can be written in which language?

  • A. JSON
  • B. JSON or YAML
  • C. XML
  • D. YAML

Answer: D.
For EC2/On-Premises deployments, the AppSpec file must be written in YAML. JSON-formatted AppSpec files are supported only for AWS Lambda and Amazon ECS deployments.

Reference: CodeDeploy AppSpec File Reference

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q59: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

  • A. CloudFormation will rollback only the failed components
  • B. CloudFormation will rollback the entire stack
  • C. Failed component will remain available for debugging purposes
  • D. CloudFormation will ask you if you want to continue with the deployment

Answer: B

Reference: Troubleshooting AWS CloudFormation


Top

Q60: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?

  • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
  • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
  • C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
  • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.

Answer: C

Reference: Getting Started with Amazon SNS


Top

Q61: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

  • A. CodeCommit
  • B. CodeBuild
  • C. CodePipeline
  • D. ElasticFileSystem

Answer: A

Reference: AWS CodeCommit


Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q62: You are using CloudFormation to create a new S3 bucket, which of the following sections would you use to define the properties of your bucket?

  • A. Conditions
  • B. Parameters
  • C. Outputs
  • D. Resources

Answer: D

Reference: Resources


Top

Q63: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template would you use to define these?

  • A. Transforms
  • B. Outputs
  • C. Resources
  • D. Instances

Answer: C.
The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.
Reference: Resources

Top

Q64: Which AWS service can be used to fully automate your entire release process?

  • A. CodeDeploy
  • B. CodePipeline
  • C. CodeCommit
  • D. CodeBuild

Answer: B.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates

Reference: AWS CodePipeline


Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q65: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

  • A. Outputs
  • B. Transforms
  • C. Resources
  • D. Exports

Answer: A.
Outputs is used to output user-defined data relating to the resources you have built and can also be used as input to another CloudFormation stack.
Reference: CloudFormation Outputs
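In practice, a cross-stack reference combines an Output with an Export in the producing stack and Fn::ImportValue in the consuming stack. A minimal sketch (resource and export names here are hypothetical):

```yaml
# Producing stack: export the bucket name
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
Outputs:
  AppBucketName:
    Value: !Ref AppBucket
    Export:
      Name: shared-app-bucket-name

# Consuming stack would then reference it with:
#   BucketToRead: !ImportValue shared-app-bucket-name
```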

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q66: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

  • A. Inputs
  • B. Resources
  • C. Transforms
  • D. Files

Answer: C.
Transforms is used to reference code located in S3 and also for specifying the use of the Serverless Application Model (SAM) for Lambda deployments.
Reference: Transforms

Top

Q67: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?

  • A. buildspec.yml
  • B. appspec.json
  • C. appspec.yml
  • D. buildspec.json

Answer: C.

Reference: CodeDeploy AppSpec File Reference

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q68: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

  • A. Share the code using an EBS volume
  • B. Copy and paste the code into the template each time you need to use it
  • C. Use a cloudformation nested stack
  • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.

Answer: C.

Reference: Working with Nested Stacks

Top

Q69: In the CodeDeploy AppSpec file, what are hooks used for?

  • A. To reference AWS resources that will be used during the deployment
  • B. Hooks are reserved for future use
  • C. To specify files you want to copy during the deployment.
  • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle

Answer: D.
The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

Reference: AppSpec ‘hooks’ Section
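A minimal EC2/On-Premises appspec.yml illustrating the hooks section (the script paths and timeouts here are hypothetical examples):

```yaml
version: 0.0
os: linux
files:
  - source: /app
    destination: /var/www/app
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```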

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q70: Which command can you use to encrypt a plain text file using CMK?

  • A. aws kms-encrypt
  • B. aws iam encrypt
  • C. aws kms encrypt
  • D. aws encrypt

Answer: C.
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob > C:\Temp\ExampleEncryptedFile.base64

Reference: AWS CLI Encrypt

Top

Q72: Which of the following is an encrypted key used by KMS to encrypt your data

  • A. Customer Managed Key
  • B. Encryption Key
  • C. Envelope Key
  • D. Customer Master Key

Answer: C.
Your data key, also known as the envelope key, is encrypted using the master key. This approach is known as envelope encryption.
Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

Reference: Envelope Encryption
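The envelope pattern itself is easy to demonstrate: generate a data key, encrypt the data with it, then encrypt the data key under the master key and store it alongside the ciphertext. The sketch below is a toy illustration of that flow only; the XOR "cipher" is NOT real encryption (KMS uses AES), and the function names are chosen here for illustration:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher' for illustration only -- never use XOR in production."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext: bytes, master_key: bytes):
    data_key = secrets.token_bytes(32)       # plaintext data key
    ciphertext = xor(plaintext, data_key)    # encrypt data with the data key
    wrapped_key = xor(data_key, master_key)  # encrypt data key under the CMK
    return ciphertext, wrapped_key           # these are stored together

def envelope_decrypt(ciphertext: bytes, wrapped_key: bytes,
                     master_key: bytes) -> bytes:
    data_key = xor(wrapped_key, master_key)  # unwrap the data key first
    return xor(ciphertext, data_key)         # then decrypt the data

master = secrets.token_bytes(32)
ct, wk = envelope_encrypt(b"sensitive data", master)
print(envelope_decrypt(ct, wk, master))  # b'sensitive data'
```

The key property: the master key never touches the bulk data, only the (small) data key, which is why only the CMK needs to live inside KMS.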

Top

 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 

Q73: Which of the following statements are correct? (Choose 2)

  • A. The Customer Master Key is used to encrypt and decrypt the Envelope Key or Data Key
  • B. The Envelope Key or Data Key is used to encrypt and decrypt plain text files.
  • C. The envelope Key or Data Key is used to encrypt and decrypt the Customer Master Key.
  • D. The Customer MasterKey is used to encrypt and decrypt plain text files.

Answer: A. and B.

Reference: AWS Key Management Service Concepts

Top

 
 

Q74: Which of the following statements is correct in relation to KMS? (Choose 2)

  • A. KMS Encryption keys are regional
  • B. You cannot export your customer master key
  • C. You can export your customer master key.
  • D. KMS encryption Keys are global

Answer: A. and B.

Reference: AWS Key Management Service FAQs

Q75:  A developer is preparing a deployment package for a Java implementation of an AWS Lambda function. What should the developer include in the deployment package? (Select TWO.)
A. Compiled application code
B. Java runtime environment
C. References to the event sources
D. Lambda execution role
E. Application dependencies


Answer: A. E.
Notes: To create a Lambda function, you first create a Lambda function deployment package. This package is a .zip or .jar file consisting of your code and any dependencies.
Reference: Lambda deployment packages.
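A deployment package is just an archive of code plus dependencies. For intuition, a .zip-style package can be assembled like this (a minimal sketch; the file names and `build_deployment_package` helper are hypothetical):

```python
import io
import zipfile

def build_deployment_package(files: dict) -> bytes:
    """Assemble a Lambda-style .zip deployment package in memory.

    `files` maps archive paths to file contents (code and dependencies).
    The runtime, event sources, and execution role are configured on the
    function itself -- they are not shipped inside the package.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()

package = build_deployment_package({
    "Handler.class": b"<compiled application code>",
    "lib/dependency.jar": b"<application dependency>",
})
print(len(package) > 0)  # True
```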

Q76: A developer uses AWS CodeDeploy to deploy a Python application to a fleet of Amazon EC2 instances that run behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. What should the developer include in the CodeDeploy deployment package?
A. A launch template for the Amazon EC2 Auto Scaling group
B. A CodeDeploy AppSpec file
C. An EC2 role that grants the application access to AWS services
D. An IAM policy that grants the application access to AWS services


Answer: B.
Notes: The CodeDeploy AppSpec (application specific) file is unique to CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
Reference: CodeDeploy application specification (AppSpec) files.
Category: Deployment

Q76: A company is working on a project to enhance its serverless application development process. The company hosts applications on AWS Lambda. The development team regularly updates the Lambda code and wants to use stable code in production. Which combination of steps should the development team take to configure Lambda functions to meet both development and production requirements? (Select TWO.)

A. Create a new Lambda version every time a new code release needs testing.
B. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready unqualified Amazon Resource Name (ARN) version. Point the Development alias to the $LATEST version.
C. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to the production-ready qualified Amazon Resource Name (ARN) version. Point the Development alias to the variable LAMBDA_TASK_ROOT.
D. Create a new Lambda layer every time a new code release needs testing.
E. Create two Lambda function aliases. Name one as Production and the other as Development. Point the Production alias to a production-ready Lambda layer Amazon Resource Name (ARN). Point the Development alias to the $LATEST layer ARN.


Answer: A. B.
Notes: Lambda function versions are designed to manage deployment of functions. They can be used for code changes, without affecting the stable production version of the code. By creating separate aliases for Production and Development, systems can invoke the correct alias as needed. A Lambda function alias can be used to point to a specific Lambda function version. Using the functionality to update an alias and its linked version, the development team can update the required version as needed. The $LATEST version is the latest unpublished version of the function code.
Reference: Lambda function versions.

For more information about Lambda layers, see Creating and sharing Lambda layers.

For more information about Lambda function aliases, see Lambda function aliases.

Category: Deployment

Q77: Each time a developer publishes a new version of an AWS Lambda function, all the dependent event source mappings need to be updated with the reference to the new version’s Amazon Resource Name (ARN). These updates are time consuming and error-prone. Which combination of actions should the developer take to avoid performing these updates when publishing a new Lambda version? (Select TWO.)
A. Update event source mappings with the ARN of the Lambda layer.
B. Point a Lambda alias to a new version of the Lambda function.
C. Create a Lambda alias for each published version of the Lambda function.
D. Point a Lambda alias to a new Lambda function alias.
E. Update the event source mappings with the Lambda alias ARN.


Answer: B. E.
Notes: A Lambda alias is a pointer to a specific Lambda function version. Instead of using ARNs for the Lambda function in event source mappings, you can use an alias ARN. You do not need to update your event source mappings when you promote a new version or roll back to a previous version.
Reference: Lambda function aliases.
Category: Deployment

Q78:  A company wants to store sensitive user data in Amazon S3 and encrypt this data at rest. The company must manage the encryption keys and use Amazon S3 to perform the encryption. How can a developer meet these requirements?
A. Enable default encryption for the S3 bucket by using the option for server-side encryption with customer-provided encryption keys (SSE-C).
B. Enable client-side encryption with an encryption key. Upload the encrypted object to the S3 bucket.
C. Enable server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Upload an object to the S3 bucket.
D. Enable server-side encryption with customer-provided encryption keys (SSE-C). Upload an object to the S3 bucket.


Answer: D.
Notes: When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory.
Reference: Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

Category: Security

Q79: A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items? (Select TWO.)

A. Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B. Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C. Use generated keys with the DynamoDB Encryption Client.
D. Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E. Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).


Answer: A. C.
Notes: When the DynamoDB Encryption Client is configured to use AWS KMS, it uses a customer master key (CMK) that is always encrypted when used outside of AWS KMS. This cryptographic materials provider returns a unique encryption key and signing key for every table item. This method of encryption uses a symmetric CMK.
Reference: Direct KMS Materials Provider.
Category: Deployment

Q80: A company is developing a REST API with Amazon API Gateway. Access to the API should be limited to users in the existing Amazon Cognito user pool. Which combination of steps should a developer perform to secure the API? (Select TWO.)
A. Create an AWS Lambda authorizer for the API.
B. Create an Amazon Cognito authorizer for the API.
C. Configure the authorizer for the API resource.
D. Configure the API methods to use the authorizer.
E. Configure the authorizer for the API stage.


Answer: B. D.
Notes: An Amazon Cognito authorizer should be used for integration with Amazon Cognito user pools. In addition to creating an authorizer, you are required to configure an API method to use that authorizer for the API.
Reference: Control access to a REST API using Amazon Cognito user pools as authorizer.
Category: Security

Q81: A developer is implementing a mobile app to provide personalized services to app users. The application code makes calls to Amazon S3 and Amazon Simple Queue Service (Amazon SQS). Which options can the developer use to authenticate the app users? (Select TWO.)
A. Authenticate to the Amazon Cognito identity pool directly.
B. Authenticate to AWS Identity and Access Management (IAM) directly.
C. Authenticate to the Amazon Cognito user pool directly.
D. Federate authentication by using Login with Amazon with the users managed with AWS Security Token Service (AWS STS).
E. Federate authentication by using Login with Amazon with the users managed with the Amazon Cognito user pool.


Answer: C. E.
Notes: The Amazon Cognito user pool provides direct user authentication. The Amazon Cognito user pool provides a federated authentication option with third-party identity provider (IdP), including amazon.com.
Reference: Adding User Pool Sign-in Through a Third Party.
Category: Security

 
 

Q82: A company is implementing several order processing workflows. Each workflow is implemented by using AWS Lambda functions for each task. Which combination of steps should a developer follow to implement these workflows? (Select TWO.)
A. Define a AWS Step Functions task for each Lambda function.
B. Define a AWS Step Functions task for each workflow.
C. Write code that polls the AWS Step Functions invocation to coordinate each workflow.
D. Define an AWS Step Functions state machine for each workflow.
E. Define an AWS Step Functions state machine for each Lambda function.


Answer: A. D.
Notes: Step Functions is based on state machines and tasks. A state machine is a workflow; it can be used to express a workflow as a number of states, their relationships, and their input and output. Tasks perform work by coordinating with other AWS services, such as Lambda. You can coordinate individual tasks with Step Functions by expressing your workflow as a finite state machine, written in the Amazon States Language.
Reference: Getting Started with AWS Step Functions.

Category: Development

Q83: A company is migrating a web service to the AWS Cloud. The web service accepts requests by using HTTP (port 80). The company wants to use an AWS Lambda function to process HTTP requests. Which application design will satisfy these requirements?
A. Create an Amazon API Gateway API. Configure proxy integration with the Lambda function.
B. Create an Amazon API Gateway API. Configure non-proxy integration with the Lambda function.
C. Configure the Lambda function to listen to inbound network connections on port 80.
D. Configure the Lambda function as a target in the Application Load Balancer target group.


Answer: D.
Notes: Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. You can use load balancer rules to route HTTP requests to a function, based on the path or the header values. Then, process the request and return an HTTP response from your Lambda function.
Reference: Using AWS Lambda with an Application Load Balancer.
Category: Development
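For context, a Lambda function behind an Application Load Balancer returns a response document in the ALB target format. A minimal sketch, assuming the standard ALB event shape (the `path` handling shown is illustrative):

```python
def handler(event, context):
    """Minimal ALB-targeted Lambda handler: echo the request path.

    An ALB event carries keys such as `path`, `httpMethod`, `headers`,
    and `body`; the response must include statusCode and body.
    """
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"You requested {path}",
    }

# Simulate an ALB HTTP request event
print(handler({"path": "/orders", "httpMethod": "GET"}, None)["statusCode"])  # 200
```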

Q84: A company is developing an image processing application. When an image is uploaded to an Amazon S3 bucket, a number of independent and separate services must be invoked to process the image. The services do not have to be available immediately, but they must process every image. Which application design satisfies these requirements?
A. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Each service pulls the message from the same queue.
B. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Each service subscribes to the same topic.
C. Configure an Amazon S3 event notification that publishes to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe a separate Amazon Simple Notification Service (Amazon SNS) topic for each service to an Amazon SQS queue.
D. Configure an Amazon S3 event notification that publishes to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a separate Simple Queue Service (Amazon SQS) queue for each service to the Amazon SNS topic.


Answer: D.
Notes: Each service can subscribe to an individual Amazon SQS queue, which receives an event notification from the Amazon SNS topic. This is a fanout architectural implementation.
Reference: Common Amazon SNS scenarios.
Category: Development
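The fanout pattern is easy to model: one topic, one queue per service, and every queue receives its own copy of each message. A pure-Python simulation of the flow (no AWS calls; the class and queue names are illustrative):

```python
from collections import deque

class Topic:
    """Toy SNS-style topic: publish copies a message to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for queue in self.subscribers:
            queue.append(message)   # each queue receives its own copy

# One queue per image-processing service, all subscribed to the same topic
topic = Topic()
thumbnail_q, metadata_q, moderation_q = deque(), deque(), deque()
for q in (thumbnail_q, metadata_q, moderation_q):
    topic.subscribe(q)

topic.publish({"bucket": "images", "key": "cat.png"})  # S3 event notification
print(len(thumbnail_q), len(metadata_q), len(moderation_q))  # 1 1 1
```

Because each service drains its own queue, a slow or offline service never causes another service to miss the image, which is why option D satisfies "not immediately available, but must process every image."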

Q85: A developer wants to implement Amazon EC2 Auto Scaling for a Multi-AZ web application. However, the developer is concerned that user sessions will be lost during scale-in events. How can the developer store the session state and share it across the EC2 instances?
A. Write the sessions to an Amazon Kinesis data stream. Configure the application to poll the stream.
B. Publish the sessions to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe each instance in the group to the topic.
C. Store the sessions in an Amazon ElastiCache for Memcached cluster. Configure the application to use the Memcached API.
D. Write the sessions to an Amazon Elastic Block Store (Amazon EBS) volume. Mount the volume to each instance in the group.


Answer: C.
Notes: ElastiCache for Memcached is a distributed in-memory data store or cache environment in the cloud. It will meet the developer’s requirement of persistent storage and is fast to access.
Reference: What is Amazon ElastiCache for Memcached?

Category: Development

 
 
 

Q86: A developer is integrating a legacy web application that runs on a fleet of Amazon EC2 instances with an Amazon DynamoDB table. There is no AWS SDK for the programming language that was used to implement the web application. Which combination of steps should the developer perform to make an API call to Amazon DynamoDB from the instances? (Select TWO.)
A. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include an XML document that contains the request attributes.
B. Make an HTTPS POST request to the DynamoDB API endpoint for the AWS Region. In the request body, include a JSON document that contains the request attributes.
C. Sign the requests by using AWS access keys and Signature Version 4.
D. Use an EC2 SSH key to calculate Signature Version 4 of the request.
E. Provide the signature value through the HTTP X-API-Key header.


Answer: B. C.
Notes: The HTTPS-based low-level AWS API for DynamoDB uses JSON as a wire protocol format. When you send HTTP requests to AWS, you sign the requests so that AWS can identify who sent them. Requests are signed with your AWS access key, which consists of an access key ID and secret access key. AWS supports two signature versions: Signature Version 4 and Signature Version 2. AWS recommends the use of Signature Version 4.
Reference: Signing AWS API requests.
Category: Development
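The Signature Version 4 signing key mentioned above is derived with a chain of HMAC-SHA256 operations over the date, Region, and service. A sketch of just the key-derivation step (the full protocol also builds a canonical request and a string to sign; the credentials shown are AWS's documentation placeholders):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str,
                      region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: HMAC chain over date, region, service."""
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20150830", "us-east-1", "dynamodb")
print(len(key))  # 32 (a SHA-256 digest)
```

The derived key is then used to HMAC the string to sign, and the resulting signature travels in the Authorization header, never the secret key itself.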

Q87: A developer has written several custom applications that read and write to the same Amazon DynamoDB table. Each time the data in the DynamoDB table is modified, this change should be sent to an external API. Which combination of steps should the developer perform to accomplish this task? (Select TWO.)
A. Configure an AWS Lambda function to poll the stream and call the external API.
B. Configure an event in Amazon EventBridge (Amazon CloudWatch Events) that publishes the change to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) data stream.
C. Create a trigger in the DynamoDB table to publish the change to an Amazon Kinesis data stream.
D. Deliver the stream to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the API to the topic.
E. Enable DynamoDB Streams on the table.


Answer: A. E.
Notes: If you enable DynamoDB Streams on a table, you can associate the stream's Amazon Resource Name (ARN) with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. You can enable DynamoDB Streams on a table to create an event that invokes an AWS Lambda function.
Reference: Tutorial: Process New Items with DynamoDB Streams and Lambda.
Category: Monitoring
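The pattern from answers A and E can be sketched as a small Lambda handler. The external API URL is a placeholder assumption, and the HTTP sender is injectable so the forwarding logic can be exercised without a network call:

```python
import json
import urllib.request

EXTERNAL_API_URL = "https://example.com/changes"  # hypothetical endpoint

def post_change(record_body, url=EXTERNAL_API_URL):
    """POST one change document to the external API (real network call)."""
    req = urllib.request.Request(
        url, data=json.dumps(record_body).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def handler(event, context, sender=post_change):
    """Lambda handler polled against the DynamoDB stream."""
    forwarded = []
    for record in event.get("Records", []):
        change = {
            "eventName": record["eventName"],          # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys", {}),
            "newImage": record["dynamodb"].get("NewImage"),
        }
        sender(change)
        forwarded.append(change)
    return {"forwarded": len(forwarded)}
```

In a real deployment the stream ARN is configured as the function's event source mapping, so Lambda does the polling for you.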

 
 
 

Q88: A company is migrating the create, read, update, and delete (CRUD) functionality of an existing Java web application to AWS Lambda. Which minimal code refactoring is necessary for the CRUD operations to run in the Lambda function?
A. Implement a Lambda handler function.
B. Import an AWS X-Ray package.
C. Rewrite the application code in Python.
D. Add a reference to the Lambda execution role.


Answer: A.
Notes: Every Lambda function needs a Lambda-specific handler. Specifics of authoring vary between runtimes, but all runtimes share a common programming model that defines the interface between your code and the runtime code. You tell the runtime which method to run by defining a handler in the function configuration. The runtime runs that method. Next, the runtime passes in objects to the handler that contain the invocation event and context, such as the function name and request ID.
Reference: Getting started with Lambda.
Category: Refactoring
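A minimal handler sketch illustrates the refactoring in answer A. The "operation" routing below is an assumption for illustration, not the company's actual API shape:

```python
def lambda_handler(event, context):
    """Lambda entry point: the runtime passes in the invocation event and context."""
    operations = {
        "create": lambda p: {"created": p},
        "read":   lambda p: {"item": p.get("id")},
        "update": lambda p: {"updated": p},
        "delete": lambda p: {"deleted": p.get("id")},
    }
    op = operations.get(event.get("operation"))
    if op is None:
        raise ValueError(f"unsupported operation: {event.get('operation')}")
    return op(event.get("payload", {}))
```

The same idea applies in Java: the existing CRUD code stays, and only a handler method implementing the runtime's interface is added around it.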

Top

Q89: A company plans to use AWS log monitoring services to monitor an application that runs on premises. Currently, the application runs on a recent version of Ubuntu Server and outputs the logs to a local file. Which combination of steps should a developer perform to accomplish this goal? (Select TWO.)
A. Update the application code to include calls to the agent API for log collection.
B. Install the Amazon Elastic Container Service (Amazon ECS) container agent on the server.
C. Install the unified Amazon CloudWatch agent on the server.
D. Configure the long-term AWS credentials on the server to enable log collection by the agent.
E. Attach an IAM role to the server to enable log collection by the agent.


Answer: C. D.
Notes: The unified CloudWatch agent needs to be installed on the server. Ubuntu Server 18.04 is one of the many supported operating systems. When you install the unified CloudWatch agent on an on-premises server, you will specify a named profile that contains the credentials of the IAM user.
Reference: Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.
Category: Monitoring
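The unified agent's log collection is driven by a JSON config file. The fragment below (built as a Python dict for inspection) sketches the `logs` section; the file path and log group name are assumptions:

```python
import json

# Sketch of the unified CloudWatch agent config for shipping a local log file.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [{
                    "file_path": "/var/log/legacy-app/app.log",  # hypothetical path
                    "log_group_name": "legacy-app",
                    "log_stream_name": "{hostname}",
                }]
            }
        }
    }
}
print(json.dumps(agent_config, indent=2))
```

On an on-premises server the agent reads its credentials from the named profile mentioned in the notes, rather than from an instance role.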

Q90: A developer wants to monitor invocations of an AWS Lambda function by using Amazon CloudWatch Logs. The developer added a number of print statements to the function code that write the logging information to the stdout stream. After running the function, the developer does not see any log data being generated. Why does the log data NOT appear in the CloudWatch logs?
A. The log data is not written to the stderr stream.
B. Lambda function logging is not automatically enabled.
C. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
D. The Lambda function outputs the logs to an Amazon S3 bucket.


Answer: C.
Notes: The function needs permission to call CloudWatch Logs. Update the execution role to grant the permission. You can use the managed policy of AWSLambdaBasicExecutionRole.
Reference: Troubleshoot execution issues in Lambda.
Category: Monitoring
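The permissions the execution role needs resemble the statement below, a sketch of what the AWSLambdaBasicExecutionRole managed policy grants (the broad `*` resource scope is shown for brevity; the managed policy scopes it to CloudWatch Logs ARNs):

```python
# Sketch of the CloudWatch Logs permissions a Lambda execution role needs.
basic_execution_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",    # create the function's log group
            "logs:CreateLogStream",   # create a stream per invocation environment
            "logs:PutLogEvents",      # write the stdout/print output
        ],
        "Resource": "*",
    }],
}
```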

Q91: Which of the following are best practices you should implement into ongoing deployments of your application? (Select THREE.)

A. Use stage variables to manage secrets across environments
B. Create account-specific AWS SAM templates for each environment
C. Use an AutoPublish alias
D. Use traffic shifting with pre- and post-deployment hooks
E. Test throughout the pipeline


Answer: C. D. E.
Notes: Use an AutoPublish alias, Use traffic shifting with pre- and post-deployment hooks, Test throughout the pipeline
Reference: https://enoumen.com/2019/06/23/aws-solution-architect-associate-exam-prep-facts-and-summaries-questions-and-answers-dump/

Q92: You are handing off maintenance of your new serverless application to an incoming team lead. Which recommendations would you make? (Select THREE.)

A. Keep up to date with the quotas and payload sizes for each AWS service you are using

B. Analyze production access patterns to identify potential improvements

C. Design your services to extend their life as long as possible

D. Minimize changes to your production application

E. Compare the value of using the latest first-class integrations versus using Lambda between AWS services


Answer: A. B. D.

Notes: Keep up to date with the quotas and payload sizes for each AWS service you are using, analyze production access patterns to identify potential improvements, and minimize changes to your production application.

2022 AWS Certified Developer Associate Exam Preparation: Questions and Answers Dump.

Welcome to AWS Certified Developer Associate Exam Preparation:

Definition and Objectives, Top 100 Questions and Answers dump, White papers, Courses, Labs and Training Materials, Exam info and details, References, Jobs, Others AWS Certificates

2022 AWS Certified Developer Associate Exam Preparation:  Questions and Answers Dump
AWS Developer Associate DVA-C01 Exam Prep
 
#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech
 
 
 
 
 
 

What is the AWS Certified Developer Associate Exam?

This AWS Certified Developer-Associate Examination is intended for individuals who perform a Developer role. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications by using AWS

Recommended general IT knowledge
The target candidate should have the following:
– In-depth knowledge of at least one high-level programming language
– Understanding of application lifecycle management
– The ability to write code for serverless applications
– Understanding of the use of containers in the development process

Recommended AWS knowledge
The target candidate should be able to do the following:

  • Use the AWS service APIs, CLI, and software development kits (SDKs) to write applications
  • Identify key features of AWS services
  • Understand the AWS shared responsibility model
  • Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
  • Use and interact with AWS services
  • Apply basic understanding of cloud-native applications to write code
  • Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
  • Author, maintain, and debug code modules on AWS

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:
– Design architectures (for example, distributed system, microservices)
– Design and implement CI/CD pipelines

  • Administer IAM users and groups
  • Administer Amazon Elastic Container Service (Amazon ECS)
  • Design AWS networking infrastructure (for example, Amazon VPC, AWS Direct Connect)
  • Understand compliance and licensing

Exam content
Response types
There are two types of questions on the exam:
– Multiple choice: Has one correct response and three incorrect responses (distractors)
– Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose.
Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Developer – Associate (DVA-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720.
Your score shows how you performed on the exam as a whole and whether you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context.
The percentage in each domain represents only scored content.

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
–  Commit code to a repository and invoke build, test and/or deployment actions
–  Use labels and branches for version and release management
–  Use AWS CodePipeline to orchestrate workflows against different environments
–  Apply AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, and AWS CodeDeploy for CI/CD purposes
–  Perform a rollback based on the application deployment policy

1.2 Deploy applications using AWS Elastic Beanstalk.
–  Utilize existing supported environments to define a new application stack
–  Package the application
–  Introduce a new application version into the Elastic Beanstalk environment
–  Utilize a deployment policy to deploy an application version (i.e., all at once, rolling, rolling with batch, immutable)
–  Validate application health using Elastic Beanstalk dashboard
–  Use Amazon CloudWatch Logs to instrument application logging

1.3 Prepare the application deployment package to be deployed to AWS.
–  Manage the dependencies of the code module (like environment variables, config files and static image files) within the package
–  Outline the package/container directory structure and organize files appropriately
–  Translate application resource requirements to AWS infrastructure parameters (e.g., memory, cores)

1.4 Deploy serverless applications.
–  Given a use case, implement and launch an AWS Serverless Application Model (AWS SAM) template
–  Manage environments in individual AWS services (e.g., Differentiate between Development, Test, and Production in Amazon API Gateway)

Domain 2: Security
2.1 Make authenticated calls to AWS services.
–  Communicate required policy based on least privileges required by application.
–  Assume an IAM role to access a service
–  Use the software development kit (SDK) credential provider on-premises or in the cloud to access AWS services (local credentials vs. instance roles)

2.2 Implement encryption using AWS services.
– Encrypt data at rest (client side; server side; envelope encryption) using AWS services
–  Encrypt data in transit

2.3 Implement application authentication and authorization.
– Add user sign-up and sign-in functionality for applications with Amazon Cognito identity or user pools
–  Use Amazon Cognito-provided credentials to write code that accesses AWS services
–  Use Amazon Cognito sync to synchronize user profiles and data
–  Use developer-authenticated identities to interact between end user devices, backend authentication, and Amazon Cognito

Domain 3: Development with AWS Services
3.1 Write code for serverless applications.
– Compare and contrast server-based vs. serverless model (e.g., micro services, stateless nature of serverless applications, scaling serverless applications, and decoupling layers of serverless applications)
– Configure AWS Lambda functions by defining environment variables and parameters (e.g., memory, time out, runtime, handler)
– Create an API endpoint using Amazon API Gateway
–  Create and test appropriate API actions like GET, POST using the API endpoint
–  Apply Amazon DynamoDB concepts (e.g., tables, items, and attributes)
–  Compute read/write capacity units for Amazon DynamoDB based on application requirements
–  Associate an AWS Lambda function with an AWS event source (e.g., Amazon API Gateway, Amazon CloudWatch event, Amazon S3 events, Amazon Kinesis)
–  Invoke an AWS Lambda function synchronously and asynchronously
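The capacity-unit arithmetic called out in objective 3.1 can be sketched directly: one RCU is one strongly consistent read per second of an item up to 4 KB, one WCU is one write per second of an item up to 1 KB, and eventually consistent reads cost half as much. A worked example:

```python
import math

def read_capacity_units(reads_per_sec, item_size_kb, strongly_consistent=True):
    """Provisioned RCUs needed for a steady read workload."""
    units_per_read = math.ceil(item_size_kb / 4)   # round item size up to 4 KB units
    rcu = reads_per_sec * units_per_read
    return rcu if strongly_consistent else math.ceil(rcu / 2)

def write_capacity_units(writes_per_sec, item_size_kb):
    """Provisioned WCUs needed for a steady write workload."""
    return writes_per_sec * math.ceil(item_size_kb / 1)  # round up to 1 KB units

# 80 strongly consistent reads/sec of 6 KB items -> 80 * 2 = 160 RCU
# 10 writes/sec of 1.5 KB items -> 10 * 2 = 20 WCU
```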

3.2 Translate functional requirements into application design.
– Determine real-time vs. batch processing for a given use case
– Determine use of synchronous vs. asynchronous for a given use case
– Determine use of event vs. schedule/poll for a given use case
– Account for tradeoffs for consistency models in an application design

Domain 4: Refactoring
4.1 Optimize applications to best use AWS services and features.
– Implement AWS caching services to optimize performance (e.g., Amazon ElastiCache, Amazon API Gateway cache)
– Apply an Amazon S3 naming scheme for optimal read performance

4.2 Migrate existing application code to run on AWS.
– Isolate dependencies
– Run the application as one or more stateless processes
– Develop in order to enable horizontal scalability
– Externalize state

Domain 5: Monitoring and Troubleshooting

5.1 Write code that can be monitored.
– Create custom Amazon CloudWatch metrics
– Perform logging in a manner available to systems operators
– Instrument application source code to enable tracing in AWS X-Ray

5.2 Perform root cause analysis on faults found in testing or production.
– Interpret the outputs from the logging mechanism in AWS to identify errors in logs
– Check build and testing history in AWS services (e.g., AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline) to identify issues
– Utilize AWS services (e.g., Amazon CloudWatch, VPC Flow Logs, and AWS X-Ray) to locate a specific faulty component

Which key tools, technologies, and concepts might be covered on the exam?

The following is a non-exhaustive list of the tools and technologies that could appear on the exam.
This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
– Analytics
– Application Integration
– Containers
– Cost and Capacity Management
– Data Movement
– Developer Tools
– Instances (virtual machines)
– Management and Governance
– Networking and Content Delivery
– Security
– Serverless

AWS services and features

Analytics:
– Amazon Elasticsearch Service (Amazon ES)
– Amazon Kinesis
Application Integration:
– Amazon EventBridge (Amazon CloudWatch Events)
– Amazon Simple Notification Service (Amazon SNS)
– Amazon Simple Queue Service (Amazon SQS)
– AWS Step Functions

Compute:
– Amazon EC2
– AWS Elastic Beanstalk
– AWS Lambda

Containers:
– Amazon Elastic Container Registry (Amazon ECR)
– Amazon Elastic Container Service (Amazon ECS)
– Amazon Elastic Kubernetes Services (Amazon EKS)

Database:
– Amazon DynamoDB
– Amazon ElastiCache
– Amazon RDS

Developer Tools:
– AWS CodeArtifact
– AWS CodeBuild
– AWS CodeCommit
– AWS CodeDeploy
– Amazon CodeGuru
– AWS CodePipeline
– AWS CodeStar
– AWS Fault Injection Simulator
– AWS X-Ray

Management and Governance:
– AWS CloudFormation
– Amazon CloudWatch

Networking and Content Delivery:
– Amazon API Gateway
– Amazon CloudFront
– Elastic Load Balancing

Security, Identity, and Compliance:
– Amazon Cognito
– AWS Identity and Access Management (IAM)
– AWS Key Management Service (AWS KMS)

Storage:
– Amazon S3

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam.
These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:
– AWS Application Discovery Service
– Amazon AppStream 2.0
– Amazon Chime
– Amazon Connect
– AWS Database Migration Service (AWS DMS)
– AWS Device Farm
– Amazon Elastic Transcoder
– Amazon GameLift
– Amazon Lex
– Amazon Machine Learning (Amazon ML)
– AWS Managed Services
– Amazon Mobile Analytics
– Amazon Polly

– Amazon QuickSight
– Amazon Rekognition
– AWS Server Migration Service (AWS SMS)
– AWS Service Catalog
– AWS Shield Advanced
– AWS Shield Standard
– AWS Snow Family
– AWS Storage Gateway
– AWS WAF
– Amazon WorkMail
– Amazon WorkSpaces

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Top

AWS Certified Developer – Associate Practice Questions And Answers Dump

Q0: Your application reads commands from an SQS queue and sends them to web services hosted by your
partners. When a partner’s endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.
How can you accommodate the partners’ broken web services without wasting your resources?

  • A. Create a delay queue and set DelaySeconds to 30 seconds
  • B. Requeue the message with a VisibilityTimeout of 30 seconds.
  • C. Create a dead letter queue and set the Maximum Receives to 3.
  • D. Requeue the message with a DelaySeconds of 30 seconds.


C. After a message is taken from the queue and returned for the maximum number of retries, it is
automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

Reference: Amazon SQS Dead-Letter Queues
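Wiring up the dead-letter queue from answer C amounts to setting a RedrivePolicy on the source queue. The sketch below builds the attributes as plain data so the policy can be inspected without AWS access; the queue names and ARN are placeholders:

```python
import json

def build_redrive_attributes(dlq_arn, max_receives=3):
    """Attributes for SQS set_queue_attributes enabling a dead-letter queue."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": max_receives,  # after 3 failed receives, move to DLQ
        })
    }

# Usage (requires boto3 and credentials; names are hypothetical):
#   sqs = boto3.client("sqs")
#   sqs.set_queue_attributes(
#       QueueUrl=commands_queue_url,
#       Attributes=build_redrive_attributes(
#           "arn:aws:sqs:us-east-1:123456789012:commands-dlq"))
```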


Top

Q1: A developer is writing an application that will store data in a DynamoDB table. The ratio of read operations to write operations will be 1000 to 1, with the same data being accessed frequently.
What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. The AWS Documentation mentions the following:

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q2: You are creating a DynamoDB table with the following attributes:

  • PurchaseOrderNumber (partition key)
  • CustomerID
  • PurchaseDate
  • TotalPurchaseValue

One of your applications must retrieve items from the table to calculate the total value of purchases for a
particular customer over a date range. What secondary index do you need to add to the table?

  • A. Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • B. Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute
  • C. Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the
    TotalPurchaseValue attribute
  • D. Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the
    TotalPurchaseValue attribute


C. The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition
key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the
TotalPurchaseValue into the index provides all the data needed to satisfy the use case.

Reference: AWS DynamoDB Global Secondary Indexes

Difference between local and global indexes in DynamoDB

    • Global secondary index — an index with a hash and range key that can be different from those on the table. A global secondary index is considered “global” because queries on the index can span all of the data in a table, across all partitions.
    • Local secondary index — an index that has the same hash key as the table, but a different range key. A local secondary index is “local” in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
    • Local Secondary Indexes still rely on the original Hash Key. When you supply a table with hash+range, think about the LSI as hash+range1, hash+range2.. hash+range6. You get 5 more range attributes to query on. Also, there is only one provisioned throughput.
    • Global Secondary Indexes defines a new paradigm – different hash/range keys per index.
      This breaks the original usage of one hash key per table. This is also why when defining GSI you are required to add a provisioned throughput per index and pay for it.
    • Local Secondary Indexes can only be created when you are creating the table, there is no way to add Local Secondary Index to an existing table, also once you create the index you cannot delete it.
    • Global Secondary Indexes can be created when you create the table and added to an existing table, deleting an existing Global Secondary Index is also allowed.

Throughput :

  • Local Secondary Indexes consume throughput from the table. When you query records via the local index, the operation consumes read capacity units from the table. When you perform a write operation (create, update, delete) in a table that has a local index, there will be two write operations, one for the table another for the index. Both operations will consume write capacity units from the table.
  • Global Secondary Indexes have their own provisioned throughput, when you query the index the operation will consume read capacity from the index, when you perform a write operation (create, update, delete) in a table that has a global index, there will be two write operations, one for the table another for the index*.


Top

The Cloud is the future: Get Certified now.
The AWS Certified Solution Architect Average Salary is: US $149,446/year. Get Certified with the App below:

AWS Developer Associate DVA-C01 Exam Prep

#AWS #Developer #AWSCloud #DVAC01 #AWSDeveloper #AWSDev #Djamgatech

Q3: When referencing the remaining time left for a Lambda function to run within the function’s code you would use:

  • A. The event object
  • B. The timeLeft object
  • C. The remains object
  • D. The context object


D. The context object.

Reference: AWS Lambda


Top

Q4: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context
D. event, context
def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python

Top

Q5: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda Deployment Package

Top

Q6: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime

C. Function code and libraries not included within the runtime

Reference: AWS Lambda Deployment Package in PowerShell

Top

Q7: You are attempting to SSH into an EC2 instance that is located in a public subnet. However, you are currently receiving a timeout error trying to connect. What could be a possible cause of this connection issue?

  • A. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic, but does not have an outbound rule that allows SSH traffic.
  • B. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND has an outbound rule that explicitly denies SSH traffic.
  • C. The security group associated with the EC2 instance has an inbound rule that allows SSH traffic AND the associated NACL has both an inbound and outbound rule that allows SSH traffic.
  • D. The security group associated with the EC2 instance does not have an inbound rule that allows SSH traffic AND the associated NACL does not have an outbound rule that allows SSH traffic.


D. Security groups are stateful, so you do NOT have to have an explicit outbound rule for return requests. However, NACLs are stateless so you MUST have an explicit outbound rule configured for return request.

Reference: Comparison of Security Groups and Network ACLs

AWS Security Groups and NACL


Top

Q8: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.

Reference: AWS Network Address Translation Gateway


Top

Q9: What feature of VPC networking should you utilize if you want to create “elasticity” in your application’s architecture?

  • A. Security Groups
  • B. Route Tables
  • C. Elastic Load Balancer
  • D. Auto Scaling


D. Auto scaling is designed specifically with elasticity in mind. Auto scaling allows for the increase and decrease of compute power based on demand, thus creating elasticity in the architecture.

Reference: AWS Auto Scaling


Top

Q10: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere

D. From a zip file in AWS S3 or uploaded directly from elsewhere

Reference: AWS Lambda

Top

Q11: You’re writing a script with an AWS SDK that uses AWS API actions to create AMIs from instance store-backed (non-EBS) instances. Which API call occurs in the final step of creating an AMI?

  • A. RegisterImage
  • B. CreateImage
  • C. ami-register-image
  • D. ami-create-image

A. RegisterImage. AWS API actions follow this PascalCase capitalization and do not contain hyphens.

Reference: API RegisterImage

Top

Q12: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

  • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
  • B. Permanently assigning users to specific instances and always routing their traffic to those instances
  • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
  • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

A. Storing session state in a distributed cache such as ElastiCache (Redis or Memcached) keeps the EC2 instances stateless, so the ELB can route any request to any instance. Sticky-session approaches (B, C, and D) tie users to individual instances, which undermines elasticity and fault tolerance.

Reference: Amazon ElastiCache

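The shared-cache pattern in option A can be sketched with an injectable cache client: a dict-backed stub works for local testing, and a `redis.Redis` client pointed at the ElastiCache endpoint would drop in for production (class and key names below are illustrative assumptions):

```python
class SessionStore:
    """Session state kept in a shared cache so any EC2 instance can serve any user."""
    def __init__(self, cache):
        self.cache = cache  # needs get(key) / set(key, value)

    def save(self, session_id, data):
        self.cache.set(f"session:{session_id}", data)

    def load(self, session_id):
        return self.cache.get(f"session:{session_id}")

class DictCache:
    """Local stand-in for a Redis/Memcached client."""
    def __init__(self):
        self._d = {}
    def set(self, k, v):
        self._d[k] = v
    def get(self, k):
        return self._d.get(k)
```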
Top

Q13: Which API call would best be used to describe an Amazon Machine Image?

  • A. ami-describe-image
  • B. ami-describe-images
  • C. DescribeImage
  • D. DescribeImages

D. In general, API actions stick to the PascalCase style with the first letter of every word capitalized.

Reference: API DescribeImages

Top

Q14: What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Autoscaling requires using Amazon EBS-backed instances
  • B. Virtual Private Cloud requires EBS backed instances
  • C. Amazon EBS-backed instances can be stopped and restarted without losing data
  • D. Instance-store backed instances can be stopped and restarted without losing data


C. Instance store-backed instances use “ephemeral” (temporary) storage that is only available during the life of the instance. Rebooting an instance preserves ephemeral data, but stopping and starting an instance removes all ephemeral storage.

Reference: What is the difference between EBS and Instance Store?

Top

Q15: After having created a new Linux instance on Amazon EC2 and downloaded the key pair file (called Toto.pem), you try to SSH into its IP address (54.1.132.33) using the following command.
ssh -i Toto.pem ec2-user@54.1.132.33
However you receive the following error.
@@@@@@@@ WARNING: UNPROTECTED PRIVATE KEY FILE! @ @@@@@@@@@@@@@@@@@@@
What is the most probable reason for this and how can you fix it?

  • A. You do not have root access on your terminal and need to use the sudo option for this to work.
  • B. You do not have enough permissions to perform the operation.
  • C. Your key file is encrypted. You need to use the -u option for unencrypted not the -i option.
  • D. Your key file must not be publicly viewable for SSH to work. You need to modify your .pem file to limit permissions.

D. You need to run something like: chmod 400 Toto.pem

Reference:

Top

Q16: You have an EBS root device on /dev/sda1 on one of your EC2 instances. You are having trouble with this particular instance and you need to either Stop/Start, Reboot or Terminate the instance but you do NOT want to lose any data that you have stored on /dev/sda1. However, you are unsure if changing the instance state in any of the aforementioned ways will cause you to lose data stored on the EBS volume. Which of the below statements best describes the effect each change of instance state would have on the data you have stored on /dev/sda1?

  • A. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is not ephemeral and the data will not be lost regardless of what method is used.
  • B. If you stop/start the instance the data will not be lost. However if you either terminate or reboot the instance the data will be lost.
  • C. Whether you stop/start, reboot or terminate the instance it does not matter because data on an EBS volume is ephemeral and it will be lost no matter what method is used.
  • D. The data will be lost if you terminate the instance, however the data will remain on /dev/sda1 if you reboot or stop/start the instance because data on an EBS volume is not ephemeral.

D. The question states that an EBS-backed root device is mounted at /dev/sda1, and EBS volumes maintain information regardless of the instance state. If it was instance store, this would be a different answer.

Reference: AWS Root Device Storage

Top

Q17: EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

  • A. Can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored
  • B. Can only be used to launch EC2 instances in the same country as the AMI is stored
  • C. Can only be used to launch EC2 instances in the same AWS region as the AMI is stored
  • D. Can be used to launch EC2 instances in any AWS region

C. AMIs are only available in the region in which they were created. Even the AWS-provided AMIs have been copied by AWS into each region on your behalf. You cannot access an AMI from one region in another region; however, you can copy an AMI from one region to another.

Reference: https://aws.amazon.com/amazon-linux-ami/

Top

Q18: Which of the following statements is true about the Elastic File System (EFS)?

  • A. EFS can scale out to meet capacity requirements and scale back down when no longer needed
  • B. EFS can be used by multiple EC2 instances simultaneously
  • C. EFS cannot be used by an instance using EBS
  • D. EFS can be configured on an instance before launch just like an IAM role or EBS volumes

A. and B.

Reference: https://aws.amazon.com/efs/

Top

Q19: IAM Policies, at a minimum, contain what elements?

  • A. ID
  • B. Effects
  • C. Resources
  • D. Sid
  • E. Principle
  • F. Actions

B. C. and F.

Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

Resource – Specify a list of resources to which the actions apply.

Action – Include a list of actions that the policy allows or denies.

Id and Sid are optional, not required, fields in IAM policies.

Reference: AWS IAM Access Policies
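
To make the required elements concrete, here is a minimal identity-based policy built as a Python dict (the bucket name is a placeholder): Effect, Action, and Resource are present in the statement, while the optional Sid and Id are omitted:

```python
import json

# Minimal IAM policy: only Effect, Action, and Resource are required
# in each statement (Version is strongly recommended). The bucket name
# is a placeholder for this example.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],
        }
    ],
}

statement = policy["Statement"][0]
required = {"Effect", "Action", "Resource"}
assert required <= statement.keys()   # all required elements are present
print(json.dumps(policy, indent=2))
```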

Top

Q20: What are the main benefits of IAM groups?

  • A. The ability to create custom permission policies.
  • B. Assigning IAM permission policies to more than one user at a time.
  • C. Easier user/policy management.
  • D. Allowing EC2 instances to gain access to S3.

B. and C.

A. is incorrect: creating custom permission policies is a benefit of IAM policies generally, not of groups. IAM groups don’t create policies; they have policies attached to them.

Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

Top

Q21: What are benefits of using AWS STS?

  • A. Grant access to AWS resources without having to create an IAM identity for them
  • B. Since credentials are temporary, you don’t have to rotate or revoke them
  • C. Temporary security credentials can be extended indefinitely
  • D. Temporary security credentials can be restricted to a specific region

A. and B. STS lets you grant trusted users or federated identities temporary access to AWS resources without creating an IAM identity for them, and because the credentials expire automatically there is nothing to rotate or revoke. C is false: temporary credentials cannot be extended indefinitely, only re-requested. D is false: temporary security credentials are not restricted to a specific region.
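
As a hedged sketch of how temporary credentials are requested: the dict below holds the parameters you would pass to STS AssumeRole via boto3’s sts_client.assume_role(**params). The role ARN and session name are placeholders, and DurationSeconds bounds the credentials’ lifetime:

```python
# Parameters for an STS AssumeRole call (boto3: sts.assume_role(**params)).
# The role ARN and session name are placeholders for this example.
params = {
    "RoleArn": "arn:aws:iam::123456789012:role/example-role",
    "RoleSessionName": "demo-session",
    "DurationSeconds": 3600,  # credentials expire; they cannot be extended
}

# No IAM user is created for the caller; the AssumeRole response would
# contain temporary AccessKeyId, SecretAccessKey, and SessionToken values.
print(params["DurationSeconds"])
```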

Top

Q22: What should the Developer enable on the DynamoDB table to optimize performance and minimize costs?

  • A. Amazon DynamoDB auto scaling
  • B. Amazon DynamoDB cross-region replication
  • C. Amazon DynamoDB Streams
  • D. Amazon DynamoDB Accelerator


D. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:

  1. As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

Reference: AWS DAX


Top

Q23: A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently the dev environment is running on a t1 micro instance. How can the Developer change the EC2 instance type to m4.large?

  • A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large.
  • B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation.
  • C. Change the instance type to m4.large in the configuration details page of the Create New Environment page.
  • D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

B. The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation.
During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest:

  • Settings applied directly to the environment – Settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs. The AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden.
  • Saved configurations – Settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified.
  • Configuration files (.ebextensions) – Settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle. Configuration files are executed in alphabetical order; for example, .ebextensions/01run.config is executed before .ebextensions/02do.config.
  • Default values – If a configuration option has a default value, it only applies when the option is not set at any of the above levels.

If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or directly to the environment, it is stored as part of the environment’s configuration and can be removed with the AWS CLI or the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle.

Reference: Managing ec2 features – Elastic beanstalk
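
For the configuration-file route, a hypothetical .ebextensions file that pins the instance type might look like the following (the aws:autoscaling:launchconfiguration namespace is where Elastic Beanstalk documents the InstanceType option; the filename is an example):

```yaml
# .ebextensions/01-instance.config — hypothetical example
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```

Note that, per the precedence order, a saved configuration (answer B) would override any value set in this file.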

Q24: What statements are true about Availability Zones (AZs) and Regions?

  • A. There is only one AZ in each AWS Region
  • B. AZs are geographically separated inside a region to help protect against natural disasters affecting more than one at a time.
  • C. AZs can be moved between AWS Regions based on your needs
  • D. There are (almost always) two or more AZs in each AWS Region

B and D.

Reference: AWS global infrastructure/

Top

Q25: An AWS Region contains:

  • A. Edge Locations
  • B. Data Centers
  • C. AWS Services
  • D. Availability Zones


B. C. D. Edge locations are actually distinct locations that don’t explicitly fall within AWS regions.

Reference: AWS Global Infrastructure


Top

Q26: Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

  • A. Eventual Consistent Reads
  • B. Conditional reads for Consistency
  • C. Strongly Consistent Reads
  • D. Not possible


C. The AWS documentation states this clearly with regard to DynamoDB read consistency: only with strongly consistent reads are you guaranteed to get the latest value after all prior successful writes have completed.

Reference: https://aws.amazon.com/dynamodb/faqs/
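
With boto3, the choice between eventually and strongly consistent reads is a single flag on the read call. The sketch below only builds the request parameters (the table and key names are made up); you would pass them as dynamodb_client.get_item(**params):

```python
# Request parameters for a strongly consistent GetItem
# (boto3: dynamodb.get_item(**params)). Table and key names are illustrative.
params = {
    "TableName": "CameraMetadata",
    "Key": {"CameraId": {"S": "cam-0042"}},
    "ConsistentRead": True,  # guarantees the latest successfully written value
}

# Omitting ConsistentRead (it defaults to False) gives an eventually
# consistent read instead, at half the read-capacity cost.
print(params["ConsistentRead"])
```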


Top

Q27: You’ve been asked to move an existing development environment to the AWS Cloud. This environment consists mainly of Docker-based containers. You need to ensure that minimum effort is required during the migration process. Which of the following steps would you consider for this requirement?

  • A. Create an Opswork stack and deploy the Docker containers
  • B. Create an application and Environment for the Docker containers in the Elastic Beanstalk service
  • C. Create an EC2 Instance. Install Docker and deploy the necessary containers.
  • D. Create an EC2 Instance. Install Docker and deploy the necessary containers. Add an Autoscaling Group for scalability of the containers.


B. Elastic Beanstalk is the ideal service for quickly provisioning development environments, and it can create environments that host Docker-based containers.

Reference: Create and Deploy Docker in AWS


Top

Q28: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


C. All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
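
The arithmetic behind splitting such uploads: multipart parts must be at least 5 MB (except the last), and each part can be uploaded in parallel. A small sketch of the part-count calculation (the part size and object sizes are examples chosen from the question’s 200–500 MB range):

```python
import math

MB = 1024 * 1024
PART_SIZE = 50 * MB          # example part size; the S3 minimum is 5 MB

def part_count(size, part_size=PART_SIZE):
    """Number of multipart-upload parts needed for an object of `size` bytes."""
    return math.ceil(size / part_size)

print(part_count(500 * MB))  # 10 parts for a 500 MB object
print(part_count(220 * MB))  # 5 parts for a 220 MB object
```

Uploading those parts concurrently is what gives the performance win over a single PUT.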


Top

Q29: A security system monitors 600 cameras, saving image metadata to an Amazon DynamoDB table every minute. Each sample is 1 KB of data, and the writes are evenly distributed over time. How much write throughput is required for the target table?

  • A. 6000
  • B. 10
  • C. 3600
  • D. 600

B. Write capacity for a DynamoDB table is expressed as the number of 1 KB writes per second. Since each of the 600 cameras writes once per minute, divide 600 by 60 to get the number of 1 KB writes per second: 10.

You can specify the write capacity in the Capacity tab of the DynamoDB table.

Reference: AWS working with tables
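
The throughput calculation can be checked directly (the numbers come from the question; one write capacity unit covers one write per second for an item up to 1 KB):

```python
cameras = 600            # one metadata write per camera per minute
writes_per_minute = cameras * 1
item_size_kb = 1         # each item is 1 KB, exactly one WCU's unit size

# One WCU = one 1 KB write per second, so convert per-minute writes
# to per-second writes.
wcu = writes_per_minute * item_size_kb / 60
print(wcu)  # 10.0
```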

Q30: What two arguments does a Python Lambda handler function require?

  • A. invocation, zone
  • B. event, zone
  • C. invocation, context
  • D. event, context


D. event, context

def handler_name(event, context):
    return some_value

Reference: AWS Lambda Function Handler in Python
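
A complete, runnable handler following that signature (the event field names are invented for this example; in Lambda, context carries invocation metadata such as the request ID, and may be None in a local smoke test):

```python
# Minimal Lambda-style handler: Lambda invokes it with the invocation
# event (a dict for JSON payloads) and a context object. The "name"
# field in the event is invented for this example.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally you can invoke it directly; context may be None for a smoke test.
print(handler({"name": "cloud"}, None))  # {'statusCode': 200, 'body': 'Hello, cloud!'}
```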

Top

Q31: Lambda allows you to upload code and dependencies for function packages:

  • A. Only from a directly uploaded zip file
  • B. Only via SFTP
  • C. Only from a zip file in AWS S3
  • D. From a zip file in AWS S3 or uploaded directly from elsewhere


D. From a zip file in AWS S3 or uploaded directly from elsewhere
Reference: AWS Lambda Deployment Package

Top

Q32: A Lambda deployment package contains:

  • A. Function code, libraries, and runtime binaries
  • B. Only function code
  • C. Function code and libraries not included within the runtime
  • D. Only libraries not included within the runtime


C. Function code and libraries not included within the runtime
Reference: AWS Lambda Deployment Package in PowerShell

Top

Q33: You have instances inside private subnets and a properly configured bastion host instance in a public subnet. None of the instances in the private subnets have a public or Elastic IP address. How can you connect an instance in the private subnet to the open internet to download system updates?

  • A. Create and assign EIP to each instance
  • B. Create and attach a second IGW to the VPC.
  • C. Create and utilize a NAT Gateway
  • D. Connect to a VPN


C. You can use a network address translation (NAT) gateway in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the instances from receiving inbound traffic initiated by someone on the Internet.
Reference: AWS Network Address Translation Gateway

Top

Q34: What feature of VPC networking should you utilize if you want to create “elasticity