AWS Azure Google Cloud Certifications Testimonials and Dumps

Register to AI Driven Cloud Cert Prep Dumps

Do you want to become a professional DevOps engineer, a cloud solutions architect, a cloud engineer, a modern developer or IT professional, a versatile product manager, or a hip project manager? Cloud skills and certifications may be just the thing you need to move into the cloud, or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

2022 AWS Cloud Practitioner Exam Preparation

In this blog, we share AWS, Azure, and GCP cloud certification testimonials, along with frequently asked questions and answers.

#djamgatech #aws #azure #gcp #ccp #az900 #saac02 #saac03 #az104 #azai #dasc01 #mlsc01 #scsc01 #azurefundamentals #awscloudpractitioner #solutionsarchitect #datascience #machinelearning #azuredevops #awsdevops #az305 #ai900


Get it on Apple Books

  • AZ-900 Question
    by /u/bananeaed (Microsoft Azure Certifications) on June 29, 2022 at 6:57 pm

    Just wanted to confirm whether my explanations for these questions are correct: since PaaS means the cloud vendor manages the infrastructure and platform (including the OS), this statement is false; you (the customer) manage only the application layer. Azure follows a pay-as-you-go model, so there are no dynamic pricing tiers. Not sure about the last question, but I'm guessing Azure doesn't provide elasticity for PaaS solutions. https://preview.redd.it/2wz974r9xl891.png?width=710&format=png&auto=webp&s=df91e201a901beac9be8b056138bc5e2417bd136
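    The reasoning in the question above hinges on the cloud shared-responsibility model. As a rough, illustrative memory aid (not an official Microsoft responsibility matrix; the exact split varies by service), it can be sketched like this:

    ```python
    # Simplified sketch of who manages which layer under each cloud service model.
    # Layer names and the exact split are illustrative assumptions for study purposes.
    RESPONSIBILITY = {
        "IaaS": {"customer": ["os", "runtime", "apps", "data"],
                 "vendor":   ["hardware", "network", "virtualization"]},
        "PaaS": {"customer": ["apps", "data"],
                 "vendor":   ["hardware", "network", "virtualization", "os", "runtime"]},
        "SaaS": {"customer": ["data"],
                 "vendor":   ["hardware", "network", "virtualization", "os", "runtime", "apps"]},
    }

    def manages_os(model: str) -> str:
        """Return who manages the operating system under a given service model."""
        return "customer" if "os" in RESPONSIBILITY[model]["customer"] else "vendor"

    print(manages_os("PaaS"))  # vendor: so "the customer manages the OS" is false for PaaS
    print(manages_os("IaaS"))  # customer: under IaaS the customer does manage the OS
    ```

    This matches the poster's conclusion: under PaaS the OS sits on the vendor's side of the line, so any statement claiming the customer manages it is false.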

  • What exam/certification should I look at taking next?
    by /u/king5055 (Microsoft Azure Certifications) on June 29, 2022 at 11:25 am

    You may have seen my post a few weeks ago about passing the AZ-104 exam; now I'm looking at which exam to take next. For a bit of background, I'm currently a technical analyst (essentially technical support) and have a solid grounding in IT/computer science from secondary school, college, and about 3 years in my current job. I'm looking to move into DevOps, and work has kindly agreed to help me with this: for now I'll continue in my current role, but I'll start spending one day a week working in a DevOps role. If it's of any relevance, I'm starting off by learning Docker. The current plan is then to look at learning YAML, then Terraform, and then Go (all over a period of weeks/months; my main focus is on Docker/containers for now). While slightly off topic, if anyone can recommend resources or courses for these as well, it would be much appreciated. I did look at AZ-400 on Microsoft Learn, but it seems like a bit too much for now, and I really need to get some experience before taking it. Any suggestions for which exam/certification I should take next (along with the courses/resources you used) would be much appreciated.

  • AZ-104 passed! Is AZ-305 easier?
    by /u/ericjansen88 (Microsoft Azure Certifications) on June 29, 2022 at 7:55 am

    Hi guys, yesterday I passed AZ-104 with a nice score of 903. Now, on to AZ-305. Am I right that the general consensus here on Reddit is that it is easier than the 104? Also: does this exam have labs? What is considered good learning material?

  • Passed AZ-900 Now
    by /u/farouk8001 (Microsoft Azure Certifications) on June 29, 2022 at 7:22 am

    Thanks to Marczak, John Savill, and the Reddit community too. The exam was easy; I scored 955.

  • Microsoft Learn replaces MCP Portal - guide to finding what you need
    by /u/PXPJC (Microsoft Azure Certifications) on June 29, 2022 at 7:20 am


  • [FREE - Official from Microsoft] If you have a student email you can get all these Azure fundamentals exams + MeasureUp practice tests FREE! AZ-900, DP-900, AI-900, SC-900, PL-900, MB-910, MB-920, MS-900
    by /u/Comatse (Microsoft Azure Certifications) on June 29, 2022 at 4:31 am

    https://docs.microsoft.com/en-us/learn/certifications/student-training-and-certification
    Free courses + practice tests + cram sessions:
    AZ-900: Microsoft Azure Fundamentals
    DP-900: Microsoft Azure Data Fundamentals
    AI-900: Microsoft Azure AI Fundamentals
    SC-900: Microsoft Security, Compliance, and Identity Fundamentals
    PL-900: Microsoft Power Platform Fundamentals
    MB-910: Microsoft Dynamics 365 Fundamentals (CRM)
    MB-920: Microsoft Dynamics 365 Fundamentals (ERP)
    MS-900: Microsoft 365 Fundamentals
    I enrolled in AZ-900 hosted by the University of Calgary, and it looks like the university updated the way they host exams: they provided me with the above link, which gives you MeasureUp practice exams plus the official exam for many Azure fundamentals courses. I thought I'd share.

  • Coursera, LinkedIn Learning courses, cram videos, and practice exams for FREE for many Azure certifications
    by /u/cpnfantstk (Microsoft Azure Certifications) on June 29, 2022 at 3:00 am

    Haven't seen these study sources mentioned here. Coursera has great courses for the 900-series fundamentals exams, and you can audit them for FREE; I used Coursera to study for the AI-900. LinkedIn Learning is another source you can use for FREE with your library card, or via their free trial. I recommend Pete Zerger, 15x Microsoft MVP, who released an SC-900 course less than a month ago. Pete also has two excellent cram videos on YouTube for the AZ-900 and SC-900, plus a link to bonus practice exam questions (included here). This guy is fantastic; I used his cram video as part of my AZ-900 exam prep. Of course, John Savill is a must to wrap things up, along with the MS Learn path. Just giving you folks some more options; browse those sites for other Azure exams not linked here. Tip for Coursera: go through the link, scroll down, and audit each course in the specialization separately to access them all for FREE (click Enroll, then Audit, in each). Hope this was helpful. Here are the links: Pete Zerger AZ-900 Exam Cram 2021/2022; Pete Zerger Inside Cloud and Security AZ-900 Practice Exam; Pete Zerger SC-900 Exam Cram; Pete Zerger Inside Cloud and Security SC-900 Practice Exam; LinkedIn Learning SC-900 exam prep Part 1 (3 parts) by Pete Zerger; AZ-900 Specialization Exam Prep through Coursera; DP-900 Specialization Exam Prep through Coursera; AI-900 Specialization Exam Prep through Coursera.

  • Failed AZ-500 for the second time...
    by /u/Ruff9012 (Microsoft Azure Certifications) on June 29, 2022 at 1:57 am

    Hello everyone, I just finished my second attempt at AZ-500 and I completely bombed it: 560/1000. The first time I took the test I got 680/1000 and figured I'd just go back and study the areas I didn't do too well in: Manage security operations and Secure data and applications. Anyone have any study/material recommendations? I also hold SC-900, AZ-900, and AZ-104. I was thinking of taking some other SC certs before taking another crack at AZ-500. What do y'all think? Any advice would be great. Thanks everyone!

  • Cloud Engineer Associate
    by /u/187babe (Google Cloud Platform Certification) on June 29, 2022 at 1:34 am

    Can anyone recommend the best material for getting ready for the Google Associate Cloud Engineer exam on a tight schedule? (I've passed AWS CCP and AZ-900.)

  • Practice exams for AZ900
    by /u/kiwi833 (Microsoft Azure Certifications) on June 29, 2022 at 1:19 am

    Hi guys, quick one: are there any free practice exams available for AZ-900 that you would recommend? Thanks!

  • Passed AZ-900 today: sharing my personal experience and the resources I used to achieve it
    by /u/NeHeMueL (Microsoft Azure Certifications) on June 28, 2022 at 11:49 pm

    Hey folks, I'm a freshman software engineering student at ITLA, and recently I started my journey to the cloud with Azure. I just passed the AZ-900 for my Azure Fundamentals certification with a score of 820. Honestly, I think I could have scored higher, but I was kind of nervous, and I came into this with zero Azure and cloud experience. I started studying Azure and the cloud because I applied for a free 5-week course that my country's (Dominican Republic) government, in association with Microsoft, offered to students and professionals. Here is my experience for anyone wondering what to expect: - Synchronous meetings 3 days a week for 5 weeks; we used Microsoft Teams for that. - I took notes on anything that wasn't clear to me, so I could research it on my own. - I also did a Platzi “Introduction to the cloud with Azure” course (only because I have a subscription there; it helped me a lot with the topics that weren't clear to me). - I used the MS Learn path for Azure Fundamentals. I did every single unit twice, the first time in English and the second in Spanish. - I found some GitHub labs to practice with, which offer more labs than the MS Learn path. - I prepared for the exam with the MeasureUp Microsoft Official Practice Test AZ-900: Microsoft Azure Fundamentals. From my personal experience, my recommendations for anyone currently working toward their AZ-900 certification are: do the MS Learn path for Azure Fundamentals; practice with labs, so you understand much better what you're learning on MS Learn; and finally, prepare for the exam with MeasureUp. I think this can make a difference when you take the certification exam, TBH. NOTE: I personally used MeasureUp because it was free for me thanks to the government course I did, but I believe there are more options and resources available out there in all formats, and, much more important, free!

  • How to enable Azure Cosmos DB Data Discovery using Microsoft Purview
    by /u/balramprasad (Microsoft Azure Certifications) on June 28, 2022 at 10:32 pm

    submitted by /u/balramprasad [link] [comments]

  • AZ-900, SC-900, DP-900, and AI-900 in 4 weeks!
    by /u/MNightmare13 (Microsoft Azure Certifications) on June 28, 2022 at 8:44 pm

    I know they are all foundational, but I'm stoked nevertheless! I attended the Virtual Training Days and used John Savill's cram videos. Thank you, John, for your material! Now on to AZ-104! I am thinking about using the learning path and Savill's videos. Any recommendations from the community would be appreciated. Thanks in advance!

  • This is confusing.
    by /u/Psycstacy (Microsoft Azure Certifications) on June 28, 2022 at 7:17 pm


  • Passed SC-900 with an 832
    by /u/Witty_Maintenance336 (Microsoft Azure Certifications) on June 28, 2022 at 6:32 pm

    I passed SC-900 after a week of studying. I went through MS Learn, the MS Virtual Training Day, John Savill's exam cram, and Inside Cloud and Security's exam cram (both on YouTube). This was my first MS cert, and it really wasn't too bad. MS Learn actually was helpful; the reading only dragged for a couple of modules. The exam crams were really helpful, definitely recommend; I played them at 1.5x speed while doing chores around the house. Next up is AZ-900, and then most likely SC-300.

  • Need your help and Advice - Endpoint security role
    by /u/Krish03101991 (Microsoft Azure Certifications) on June 28, 2022 at 5:48 pm

    Hi all, I follow all the cybersecurity posts and advice shared in this group, but I want to share my own journey here because I am not sure how to proceed in my career. Everyone here suggests completing CompTIA A+, Network+, Security+, PenTest+, OSCP, CISSP, and so on, and then taking an entry-level SOC job to break into cybersecurity, but my case is entirely different. Here is my job summary: I have a total of 8 years of IT industry experience. I worked as a Windows administrator for the first 4.5 years, creating, configuring, and administering Active Directory, DNS, DHCP, Group Policy, file servers, SCCM, etc. Then I shifted to an infosec analyst role, providing Level 1 and admin support for multiple security tools such as Symantec DLP, Tanium, Cylance, OSSEC, BitLocker, and RSA NetWitness ECAT; basically, I worked as an endpoint security analyst for 2.5 years. For the past year-plus I have been working as a Tanium administrator.
    I have gone through various information security job descriptions, and they ask for a minimum of 4-6 years of experience in GRC, penetration testing, vulnerability management, network security, or incident response, which I cannot apply for because I have zero experience in those areas. I am not sure that even if I put effort into certifications like PenTest+, CySA+, Network+, CISA, etc., I would get a real job. Real-world scenario: apart from experience with these tools, I mention in my resume that I have knowledge of Microsoft Defender, M365, Compliance Center, Azure Information Protection, and Microsoft Information Protection, as I completed my SC-400 certification. I get 2-3 calls for MIP/AIP openings, and when I explain that I have hands-on experience with DLP (Symantec DLP) but only knowledge of MIP/AIP, they politely respond: "We are seeking a person with hands-on experience in MIP/AIP." Even after I completed the certification, they are not even willing to test my knowledge. That is the bitter truth. If I face this kind of gatekeeping for AIP/MIP (even after working in DLP), think about penetration testing, incident response, malware analysis, and the like: even if I certify in those areas, they will ask for hands-on experience, and without it they will reject my profile without considering the effort, time, and sleepless nights I spent on the certification.
    Now, straight to the point: 1) I am currently stuck as a Tanium administrator. What are my career options, and how can I move on given my experience? 2) Leaving Tanium aside, consider someone working as an administrator of CrowdStrike Falcon, Trend Micro, Symantec Endpoint Protection, ESET, or Carbon Black. What is his future over the next 10 years if any of these tools becomes extinct and new, more advanced security tools evolve? It is like vendor lock-in: if he has administered Falcon for 8 years and, due to the product's poor performance, no company buys it anymore, he cannot easily move to another EDR or XDR, because companies seek professionals experienced in their specific tool and won't consider his profile. So, if there are any experienced professionals in this group, kindly help me out. Should I continue my career as a product specialist, or is it better to switch to another domain and avoid vendor lock-in, and if so, how? Please don't treat this as some random post; it should be helpful for anyone who works as an administrator for any vendor or in an "endpoint security" analyst role.

  • Is it correct that you can use a Basic load balancer from another subnet within the same VNet?
    by /u/Psycstacy (Microsoft Azure Certifications) on June 28, 2022 at 5:07 pm

    A practice question states that two VMs in an availability set within Subnet A of VNet1 can be load balanced by a Basic load balancer that is in Subnet B of VNet1. Don't the separate subnets, and thus separate address spaces, prevent that? TIA
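    For what it's worth, subnets within a single VNet route to each other by default, so a load balancer frontend in one subnet can serve backend VMs in another subnet of the same VNet. A minimal Azure CLI sketch of the scenario (resource group, resource names, and address ranges are made up for illustration):

    ```shell
    # Create Vnet1 with SubnetA (for the backend VMs) and SubnetB (for the LB frontend).
    az network vnet create --resource-group demo-rg --name Vnet1 \
        --address-prefixes 10.0.0.0/16 \
        --subnet-name SubnetA --subnet-prefixes 10.0.1.0/24
    az network vnet subnet create --resource-group demo-rg --vnet-name Vnet1 \
        --name SubnetB --address-prefixes 10.0.2.0/24

    # Internal Basic load balancer with its frontend IP in SubnetB.
    az network lb create --resource-group demo-rg --name demo-lb --sku Basic \
        --vnet-name Vnet1 --subnet SubnetB \
        --frontend-ip-name demo-fe --backend-pool-name demo-pool

    # Each backend VM's NIC (sitting in SubnetA) is then added to the pool, e.g.:
    az network nic ip-config address-pool add --resource-group demo-rg \
        --nic-name vm1-nic --ip-config-name ipconfig1 \
        --lb-name demo-lb --address-pool demo-pool
    ```

    The Basic SKU does require the backends to share an availability set (or scale set) and the same VNet, which the question's setup satisfies; the subnet boundary itself is not the blocker.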

  • Proof I have Been Studying Way too Much
    by /u/BigTexJr (Microsoft Azure Certifications) on June 28, 2022 at 1:29 pm

    I have been studying religiously for my AZ-104 for about a month. This morning I woke up, looked at my Amazon Echo and said "Azure, turn on the lights." It took me a minute to figure out I should have said "Alexa".

  • Created a brand new, updated version of the Microsoft 365 Fundamentals, MS-900 Study Cram.
    by /u/JohnSavill (Microsoft Azure Certifications) on June 28, 2022 at 12:42 pm


  • Microsoft AZ-800 (Administering Windows Server Hybrid Core Infrastructure) Course
    by /u/Leather_Donut_7431 (Microsoft Azure Certifications) on June 28, 2022 at 12:20 pm


  • New certification layout
    by /u/Apart-Patience (Microsoft Azure Certifications) on June 28, 2022 at 11:12 am

    Hi there, last month I passed my first exam and received the certificate in a nice printable layout. Today I passed another exam, but the layout of the certificate is not good at all. Can I still print it in the old layout, like this: https://preview.redd.it/95agkuazhc891.png?width=768&format=png&auto=webp&s=30286ed673dc4be1b6f834a0f65adfd8c9cedc24

  • Practice Test for Azure 104
    by /u/unixhyde (Microsoft Azure Certifications) on June 28, 2022 at 5:12 am

    Hi all, what's the best practice exam for preparing for AZ-104? Thanks!

  • The Microsoft Certification Dashboard Now Redirects...
    by /u/notapplemaxwindows (Microsoft Azure Certifications) on June 28, 2022 at 4:33 am

    The certification dashboard now redirects to your Microsoft Learn profile! It seems to work from my testing, but I cannot easily view my pending beta exams without going directly to the Pearson VUE website. It can be accessed directly from Microsoft Learn now: https://docs.microsoft.com/en-gb/learn/

  • AZ-900 + SC-900 + DP-900 + AZ-104 in one month
    by /u/kret111 (Microsoft Azure Certifications) on June 27, 2022 at 10:03 pm

    Hi. I just wanted to celebrate: I passed 4 exams in one month. Of course, AZ-104 was the hardest (only 729 points). I spent ~60 hours in total on learning. I'm aiming now for AZ-305. After that I have planned one of the security associate certs, but I don't know which one: Microsoft Certified: Security Operations Analyst Associate (SC-200), Microsoft Certified: Identity and Access Administrator Associate (SC-300), or Microsoft Certified: Azure Security Engineer Associate (AZ-500). Any suggestions?

  • Passed AZ-900 - What to expect 2022
    by /u/Lazy_Organization899 (Microsoft Azure Certifications) on June 27, 2022 at 8:12 pm

    Just passed the AZ-900 for my Azure Fundamentals certification. I came into this with zero Azure experience. I was familiar with Azure AD identities, as I manage an organizational M365 tenant, but I knew nothing outside of Azure AD. I thought I'd summarize my experience for anyone wondering what to expect. I started studying for AZ-104 and put in a solid 2 weeks, then decided to switch to AZ-900 as a stepping stone to Azure Administrator. Because I had been studying for AZ-104, I didn't really study for AZ-900 at all; I did take a few practice tests online, though. My version of the AZ-900 exam was 33 questions, and they gave me 65 minutes to complete it. The questions were multiple choice, multi-select multiple choice, and drag-and-drop. I did not have any case study questions on my exam. All the questions were basic; nothing on the exam was difficult at all! If anything, I was over-prepared. I scored 900/1000.

  • Passed AZ-204 today with no prior practical experience in Azure after 4 weeks of studying
    by /u/meijinraw (Microsoft Azure Certifications) on June 27, 2022 at 5:30 pm

    Hi everyone, I passed my AZ-204 a few hours ago. For everyone interested, here are a few of my (of course NDA-friendly) takeaways. In my case the MeasureUp practice tests were not even close to the difficulty of the real exam, so I have mixed feelings about them: they are nice for finding weaknesses in basic concepts, but they lack a certain level of detail and difficulty compared to the exam, and I don't know if they were worth the €130 (including taxes) in my case. My prior experience: zero experience with Azure itself, and 3 years of professional experience as a full-stack developer. My study material consisted of the Cloud Academy AZ-204 course, Alan Rodrigues's Udemy course, and working through all the MeasureUp practice questions. What would I do differently if I retook the exam? I would study the docs in much more detail instead of watching so much course material, and I would recommend following Thomas Maurer's guide; I saw it too late to study all the material there, but it was exactly what I was missing. Maybe this helps some of you in your own preparation; if you have any questions, I'll happily try to answer them here. Good luck on your own AZ-204 exam if you take it! 🙂

  • Earn Google Cloud swag when you complete the #LearnToEarn challenge
    by (Training & Certifications) on June 27, 2022 at 4:00 pm

    The MLOps market is expected to grow to around $700M by 2025 [1]. With the Google Cloud Professional Data Engineer certification topping the list of highest-paying IT certifications in 2021 [2], there has never been a better time to grow your data and ML skills with Google Cloud.
    Introducing the Google Cloud #LearnToEarn challenge. Starting today, you're invited to join the data and ML #LearnToEarn challenge, a high-intensity workout for your brain. Get the ML, data, and AI skills you need to drive speedy transformation in your current and future roles with no-cost access to over 50 hands-on labs on Google Cloud Skills Boost. Race the clock with players around the world, collect badges, and earn special swag!
    How do you complete the #LearnToEarn challenge? The challenge begins with a core data analyst learning track. Then each week you'll get new tracks designed to help you explore a variety of career paths and skill sets. Keep an eye out for trivia and flash challenges too! As you progress through the challenge and collect badges, you'll qualify for rewards at each step of your journey. But time and supplies are limited, so join today and complete the challenge by July 19!
    What's involved in the challenge? Labs range from introductory to expert level. You'll get hands-on experience with cutting-edge tech like Vertex AI and Looker, plus data differentiators like BigQuery, TensorFlow, integrations with Workspace, and AutoML Vision. The challenge starts with the basics, then gets gradually more complex as you reach each milestone. One lab takes anywhere from ten minutes to about an hour to complete. You do not have to finish all the labs at once, but do keep an eye on start and end dates. Ready to take on the challenge? Join the #LearnToEarn challenge today!
    [1] IDC, Market Analysis Perspective: Worldwide AI Life-Cycle Software, September 2021. [2] Skillsoft Global Knowledge, 15 top-paying IT certifications list 2021, August 2021.

  • New courses and updates from AWS Training and Certification in June 2022
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on June 23, 2022 at 4:29 pm

    Check out the latest courses and offerings from AWS Training and Certification, including courses on managing containers, serverless solutions, hybrid storage solutions, large-scale workloads, AWS configurations, machine learning fundamentals, AWS billing & cost management, and common cloud workload use cases for the financial services industry.

  • General availability: Edge Secured-Core for Windows IoT
    by Azure service updates on June 22, 2022 at 4:00 pm

    Edge Secured-Core is a certification program that extends the Secured-Core label into IoT and Edge devices.

  • The timing’s right for recent graduates to develop cloud skills
    by Kevin Kelly (AWS Training and Certification Blog) on June 21, 2022 at 6:26 pm

    Editor’s note: This post is a letter to recent graduates from Kevin Kelly, the director of Cloud Career Training Programs at Amazon Web Services (AWS). He shares his cloud education and training philosophy and how it will continue to impact our daily lives. He includes advice on cloud learning for graduates to consider while exploring

  • AWS re/Start program provides cloud education to refugees
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on June 20, 2022 at 4:13 pm

    AWS re/Start is proud to announce the launch of a new cohort in Amsterdam. On World Refugee Day, AWS, Accenture, and Refugee Talent Hub are joining forces to help refugees in The Netherlands reskill into cloud computing careers.

  • New Twitch Series – AWS Cloud Quest: Cloud Practitioner launches June 22
    by Lauren Cutlip (AWS Training and Certification Blog) on June 20, 2022 at 4:01 pm

    Interested in cloud computing but looking for a fun, interactive, and informal learning option? Join us for the free, six-episode Twitch Series, AWS Cloud Quest: Cloud Practitioner to learn Amazon Web Services (AWS) Cloud concepts in a live gaming environment.

  • Google helps Indonesia advance education on cloud, machine learning, and mobile development through Bangkit academy
    by (Training & Certifications) on June 16, 2022 at 4:00 pm

    Indonesia is leading the way for digital transformation in Southeast Asia. According to Google's e-Conomy South East Asia report, the country's 2030 Gross Merchandise Value (the value of online retailing to consumers) could be twice the value of the whole of Southeast Asia today. This growth means that many companies need more qualified IT graduates and employees with digital skills than they have today. According to the World Bank, Indonesia needs an additional nine million people with digital skills by 2030. The shortage of technical talent reiterates the need to invest in a reliable skills pipeline.
    Following years of digital talent development in Indonesia, Google has become a supporter of Bangkit, an academy designed to produce high-caliber technical talent for Indonesian technology companies and startups. Bangkit has facilitated a multi-stakeholder collaboration between Google, government, industry, and universities across Indonesia. Last year, the President of Indonesia and the Ministry of Education and Culture, Research, and Technology acknowledged Bangkit's significant impact, with 3,000 students completing nearly 15,000 courses and specializations.
    Building on last year's success, Bangkit started its 2022 program in February, offering three learning paths to students:
    - Cloud computing with Google Cloud, preparing students for the Google Associate Cloud Engineer certification. Some of the course components are also available online.
    - Mobile development with Android, preparing students for the Google Associate Android Developer exam. An online version is available.
    - Machine learning with TensorFlow, getting students ready to take the TensorFlow Developer certification. Some of the online courses are available for others.
    Bangkit 2022 has enrolled 3,100 university students who will take a five-month study course, obtaining university study credit as well as industry certifications. The program accepts diverse cohorts of people who are passionate about preparing for a tech career in the near future, with support and encouragement for women, people with disabilities, and students from across Indonesia to apply.
    Since its pilot in 2019, Bangkit has been guided by three principles:
    - Industry-led: curriculum and instructors come from industry experts, including Google, GoTo, and Traveloka. Instructors include key figures such as Laurence Moroney (Google, Lead AI Advocate), Google Developer Experts, and other committed professionals.
    - Immersive: combines online learning methods conducted in both individual and group settings.
    - Interdisciplinary: covers knowledge and best practices in tech, soft skills, and English to provide complete career readiness.
    The program runs from February to July 2022 and has a 900-hour curriculum across the 18-week learning experience. Benefits for students participating in Bangkit include: study credit conversion; job opportunities at the career fair; Google Cloud, TensorFlow, and AAD exam vouchers; and incubation funds and mentorship support from industry. Toward the end of Bangkit 2022, students will team up for the Capstone Project challenge to propose solutions to some of the nation's most pressing problems, such as environmentalism, accessibility, and more. The top 15 teams will be selected to receive funding to incubate their capstone projects. These education and career-preparedness offerings are provided at no cost.
    Google is partnering with industry, governments, universities, and employers to help meet the skill demands of today. From supporting the State of Ohio in offering tech skills to residents, to working with the University of Minnesota Rochester to create a customized health sciences degree program, Google is here to help our partners prepare those they serve for a cloud-first world.

  • Steps to start your AWS Certification journey
    by Siddharth Pasumarthy (AWS Training and Certification Blog) on June 15, 2022 at 5:31 pm

    Are you contemplating pursuing an AWS Certification? Learn about the different levels of certification and how to prepare with the training resources available from AWS.

  • Unveiling the 2021 Google Cloud Partner of the Year Award Winners
    by (Training & Certifications) on June 14, 2022 at 3:50 pm

    It's time to celebrate! Join us in congratulating the 2021 Google Cloud Partner of the Year Award winners. As cloud computing and emerging technologies improve how we connect, share information, and conduct business, these partners helped customers turn challenges into opportunities. We're proud to work alongside our partners and support customers as they innovate their businesses and accelerate their digital transformations. Congratulations to these winners for their creative spirit, collaborative drive, and customer-first approach; we are proud to recognize you and to call you our partners! We're proud, grateful, and, above all, excited for what's next. As our network of partners continues to grow, we invite you to learn more about the Google Cloud Partner Advantage Program and how you can get involved by visiting our partner page. Related article: Celebrating the winners of the 2021 Google Cloud Customer Awards, in which customers won Google Cloud Awards for innovation, excellence, and transformation during another exciting year in the cloud.

  • Google Cloud supports higher education with Cloud Digital Leader program
    by (Training & Certifications) on June 8, 2022 at 4:00 pm

    College and university faculty can now easily teach cloud literacy and digital transformation with the Cloud Digital Leader track, part of the Google Cloud career readiness program. The new track is available for eligible faculty who are preparing their students for a cloud-first workforce. As part of the track, students will build their cloud literacy and learn the value of Google Cloud in driving digital transformation, while also preparing for the Cloud Digital Leader certification exam. Apply today!
    Cloud Digital Leader career readiness track: the track is designed to equip eligible faculty with the resources needed to prepare their students for the Cloud Digital Leader certification. This Google Cloud certification requires no previous cloud computing knowledge or hands-on experience. The training path enables students to build cloud literacy and learn how to evaluate the capabilities of Google Cloud in preparation for future job roles.
    The curriculum: faculty members can access this curriculum as part of the Google Cloud Career Readiness program. Faculty from eligible institutions can apply to lead students through the no-cost program, which provides access to the four-course on-demand training, hands-on practice to supplement the learning, and additional exam prep resources. Students who complete the entire program are eligible to apply for a certification exam discount. The Cloud Digital Leader track is the third program available for classroom use, joining the Associate Cloud Engineer and Data Analyst tracks.
    Cloud resources for your classroom: ready to get started? Apply today to access the Cloud Digital Leader career readiness track for your classroom, and read the eligibility criteria for faculty. You can preview the course content at no cost.

  • AWS Training now available to FutureLearn’s diverse learner community
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on June 7, 2022 at 4:42 pm

    Our newest AWS Training Partner, FutureLearn, now offers two foundational courses to their diverse community of learners to take their first step toward building cloud knowledge - no prior experience necessary . . .

  • Wanna learn Cloud & Devops?
    by /u/ahmedtm1 (Google Cloud Platform Certification) on June 5, 2022 at 10:41 am

    I have created a repo that includes books and important notes related to GCP, Azure, AWS, Docker, K8s, and DevOps, plus exam and interview prep notes. Keep learning, and please share. Also, feel free to contribute. Repo link: https://github.com/ahmedtariq01/Cloud-DevOps-Learning-Resources submitted by /u/ahmedtm1 [link] [comments]

  • Instructor led training for google cloud professional solution architect certification exam
    by /u/asolanki1991 (Google Cloud Platform Certification) on June 2, 2022 at 10:51 am

    Please advise on the best instructor-led training for the Google Cloud Professional Cloud Architect certification exam. I don't want to just pass the exam; I want the practical knowledge that is required in the industry. submitted by /u/asolanki1991 [link] [comments]

  • Would completing this path be enough for GCP ML Engineer Certification?
    by /u/FlanTricky8908 (Google Cloud Platform Certification) on May 29, 2022 at 7:01 am

    I am going through this learning path offered by Google itself: https://cloud.google.com/training/machinelearning-ai/#data-scientist-learning-path Does anyone have experience with it? Will I need to study anything else before I can confidently take ML Engineer exam? submitted by /u/FlanTricky8908 [link] [comments]

  • Why IT leaders choose Google Cloud certification for their teams
    by (Training & Certifications) on May 27, 2022 at 4:00 pm

    As organizations worldwide move to the cloud, it’s become increasingly crucial to provide teams with confidence and the right skills to get the most out of cloud technology. With demand for cloud expertise exceeding the supply of talent, many businesses are looking for new, cost-effective ways to keep up. When ongoing skills gaps stifle productivity, it can cost you money. In Global Knowledge’s 2021 report, 42% of IT decision-makers reported having “difficulty meeting quality objectives” as a result of skills gaps, and, in an IDC survey cited in the same Global Knowledge report, roughly 60% of organizations described a lack of skills as a cause for lost revenue. In today’s fast-paced environment, businesses with cloud knowledge are in a stronger position to achieve more. So what more could you be doing to develop and showcase cloud expertise in your organization? Google Cloud certification helps validate your teams’ technical capabilities, while demonstrating your organization’s commitment to the fast pace of the cloud. “What certification offers that experience doesn’t is peace of mind. I’m not only talking about self-confidence, but also for our customers. Having us certified, working on their projects, really gives them peace of mind that they’re working with a partner who knows what they’re doing,” says Niels Buekers, managing director at Fourcast BVBA. Why get your team Google Cloud certified? When you invest in cloud, you also want to invest in your people. Google Cloud certification equips your teams with the skills they need to fulfill your growing business. Speed up technology implementation: organizations want to speed up transformation and make the most of their cloud investment. Nearly 70% of partner organizations recognize that certifications speed up technology implementation and lead to greater staff productivity, according to a May 2021 IDC Software Partner Survey.
    The same report also found that 85% of partner IT consultants agree that “certification represents validation of extensive product and process knowledge.” Improve client satisfaction and success: getting your teams certified can be the first step to improving client satisfaction and success. Research of more than 600 IT consultants and resellers in a September 2021 IDC study found that “fully certified teams met 95% of their clients’ objectives, compared to a 36% lower average net promoter score for partially certified teams.” Motivate your team and retain talent: in today’s age of the ongoing Great Resignation, IT leaders are rightly concerned about employee attrition, which can result in stalled projects, unmet business objectives, and new or overextended team members needing time to ramp up. In other words, attrition hurts. But when IT leaders invest in skills development for their teams, talent tends to stick around. According to a business value paper from IDC, comprehensive training leads to 133% greater employee retention compared to untrained teams. When organizations help people develop skills, people stay longer, morale improves, and productivity increases. Organizations wind up with a classic win-win situation as business value accelerates. Finish your projects ahead of schedule: with your employees feeling supported and well equipped to handle workloads, they can also stay engaged and innovate faster with Google Cloud certifications. “Fully certified teams are 35% more likely than partially certified teams to finish projects ahead of schedule, typically reaching their targets more than two weeks early,” according to research in an IDC InfoBrief. Certify your teams: Google Cloud certification is more than a seal of approval; it can be your framework to increase staff tenure, improve productivity, satisfy your customers, and obtain other key advantages to launch your organization into the future.
    Once you get your teams certified, they’ll join a trusted network of IT professionals in the Google Cloud certified community, with access to resources and continuous learning opportunities. To discover more about the value of certification for your team, download the IDC paper today and invite your teams to join our upcoming webinar and get started on their certification journey. Related article: How to become a certified cloud professional.

  • Getting this error deploying a function, can anyone tell me what to do?
    by /u/CutEnvironmental3615 (Google Cloud Platform Certification) on May 27, 2022 at 12:06 pm

    submitted by /u/CutEnvironmental3615 [link] [comments]

  • New courses and updates from AWS Training and Certification in May 2022
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 24, 2022 at 4:25 pm

    Check out news and updates from AWS Training and Certification for cloud learners, AWS customers, and AWS Partners for May 2022. New digital courses focus on cloud essentials, networking basics, compute, container management, and audit activities. Classroom training also is available for learning about securing workloads on the AWS Cloud and building a data warehousing solution, and there are certification updates for Advanced Networking – Specialty, Solutions Architect – Professional, and SAP on AWS – Specialty . . .

  • Public preview: Azure Communication Services APIs in US Government cloud
    by Azure service updates on May 24, 2022 at 4:00 pm

    Use Azure Communication Services APIs for voice, video, and messaging in US Government cloud.

  • New Research shows Google Cloud Skill Badges build in-demand expertise
    by (Training & Certifications) on May 19, 2022 at 4:00 pm

    We live in a digital world, and the future of work is in the cloud. In fact, 61% of HR professionals believe hiring developers will be their biggest challenge in the years ahead.1 During your personal cloud journey, it’s critical to build and validate your skills in order to evolve with the rapidly changing technology and business landscape. That is why we created skill badges: micro-credentials issued by Google Cloud to demonstrate your cloud competencies and your commitment to staying on top of the latest Google Cloud solutions and products. To better understand the value of skill badges to holders’ career goals, we commissioned a third-party research firm, Gallup, to conduct a global study on the impact of Google Cloud skill badges. Skill badge earners overwhelmingly gain value from and are satisfied with Google Cloud skill badges. Skill badge holders state that they feel well equipped with the variety of skills gained through skill badge attainment, that they are more confident in their cloud skills, are excited to promote their skills to their professional network, and are able to leverage skill badges to achieve future learning goals, including a Google Cloud certification: 87% agree skill badges provided real-world, hands-on cloud experience;2 86% agree skill badges helped build their cloud competencies;2 82% agree skill badges helped showcase growing cloud skills;2 90% agree that skill badges helped them in their Google Cloud certification journey;2 and 74% plan to complete a Google Cloud certification in the next six months.2 Join thousands of other learners and take your career to the next level with Google Cloud skill badges. To learn more, download the Google Cloud Skill Badge Impact Report at no cost.
    1. McKinsey Digital, “Tech Talent Tectonics: Ten new realities for finding, keeping, and developing talent,” 2022. 2. Gallup study, sponsored by Google Cloud Learning: “Google Cloud Skill Badge Impact Report,” May 2022. Related article: How to prepare for — and ace — Google’s Associate Cloud Engineer exam.

  • Top five reasons AWS Partners should take AWS Training
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 16, 2022 at 4:27 pm

    Are you new to an Amazon Web Services (AWS) Partner business and the cloud? Not sure where to start your cloud learning journey? It may feel daunting, but AWS offers Partner-exclusive courses to make it easier to understand cloud fundamentals. In fewer than 30 minutes, you can begin boosting your confidence and credibility with both customers and your organization . . .

  • When Artificial Intelligence becomes more than a passion
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 5, 2022 at 6:01 pm

    Learn how AWS Certifications can help you validate your knowledge and enhance your credibility. Dipayan Das updated his artificial intelligence (AI) skills with AWS Training and Certification. He shares the resources he used and the impact of his training, including his ability to add value to his organization and clients. . .

  • If you are looking for a Job relating to azure try r/AzureJobs
    by /u/whooyeah (Microsoft Azure Certifications) on May 5, 2022 at 10:41 am

    submitted by /u/whooyeah [link] [comments]

  • GCP Certification missing certificates
    by /u/ProtossforAiur (Google Cloud Platform Certification) on May 2, 2022 at 8:31 am

    These certifications are a scam. They provide you with a link to the certificate, and they can remove that link whenever they want. If you get certified, make sure you download the PDF. Google doesn't keep a backup of certificates. Yes, you heard that right. We asked for a copy of a certification because the link was not working, and they replied that they couldn't provide one. submitted by /u/ProtossforAiur [link] [comments]

  • How we’re keeping up with the increasing demand for the Google Workspace Administrator role
    by (Training & Certifications) on April 29, 2022 at 4:00 pm

    We’ve rebranded the Professional Collaboration Engineer certification to the Professional Google Workspace Administrator certification and updated the learning path. To mark the moment, we sat down with Erik Geerdink from SADA to talk about how the Google Workspace Administrator role and demand for this skill set have changed over the years. Erik is a Deployment Engineer and Pod Lead. He holds a Professional Google Workspace Administrator certification and has worked with Google Workspace for more than six years. What was it like starting out as a Google Workspace Administrator? When I first started, I was doing Google Workspace Support as a Level 2 Administrator. At that time, there were fewer admin controls for Google Workspace. There were calendar issues, some mail routing issues, maybe a little bit of data loss prevention (DLP), but that was about it. About 5 years ago, I transferred into Google Deployment and really got to see all that went on with deploying Google Workspace and troubleshooting advanced issues. Since then, what you can accomplish in the admin console has really taken off. There’s still Gmail and Calendar configurations, but the security posture that Google offers now—they’ve really upped their game. The extent of DLP isn’t just Gmail and Drive anymore; it extends into Chat. And we’re doing a lot of Context-Aware Access to make sure users only have as much access as IT compliance allows in our deployments. Calendar interop, which allows users in different systems to see availability, has been a big area of focus as well. How has the Google Workspace Administrator role changed over the last few years? It used to be that you were a systems admin who also took care of the Google portion as well. But with Google Workspace often being the entry point to Google Cloud, we’ve had to become more knowledgeable about the platform as a whole.
    Now, we not only do training with Google Workspace admins for our projects, we also talk to their Google Cloud counterparts as well. Google Workspace is changing all the time, and the weekly updates that Google sends out are great. As an engineering team, every Wednesday we review each Google Workspace update that’s come out to understand how it affects us, our clients, and our upcoming projects. There’s a lot to it. It’s not just a little admin role anymore. It’s a strategic technology role. What motivated you to get Google Cloud Certified? I spent the first 15 years of my career in cold server room roles, and I knew I had to get cloudy. I wanted to work with Google, and it was a no-brainer given the organization’s reputation for innovation. I knew this certification exam was the one to get me in the door. The Professional Google Workspace Administrator certification was required to level up as an administrator and to make sure our business kept getting the most out of Google Workspace. How has the demand for certified Google Workspace admins changed recently? Demand has absolutely gone up. We are growing so much, and we need more professionals with this certification. It’s required for all of our new hires. When I see a candidate who already has the certification, they go to the top of the list. I’ll skip all the other resumes to find someone who has this experience. We’re searching globally, not just in North America, to find the right people to fill this strategic role. In order to keep up with the changing demands of this role, we’ve rebranded the Professional Collaboration Engineer certification to the Professional Google Workspace Administrator certification and updated the learning path. The learning path now aligns with the improved admin console. We’ve replaced the readings with videos for a better learning experience: in total, we added 17 new videos across 5 courses to match new features and functionality.
    Earn the Professional Google Workspace Administrator certification to distinguish yourself among your peers and showcase your skills. Related article: Unlock collaboration with Google Workspace Essentials.

  • How one learner earned four AWS Certifications in four months
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 28, 2022 at 4:16 pm

    Ever wonder what it takes to earn an AWS Certification? Imagine earning four in four months. Rola Dali, a senior software developer at Local Logic, shares her experience and insights about challenging herself to do just that. She breaks down the resources she found most helpful and her overall motivation to invest in her cloud learning journey . . .

  • Build your cloud skills with no-cost access to Google Cloud training on Coursera
    by (Training & Certifications) on April 28, 2022 at 4:00 pm

    Attracting talented individuals with cloud skills is critical to success as organizations continue to adopt and optimize cloud technology. The lack of cloud expertise and experience is a top and growing challenge for businesses as they expand their cloud footprint and search for skilled talent. To help meet this need, we are now offering access to over 500 Google Cloud self-paced labs on Coursera. A selected collection of the most popular self-paced labs, known as projects, is available at no cost for one month, from April 28 to May 29, 2022. Learners can choose their preferred format and claim one month of free access to a top Google Cloud project, course, Specialization, or Professional Certificate. What is a lab? A lab is a learning experience in which you complete a scenario-based use case by following a set of instructions, in a specified amount of time, in an interactive hands-on environment. Labs are completed in the real Google Cloud Console and other Google Cloud products using temporary credentials, as opposed to a simulation or demo environment, and take 30 to 90 minutes to complete, depending on difficulty level. Our goal is to enable you to apply your new skills and be effective immediately in real-world cloud technology settings. Many of these labs, known in Coursera as projects, include a variety of tasks and activities for you to choose from to best fit your needs. Combine bite-size individual labs to create a personalized set of learning and upskilling with clear application in a sandbox environment.
    Labs are available for all skill levels and cover a wide range of topics: cloud essentials; cloud engineering and architecture; machine learning; data analytics and engineering; and DevOps. Here is a roundup of some popular and trending labs right now: Getting Started with Cloud Shell and gcloud; Kubernetes Engine: Qwik Start; Introduction to SQL for BigQuery and Cloud SQL; and Migrating a Monolithic Website to Microservices on Google Kubernetes Engine. Get a feel for the lab experience: Creating a Virtual Machine is one of our most popular labs, taking place directly in the Google Cloud Console. In this beginner-level project, you will learn how to create a Google Compute Engine virtual machine and understand zones, regions, and machine types. It takes 40 minutes to complete, and you’ll earn a shareable certificate. As an example of more advanced content, Predict Baby Weight with TensorFlow on AI Platform requires experience to train, evaluate, and deploy a machine learning model to predict a baby’s weight. The lab activities are completed in a real cloud environment, not in a simulation or demo environment. It takes 90 minutes to complete, and you will earn a shareable certificate. Kick off your no-cost learning journey today: for direct access to self-paced labs, we recommend starting with Coursera’s Collection Page, where you can browse labs/projects by our most popular topics, or explore the full catalog to find the cloud projects that are right for your career goals by browsing Google Cloud ‘projects’ on Coursera. The month of free Google Cloud learning on Coursera is available from April 28 to May 29, 2022, so join us to evolve your skill set and cloud knowledge. Ready to start learning Google Cloud at no cost for 30 days? Sign up here. Related article: Training more than 40 million new people on Google Cloud skills.
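    The Creating a Virtual Machine lab described above boils down to a handful of commands. A minimal sketch, assuming an authenticated gcloud CLI; the project ID, VM name, and zone below are hypothetical placeholders:

```shell
# Create a small Debian VM (all identifier values here are placeholders).
gcloud compute instances create my-vm \
    --project=my-project-id \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-11 \
    --image-project=debian-cloud

# Check the instance status; a healthy instance should report RUNNING.
gcloud compute instances describe my-vm \
    --zone=us-central1-a --format="value(status)"

# Delete the instance afterwards to avoid ongoing charges.
gcloud compute instances delete my-vm --zone=us-central1-a --quiet
```

    The zone choice fixes the region and determines which machine types are available, which is exactly the zones/regions/machine-types material the lab covers.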

  • 3 tier application gcp terraform code
    by /u/savetheQ (Google Cloud Platform Certification) on April 25, 2022 at 7:48 pm

    Hi folks, does anyone have a sample Git repo with GCP Terraform code for a 3-tier application? submitted by /u/savetheQ [link] [comments]
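    For anyone sketching such a repo, the shape of a single tier in Terraform's Google provider looks roughly like this. A hedged sketch only: the project ID, names, and zone are hypothetical, and a real 3-tier layout would add a load balancer and a Cloud SQL database alongside this web tier, typically split into modules:

```shell
# Write a minimal web-tier definition; a full 3-tier repo would split tiers
# into modules (e.g. modules/web, modules/app, modules/db).
cat > main.tf <<'EOF'
provider "google" {
  project = "my-project-id"   # hypothetical project ID
  region  = "us-central1"
}

resource "google_compute_instance" "web" {
  name         = "web-1"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
EOF

# With credentials configured you would then run:
#   terraform init && terraform plan
```

    Keeping each tier in its own module makes it easy to vary instance counts and machine types per tier without duplicating configuration.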

  • Professional Cloud Architect - materials recommendations needed.
    by /u/theGrEaTmPm (Google Cloud Platform Certification) on April 24, 2022 at 10:56 am

    Hi, what materials did you use when preparing for the Professional Cloud Architect exam? Do you have any proven materials? How much time did you spend getting ready for the exam? Thanks in advance for your help. submitted by /u/theGrEaTmPm [link] [comments]

  • How to prepare for — and ace — Google’s Associate Cloud Engineer exam
    by (Training & Certifications) on April 22, 2022 at 4:00 pm

    Do you want to get out of the server room and into the cloud? Now’s the time to sign up for our Cloud Engineer Learning Path, now with the newly refreshed Preparing for the Associate Cloud Engineer certification course, and start working toward your Associate Cloud Engineer certification. Earning your Associate Cloud Engineer certification sends a strong signal to potential employers about what you can accomplish in Google Cloud. Associate Cloud Engineers can deploy and secure applications and infrastructure, maintain enterprise solutions to ensure they meet performance metrics, and monitor the operations of multiple projects in the cloud. Associate Cloud Engineers have also demonstrated that they can use the Google Cloud Console and the command-line interface to maintain and scale deployed cloud solutions that leverage Google-managed or self-managed services on Google Cloud. Many Associate Cloud Engineers come from the on-premises world of racking and stacking servers and are ready to upgrade their skills to the cloud era. Achieving an Associate Cloud Engineer certification is a great step toward growing a career in IT, opening you up to roles such as cloud developer or architect, cloud security engineer, cloud systems engineer, or network engineer. Before attempting the Associate Cloud Engineer exam, we recommend that you have 6+ months of hands-on experience with Google Cloud products and solutions. While you’re gaining that experience, a good way to enhance your preparation is to follow the Cloud Engineer Learning Path, which consists of on-demand courses, hands-on labs, and the opportunity to earn skill badges. Here are our recommended steps: 1. Understand what’s on the exam: review the exam guide to determine if your skills align with the topics on the exam.
    2. Create your study plan with the Preparing for Your Associate Cloud Engineer Journey course: this course helps you structure your preparation for the exam; you will learn about the Google Cloud domains covered by the exam and how to create a study plan to improve your domain knowledge. 3. Start preparing: follow the Cloud Engineer learning path, where you’ll dive into Google Cloud services such as Compute Engine, Google Kubernetes Engine, App Engine, Cloud Storage, Cloud SQL, and BigQuery. 4. Earn skill badges: demonstrate your growing Google Cloud skills by sharing your earned skill badges along the way; badges that will help you prepare for the Associate Cloud Engineer certification include Perform Foundational Infrastructure Tasks in Google Cloud, Automating Infrastructure on Google Cloud with Terraform, Create and Manage Cloud Resources, and Set Up and Configure a Cloud Environment in Google Cloud. 5. Review additional resources: test your knowledge with some sample exam questions. 6. Certify: finally, register for the exam and select whether to take it remotely or at a nearby testing center. Take the next step toward becoming a cloud engineer and develop the recommended hands-on experience by earning the recommended skill badges. Register here and get 30 days of free access to the Cloud Engineer learning path on Google Cloud Skills Boost! Related article: This year, resolve to become a certified Professional Cloud Developer – here’s how.

  • New to GCP and looking for a study group!
    by /u/sulliv16 (Google Cloud Platform Certification) on April 19, 2022 at 4:15 pm

    As the title states, I am starting my venture into GCP and would love to get connected with a few people to help with accountability and share insight as we learn! I have around 3 years working with AWS and hold the Solutions Architect Professional and Security Specialty certifications there. I know next to nothing about GCP, but I am very familiar with cloud concepts, and cloud has been my work focus for the past 2 years. Let me know if you would be interested in linking up and starting to learn together! Thanks all submitted by /u/sulliv16 [link] [comments]

  • GCP Professional Cloud Architect Certification Blog.
    by /u/HamanSharma (Google Cloud Platform Certification) on April 17, 2022 at 12:24 am

    Check out the preparation guide for GCP Cloud Architect Certification with tips and resources - https://blog.reviewnprep.com/gcp-cloud-architect. Hope this helps everyone preparing for this certification. submitted by /u/HamanSharma [link] [comments]

  • Introducing the Professional Cloud Database Engineer certification
    by (Training & Certifications) on April 12, 2022 at 3:00 pm

    Today, we’re pleased to announce the new Professional Cloud Database Engineer certification, in beta, to help database engineers translate business and technical requirements into scalable and cost-effective database solutions. By participating in the beta, you will directly influence and enhance the learning and career path for other cloud database engineers. And upon passing the exam, you will become one of the first Google Cloud Certified Cloud Database Engineers in the industry. The cloud database space is evolving rapidly, with the worldwide cloud database market projected to reach $68.5 billion by 2026. As more databases move to fully managed cloud database services, the traditional database engineer is now being tasked to handle more nuanced and advanced functions. In fact, there is a massive need for database engineers to lead strategic decision-making and distinguish themselves with a more developed and advanced skill set than what the industry previously called for. Why the certification is important: cloud database engineers are critical to the success of your organization, and that’s why this new certification from Google Cloud is so important. These engineers are uniquely skilled at designing, planning, testing, implementing, and monitoring databases, including migration processes. Additionally, they provide the right guidance about which databases are best for a company’s specific use cases, and they’re able to guide developers when making decisions about which databases to use when building applications. These engineers lead migration efforts while ensuring customers are getting the most out of their database investment.
    This new certification will validate a developer’s ability to: design scalable cloud database solutions; manage a solution that can span multiple databases; plan and execute database migrations; and deploy highly scalable databases in Google Cloud. Before your exam, be sure to check out the exam guide to familiarize yourself with the topics covered, and round out your skills by following the Database Engineer Learning Path, which includes online training, in-person classes, hands-on labs, and additional resources to help you prepare. I am excited to welcome you to the program. Sign up now and save 40% on the cost of the certification. Related article: Google Cloud’s key investment areas to accelerate your database transformation.

  • Train your organization on Google Cloud Skills Boost
    by (Training & Certifications) on April 7, 2022 at 1:00 pm

    Enterprises are moving to cloud computing at an accelerated pace; Gartner estimates that 85% of enterprises will adopt a cloud-first principle by 2025 (Gartner®, Gartner Says Cloud Will Be the Centerpiece of the New Digital Experience, Laurence Goasduff, November 10, 2021). There are countless reasons why enterprises are moving to the cloud, from reduced IT costs and increased scalability to improved security and efficiency. However, this rapid change has presented a challenge: how will organizations build the skills they need to accelerate cloud adoption? The answer is comprehensive training. In March 2022 we commissioned IDC, an independent market intelligence firm, to write a white paper that studied the impact of comprehensive training and certification on cloud adoption. When organizations are trained, they see: significantly greater improvement in top business priorities, including 133% greater improvement in employee retention and 56% greater improvement in customer experience scores; accelerated cloud adoption, reduced time to value, and greater ROI, with trained organizations 10x more likely to implement cloud in 2 years; and greater performance improvements in areas like leveraging data analytics, protecting data, and jumpstarting innovation. (IDC White Paper, sponsored by Google Cloud Learning: “To Maximize Your Cloud Benefits, Maximize Training,” Doc #US48867222, March 2022.) To learn more, download the white paper. Build team skills in Google Cloud Skills Boost: coupling the research above with our commitment to equip more than 40 million people with cloud skills, we are excited to provide business organizations with a comprehensive platform to help address their teams’ cloud skilling needs. Google Cloud Skills Boost combines award-winning learning experiences with the ability to earn credentials to validate learning, managed and delivered directly by Google Cloud with enterprise-level features.
    These features allow organization leaders to manage access and user permissions for their team and drive effective business outcomes using learning analytics. In addition, administrators will be able to grant individuals on their team access to the Google Cloud content catalog, which includes hundreds of courses, labs, and credentials authored by Google Cloud experts to help teams learn and validate their cloud skills. Organizations can trial these features today through an exclusive no-cost trial (based on eligibility). Contact your account team to learn more about your eligibility for the trial and how to set up your organization on Google Cloud Skills Boost. New to Google Cloud? Visit our team training page and complete the learning assessment to understand your team’s training needs and get connected with an account team. Click here to learn more about how comprehensive training impacts cloud adoption. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Related article: Women Techmakers journey to Google Cloud certification.

  • Looking for Good Practice Exams
    by /u/zeeplereddit (Google Cloud Platform Certification) on April 3, 2022 at 10:15 pm

    I have done some googling on practice exams for the Google Cloud Digital Leader exam, and I have only come across the Udemy offering. I have done Udemy courses before, but I have no idea what their practice exams are like. Is there anyone here with any advice or suggestions in this regard?

  • General availability: Azure Database for PostgreSQL - Hyperscale (Citus) now FedRAMP High compliant
    by Azure service updates on March 30, 2022 at 4:01 pm

    Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure, is now compliant with FedRAMP High.

  • Best Podcasts for Cert Seekers?
    by /u/zeeplereddit (Google Cloud Platform Certification) on March 24, 2022 at 10:07 pm

    Hi folks, I am greatly looking forward to embarking on my new adventure of getting several Google certs. To that end, I am wondering: what are the best podcasts to listen to during my commute to and from work? The types of podcasts I am hoping for include those that discuss the exams, go over sample questions in detail, interview people who have taken the test, and discuss the concepts I will be wrapping my head around while I go after the certs. Thanks in advance!

  • Accelerating Government Compliance with Google Cloud’s Professional Service Organization
    by (Training & Certifications) on March 21, 2022 at 5:00 pm

    Did you know that by 2025, enterprise IT spending on public cloud computing will overtake traditional IT spending? In fact, 51% of IT spend in application software, infrastructure software, business process services, and system infrastructure will transition to the public cloud, compared to 41% in 2022.1 As enterprises continue to rapidly shift to the cloud, government agencies must prioritize and accelerate security and compliance implementation. In May 2021, the White House issued an Executive Order requiring US Federal agencies to accelerate cloud adoption, embrace security best practices, develop plans to implement Zero Trust architectures, and map implementation frameworks to FedRAMP. The Administration's focus on secure cloud adoption marks a critical shift to prioritizing cybersecurity at scale. Google Cloud's Public Sector Professional Services Organization (PSO) has committed to helping customers meet security and compliance requirements in the cloud through specialized consulting engagements.

    Accelerating Authority to Operate (ATO)

    The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011 as a government-wide program that promotes the adoption of secure cloud services across the federal government. FedRAMP provides a standardized approach to security and risk assessment for cloud technologies and federal agencies. US Federal agencies are required to utilize and implement FedRAMP cloud service offerings as part of the "Cloud First" federal cloud computing strategy. While Google Cloud provides a FedRAMP-authorized cloud services platform and a robust catalog of FedRAMP-approved products and services (92 services and counting), customers are still tasked with achieving Agency ATO for the products and services they use, and Google Cloud provides many resources to assist customers with this journey.
    Google Cloud's FedRAMP package can be accessed by completing the FedRAMP Package Access Request Form and submitting it to info@fedramp.gov. Additionally, customers can use Google's NIST 800-53 ATO Accelerator as a starting point for documenting control implementation. Finally, Google Cloud's Public Sector PSO offers the following strategic consulting engagements to help customers streamline the Agency ATO process.

    Cloud Discover: FedRAMP is a six-week interactive workshop to support customers that are just getting started with the ATO process on Google Cloud. Customers are educated on FedRAMP fundamentals, Google's security and compliance posture, and how to approach ATO on Google Cloud. Through deep-dive interviews and design sessions, PSO helps customers craft an actionable ATO plan, assess FedRAMP readiness, and develop a conceptual ATO boundary. This engagement helps organizations establish a clear understanding and roadmap for FedRAMP ATO on Google Cloud.

    FedRAMP Security Review is a ten- to twelve-week engagement that aids customers in FedRAMP operational readiness. PSO consultants perform detailed FedRAMP architecture reviews to identify potential gaps in NIST 800-53 security control implementation and Google Cloud secure architecture best practices. Findings from the security reviews are shared with the customer along with configuration guidance and recommendations. This engagement helps organizations prepare for the third-party or independent security assessment that is required for FedRAMP ATO.

    Cloud Deploy: FedRAMP is a multi-month engagement designed to help customers document the details of their FedRAMP System Security Plan (SSP) and corresponding NIST 800-53 security controls, in preparation for Agency ATO on Google Cloud at FedRAMP Low, Moderate, or High.
    PSO collaborates with customers to develop a detailed technical infrastructure design document and security control matrix capturing evidence of the FedRAMP system architecture, security control implementation, data flows, and system components. PSO can also partner with a third-party assessment organization (3PAO) or an independent assessor (IA) to support customer efforts for FedRAMP security assessment. This engagement helps customer system owners prepare for Agency ATO assessment and package submission.

    Developing a Zero Trust Strategy

    In addition to providing FedRAMP enablement, Public Sector PSO has partnered with the Google Cloud Chief Information Security Officer (CISO) team to assist organizations with developing a zero trust architecture and strategy. Zero Trust Foundations is a seven-week engagement co-delivered by Google Cloud's CISO and PSO teams. CISO and PSO educate customers on zero trust fundamentals, Google's journey to zero trust through BeyondCorp, and defense-in-depth best practices. The CISO team walks customers through a Zero Trust Assessment (ZTA) to understand the organization's current security posture and maturity. Insights from the ZTA enable the CISO team to work with the customer to identify an ideal first-mover workload for zero trust adoption. Following the CISO ZTA, PSO facilitates a deep-dive Zero Trust Workshop (ZTW), collaborating with key customer stakeholders to develop a NIST 800-207-aligned, cloud-agnostic zero trust architecture for the identified first-mover workload. The zero trust architecture is part of a comprehensive zero trust strategy deliverable based on focus areas called out in the Office of Management and Budget (OMB) Federal Zero Trust Strategy released in January 2022.

    Scaling Secure Cloud Adoption with PSO

    Public Sector PSO enables customer success by sharing our technical expertise and providing cloud strategy, implementation guidance, training, and enablement using our proven methodology.
    As enterprise IT, operations, and organizational models continue to evolve, our goal is to help government agencies accelerate their security and compliance journeys in the cloud. To learn more about the work we are doing with the federal government, visit cloud.google.com/solutions/federal-government.

    1 Gartner Says More Than Half of Enterprise IT Spending in Key Market Segments Will Shift to the Cloud by 2025

  • GCP - PCNE (Thoughts on ACG/A cloud guru) training material
    by /u/friday963 (Google Cloud Platform Certification) on March 20, 2022 at 1:21 am

    Has anyone here done the PCNE exam and used A Cloud Guru as their primary study resource? If so, what are your thoughts on the quality of the study material? Is it enough to pass the cert, or were many more external resources needed? So far I've done Qwiklabs and ACG for the PCNE exam; I think Qwiklabs has a better lab environment, but ACG has a better video series. Either way, I've not taken the exam yet but have scheduled it for later this month and am trying to gauge the level of difficulty.

  • exam of GCP Professional Cloud Architect
    by /u/meokey (Google Cloud Platform Certification) on March 11, 2022 at 9:43 pm

    I'm working on the PCA courses and wondering what the exam will be like. Is there a hands-on lab test in the exam? Do I have to remember all the command-line tools and their arguments to pass the exam? Thanks.

  • Which video course?
    by /u/Bollox427 (Google Cloud Platform Certification) on March 8, 2022 at 8:40 pm

    I would like to learn the fundamentals of GCP and then move on to Security and ML. I know Coursera does courses, but is there anyone else of note? How do other course suppliers compare to Coursera? Is Coursera seen as an official education partner for Google Cloud?

  • Women Techmakers journey to Google Cloud certification
    by (Training & Certifications) on March 8, 2022 at 5:00 pm

    In many places across the globe, March is celebrated as Women's History Month, and March 8th, specifically, marks the day known around the world as International Women's Day. Here at Google, we're excited to celebrate women from all backgrounds and are committed to increasing the number of women in the technology industry. Google's Women Techmakers community provides visibility, community, and resources for women in technology to drive participation and innovation in the field. This is achieved by hosting events, launching resources, and piloting new initiatives with communities and partners globally. By joining Women Techmakers, you'll receive regular emails with access to resources, tools, and opportunities from Google and Women Techmakers partnerships to support you in your career.

    Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Participants will have the opportunity to take part in a free-of-charge, 6-week cohort learning journey, including: weekly 90-minute exam guide review sessions led by a technical mentor, peer-to-peer support in the form of an online community, and 12 months' access to Google Cloud's on-demand learning platform, Google Cloud Skills Boost. Upon completion of the coursework required in the learning journey, participants will receive a voucher for the Associate Cloud Engineer certification exam. This program, and other similar offerings such as Cloud Career Jumpstart and the learning journey for members transitioning out of the military, are just a few examples of the investment Google Cloud is making in the future of the technology workforce. Are you interested in staying in the loop with future opportunities with Google Cloud? Join our community here.

    Related Article: Cloud Career Jump Start: our virtual certification readiness program. Cloud Career Jump Start is Google Cloud's first virtual Certification Journey Learning program for underrepresented communities. Read Article

  • Study path for GCP Professional Cloud Architect
    by /u/Prime367 (Google Cloud Platform Certification) on March 7, 2022 at 4:50 pm

    Hi folks, thanks for your time. I have been working as an AWS architect for 4-5 years and have several AWS certifications, including the Solutions Architect Professional. I have been supporting a GCP implementation for the past year or so, and now want to go for the GCP Cloud Architect certification. I need some help with the following: Which courses are best for the GCP Cloud Architect exam? Which practice tests should I do? I know it's difficult to clear certifications without doing any practice tests. Thanks in advance.

  • Which certification should I do?
    by /u/ParticularFactor353 (Google Cloud Platform Certification) on March 7, 2022 at 4:34 pm

    Background: I am a fresher who just joined a company and got the ETL domain. I have been working on BigQuery scripts, Composer, and Dataflow for the past 6 months. Now I want to do a GCP certification, so where should I begin?

  • AWS & Azure Certified, how to start on GCP ACE? (Advice requested)
    by /u/skelldog (Google Cloud Platform Certification) on March 6, 2022 at 5:34 am

    Sorry, I know some of this has been discussed, but as things change regularly, I would appreciate any suggestions people are willing to share. I currently hold the three Associate certs from AWS and the Azure Administrator Associate. I have been in IT for longer than I care to admit. I was thinking of bypassing Cloud Digital Leader and going directly to ACE. Between work and other options, I have access to most of the popular training programs (ITPro, A Cloud Guru, Lynda, Qwiklabs, Whizlabs, Udemy). I see the most recommendations for the Udemy course by Dan Sullivan; is this my best choice? My time is always limited, and I would like to pick the course that gives the most bang for the buck (or time in this case). I already purchased the Tutorials Dojo practice tests last time they had a sale (Jon Bonso does some great work!). I would appreciate any other suggestions anyone is willing to offer. Thanks for reading this!

  • Digital Cloud Leader exam vouchers
    by /u/pillairohit (Google Cloud Platform Certification) on March 3, 2022 at 5:39 pm

    Hi all. Does GCP have online webinars/trainings that give attendees exam vouchers, similar to the Microsoft Azure online webinars for AZ-900? I'm asking for the Cloud Digital Leader certification exam. Thank you for your help and time.

  • General availability: Asset certification in Azure Purview data catalog
    by Azure service updates on February 28, 2022 at 5:00 pm

    Data stewards can now certify assets that meet their organization's quality standards in the Azure Purview data catalog.

  • GCP Associate Cloud Engineer Study Guide
    by /u/ravikirans (Google Cloud Platform Certification) on February 21, 2022 at 12:08 pm

    https://ravikirans.com/gcp-associate-cloud-engineer-exam-study-guide/ To view all the other GCP study guides, check here: https://ravikirans.com/category/gcp/

  • Sentinel Installation
    by /u/ribcap (Google Cloud Platform Certification) on February 20, 2022 at 7:30 pm

    Hey everyone! I'm in the process of scheduling an exam and have created my biometric profile, but I can't seem to install Sentinel. Anyone else have this issue? I've tried Chrome, Firefox, and even Safari. I click on the install link and literally nothing happens: nothing is downloaded or anything. Any ideas? Edit: I have not actually scheduled the exam, just trying to get everything else in place first. Should I schedule the exam prior to installing Sentinel? Rib

  • Gcp exam fee reimbursement
    by /u/Aamirmir111 (Google Cloud Platform Certification) on February 17, 2022 at 2:15 pm

    If one clears a GCP certification exam, is there any policy for fee reimbursement?

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 16, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Google Cloud Fundamentals Full Course For Beginners Only 2022 | GCP Certified
    by /u/ClayDesk (Google Cloud Platform Certification) on February 14, 2022 at 12:30 pm


  • Google Cloud Platform Service Comparison
    by /u/lervz_ (Google Cloud Platform Certification) on February 12, 2022 at 3:35 pm

    To anyone who has an AWS/Azure background and is new to Google Cloud Platform, you will find this service comparison made by Google very helpful: AWS, Azure, GCP Service Comparison. And for those who are preparing for the Google Associate Cloud Engineer certification exam, check these resources from Tutorials Dojo: Google Certified Associate Cloud Engineer Practice Exams, Google Certified Associate Cloud Engineer Study Guide, and Google Cloud Platform Cheat Sheets.

  • Unified data and ML: 5 ways to use BigQuery and Vertex AI together
    by (Training & Certifications) on February 9, 2022 at 4:00 pm

    Are you storing your data in BigQuery and interested in using that data to train and deploy models? Or maybe you're already building ML workflows in Vertex AI, but looking to do more complex analysis of your model's predictions? In this post, we'll show you five integrations between Vertex AI and BigQuery, so you can store and ingest your data; build, train, and deploy your ML models; and manage models at scale with built-in MLOps, all within one platform. Let's get started!

    April 2022 update: You can now register and manage BigQuery ML models with Vertex AI Model Registry, a central repository to manage and govern the lifecycle of your ML models. This enables you to easily deploy your BigQuery ML models to Vertex AI for real-time predictions. Learn more in this video about "ML Ops in BigQuery using Vertex AI."

    Import BigQuery data into Vertex AI

    If you're using Google Cloud, chances are you have some data stored in BigQuery. When you're ready to use this data to train a machine learning model, you can upload your BigQuery data directly into Vertex AI with a few steps in the console. You can also do this with the Vertex AI SDK:

    ```python
    from google.cloud import aiplatform

    dataset = aiplatform.TabularDataset.create(
        display_name="my-tabular-dataset",
        bq_source="bq://project.dataset.table_name",
    )
    ```

    Notice that you didn't need to export your BigQuery data and re-import it into Vertex AI. Thanks to this integration, you can seamlessly connect your BigQuery data to Vertex AI without moving your data from the cloud.

    Access BigQuery public datasets

    This dataset integration between Vertex AI and BigQuery means that in addition to connecting your company's own BigQuery datasets to Vertex AI, you can also utilize the 200+ publicly available datasets in BigQuery to train your own ML models.
    BigQuery's public datasets cover a range of topics, including geographic, census, weather, sports, programming, healthcare, news, and more. You can use this data on its own to experiment with training models in Vertex AI, or to augment your existing data. For example, maybe you're building a demand forecasting model and find that weather impacts demand for your product; you can join BigQuery's public weather dataset with your organization's sales data to train your forecasting model in Vertex AI. Below, you'll see an example of importing the public weather data from last year to train a weather forecasting model.

    Accessing BigQuery data from Vertex AI Workbench notebooks

    Data scientists often work in a notebook environment to do exploratory data analysis, create visualizations, and perform feature engineering. Within a managed Workbench notebook instance in Vertex AI, you can directly access your BigQuery data with a SQL query, or download it as a Pandas DataFrame for analysis in Python. Below, you'll see how you can run a SQL query on a public London bikeshare dataset, then download the results of that query as a Pandas DataFrame to use in your notebook.

    Analyze test prediction data in BigQuery

    That covers how to use BigQuery data for training models in Vertex AI. Next, we'll look at integrations between Vertex AI and BigQuery for exporting model predictions. When you train a model in Vertex AI using AutoML, Vertex AI will split your data into training, test, and validation sets, and evaluate how your model performs on the test data. You also have the option to export your model's test predictions to BigQuery so you can analyze them in more detail. Then, when training completes, you can examine your test data and run queries on test predictions.
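As a minimal sketch of that analysis step: the exported test predictions live in an ordinary BigQuery table, so you can query them with SQL and pull the results into a Pandas DataFrame, just as you would from a Workbench notebook. The table name (`my_project.my_dataset.test_predictions`) and column names (`label`, `predicted_label`) below are hypothetical placeholders, not names Vertex AI guarantees:

```python
# Hedged sketch: query AutoML test predictions exported to BigQuery.
# Table and column names are placeholders for your own export table.

def error_analysis_sql(table: str, label_col: str = "label",
                       pred_col: str = "predicted_label") -> str:
    """Build a query that surfaces the most common misclassifications."""
    return (
        f"SELECT {label_col}, {pred_col}, COUNT(*) AS n "
        f"FROM `{table}` "
        f"WHERE {label_col} != {pred_col} "
        f"GROUP BY {label_col}, {pred_col} "
        f"ORDER BY n DESC LIMIT 20"
    )

def query_to_dataframe(sql: str):
    """Run the query and download the results as a Pandas DataFrame.

    Requires the google-cloud-bigquery package and GCP credentials,
    so the client call is kept out of the module's top level.
    """
    from google.cloud import bigquery
    client = bigquery.Client()
    return client.query(sql).to_dataframe()

sql = error_analysis_sql("my_project.my_dataset.test_predictions")
# df = query_to_dataframe(sql)  # uncomment inside a Workbench notebook
```

The same `client.query(...).to_dataframe()` pattern applies to the London bikeshare example mentioned above; only the SQL changes.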
    This can help determine areas where your model didn't perform as well, so you can take steps to improve your data next time you train your model.

    Export Vertex AI batch prediction results

    When you have a trained model that you're ready to use in production, there are a few options for getting predictions on that model with Vertex AI:

    - Deploy your model to an endpoint for online prediction
    - Export your model assets for on-device prediction
    - Run a batch prediction job on your model

    For cases in which you have a large number of examples you'd like to send to your model for prediction, and in which latency is less of a concern, batch prediction is a great choice. When creating a batch prediction in Vertex AI, you can specify a BigQuery table as the source and destination for your prediction job: this means you'll have one BigQuery table with the input data you want to get predictions on, and Vertex AI will write the results of your predictions to a separate BigQuery table.

    With these integrations, you can access BigQuery data, and build and train models. From there, Vertex AI helps you:

    - Take these models into production
    - Automate the repeatability of your model with managed pipelines
    - Manage your model's performance and reliability over time
    - Track lineage and artifacts of your models for easy-to-manage governance
    - Apply explainability to evaluate feature attributions

    What's Next?

    Ready to start using your BigQuery data for model training and prediction in Vertex AI? Check out these resources:

    - Codelab: Training an AutoML model in Vertex AI
    - Codelab: Intro to Vertex AI Workbench
    - Documentation: Vertex AI batch predictions
    - Video Series: AI Simplified: Vertex AI
    - GitHub: Example Notebooks
    - Training: Vertex AI: Qwik Start

    Are there other BigQuery and Vertex AI integrations you'd like to see? Let Sara know on Twitter at @SRobTweets.

    Related Article: What is Vertex AI? Developer advocates share more. Developer Advocates Priyanka Vergadia and Sara Robinson explain how Vertex AI supports your entire ML workflow, from data management all t... Read Article
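The BigQuery-in, BigQuery-out batch prediction flow described above can be sketched with the Vertex AI SDK. This is a sketch under assumptions: it uses the `google-cloud-aiplatform` package's `Model.batch_predict` with its `bigquery_source` and `bigquery_destination_prefix` parameters, and every project, dataset, and model name below is a made-up placeholder:

```python
# Hedged sketch: BigQuery-in / BigQuery-out batch prediction with the
# Vertex AI SDK. All project, dataset, and model names are placeholders.

def bq_uri(project: str, dataset: str, table: str = "") -> str:
    """Format a `bq://` URI the way Vertex AI batch prediction expects."""
    suffix = f".{table}" if table else ""
    return f"bq://{project}.{dataset}{suffix}"

def run_batch_prediction(model_resource_name: str, source: str, dest: str):
    """Launch the job; needs google-cloud-aiplatform and GCP credentials,
    so nothing here runs at import time."""
    from google.cloud import aiplatform
    aiplatform.init(project="my-project", location="us-central1")
    model = aiplatform.Model(model_resource_name)
    job = model.batch_predict(
        job_display_name="demand-forecast-batch",
        bigquery_source=source,              # table holding the input rows
        bigquery_destination_prefix=dest,    # predictions are written here
    )
    job.wait()  # block until the job finishes
    return job

source = bq_uri("my-project", "sales", "inference_input")
dest = bq_uri("my-project", "sales_predictions")
# run_batch_prediction("projects/my-project/locations/us-central1/models/123",
#                      source, dest)  # uncomment with real resources
```

When the job completes, the predictions land in a new table under the destination dataset, ready to be queried with the same SQL-to-DataFrame pattern shown earlier in the post.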

  • Course, videos, or link for getting the GCP Cloud Engineer Associate
    by /u/ahelord (Google Cloud Platform Certification) on February 5, 2022 at 3:26 am

    Hi, I would like to ask which is the best course, videos, or page to learn GCP and pass the Associate certification.

  • Access role-based Google Cloud training free of charge
    by (Training & Certifications) on February 3, 2022 at 5:00 pm

    Google Cloud is now offering 30 days no-cost access to Google Cloud Skills Boost, the definitive destination for skills development, to complete role-based training. Choose from the following eight learning paths, which include interactive labs and opportunities to earn skill badges to demonstrate your cloud knowledge: Getting Started with Google Cloud, Cloud Architect, Cloud Engineer, Data Analyst, Data Engineer, DevOps Engineer, Machine Learning Engineer, and Cloud Developer. Read below to find out more about each learning path.

    Getting Started with Google Cloud: In this path, you'll learn about Google Cloud fundamentals such as core infrastructure, big data, and machine learning (ML). You'll also find out how to write gcloud commands, use Cloud Shell, deploy virtual machines, and run containerized applications on Google Kubernetes Engine (GKE).

    Cloud Architect: If you're looking to learn how to design, develop, and manage cloud solutions, this is the path for you. You'll learn how to perform infrastructure tasks like using Cloud Monitoring, Cloud Identity and Access Management (Cloud IAM), and more. The path ends with how to architect with Google Compute Engine and GKE. For a guided walkthrough of how to get started with Cloud IAM and Monitoring, register here to join me on February 10. You'll also have a chance to get your questions answered live by Google Cloud experts via chat.

    Cloud Engineer: To learn how to plan, configure, set up, and deploy cloud solutions, take this learning path. You'll learn how to get started with Google Compute Engine, Terraform in a cloud environment, GKE, and more.

    Data Analyst: This learning path will teach you how to gather and analyze data to identify trends and develop valuable insights to help solve problems. You'll be introduced to BigQuery, Looker, LookML, BigQuery ML, and Data Catalog.

    Data Engineer: Interested in designing and building systems that collect the data used for business decisions? Select this path. You'll learn how to modernize data lakes and data warehouses with Google Cloud, and afterwards discover how to use Dataflow for serverless data processing and more.

    DevOps Engineer: A DevOps Engineer is responsible for defining and implementing best practices for efficient and reliable software delivery and infrastructure management. This learning path will show you how to build an SRE culture, use the Google Cloud Operations Suite for DevOps, and more.

    Machine Learning Engineer: Choose this path for courses and labs on how to design, build, productionize, optimize, operate, and maintain ML systems. You'll discover how to use TensorFlow, MLOps tools, Vertex AI, and more.

    Cloud Developer: A Cloud Developer designs, builds, analyzes, and maintains cloud-native applications. This path will teach you how to use Cloud Run and Firebase for serverless app development. You'll also learn how to deploy to Kubernetes in Google Cloud.

    To learn more about the basics of Google Cloud infrastructure before getting started with a learning path, register here. Ready for your role-based training? Sign up here.

    Related Article: 2022 Resolution: Learn Google Cloud, free of charge. Technical practitioners and developers can start 2022 with free introductory training on how to use Google Cloud. Read Article

  • General availability: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 2, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Let’s have a chat about using dumps
    by /u/whooyeah (Microsoft Azure Certifications) on January 31, 2022 at 9:49 pm

    This keeps coming up recently, so it's important we have a sticky chat about it that everyone can see. Dumps are essentially cheating. They go against what the exams were designed to do: teach you Azure skills. For this reason they are also against Microsoft's terms of service for taking the exam. It's annoying as a professional, because you will be in a job interview and hear the hiring manager say things like "MCP exams are worthless because everyone just uses dumps", which is heartbreaking when you have spent so much time studying the subject knowledge and validating your skills with the exam. As a hiring manager it is annoying because I've interviewed candidates in the past with an MCSD, and it was clear they had no usable knowledge because they cheated with dumps. You will notice rule 1 in the sidebar. Breaking it will result in a ban.

  • This year, resolve to become a certified Professional Cloud Developer – here’s how
    by (Training & Certifications) on January 28, 2022 at 5:00 pm

    Do you have a New Year's resolution to improve your career prospects? Sign up here for 30 days no-cost access to Google Cloud Skills Boost to help you on your way to becoming a certified Professional Cloud Developer. According to third-party IT training firm Global Knowledge, two Google Cloud Certified Professional certifications topped its list of the highest-paid IT certifications in 2021. Once you register, you'll have an opportunity to take the Cloud Developer learning path, which consists of on-demand labs and courses covering Google Cloud infrastructure fundamentals, application development in the cloud, security, monitoring and troubleshooting, Kubernetes, Cloud Run, Firebase, and more. Along the way, you'll have an opportunity to earn skill badges to demonstrate your cloud knowledge and access resources to help you prepare for the Professional Cloud Developer certification.

    For example, once you've completed the Google Cloud Fundamentals: Core Infrastructure course, in person or on-demand, you can take the Getting Started With Application Development course, where you'll learn how to design and develop cloud-native applications that integrate managed services from Google Cloud, including the Cloud Client Libraries, the Cloud SDK, and Firebase SDKs, along with an overview of your storage options and best practices for using Datastore and Cloud Storage.

    We're also thrilled to announce that one of the most popular trainings in the Cloud Developer path, Application Development with Cloud Run, is now available on-demand, in addition to via live instruction. This is a great chance to get up to speed on this fully-managed, serverless compute platform at your own pace. Cloud Run marries the goodness of serverless and containers, and is fast becoming one of the most powerful ways to build and run a true cloud-native application.

    Moving down the proposed learning path, you can show off your Google Cloud chops with skill badges that you can display as part of your Google Developer Profile, alongside your membership in the Google Cloud Innovators program, on social media, and on your resume. There are a wide variety of interesting skill badges for cloud developers, like the Serverless Cloud Run Development Quest or Deploy to Kubernetes in Google Cloud, and many of them take just a couple of hours to complete.

    With these classes under your belt and skill badges on your profile, you'll be in a good place to start preparing for the Professional Cloud Developer certification exam, using the proposed exam guide and sample questions to show the way. Here's to earning your certification in 2022, and to a great future!

    Related Article: 2022 Resolution: Learn Google Cloud, free of charge. Technical practitioners and developers can start 2022 with free introductory training on how to use Google Cloud. Read Article

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus): New certifications
    by Azure service updates on January 19, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Technical Training Made Easy and Accessible, the Google Cloud way
    by (Training & Certifications) on January 14, 2022 at 12:40 pm

    Cloud engineers face a constant barrage of new cloud services, products, and innovations. By late 2021, Google Cloud alone had released thousands of new features across hundreds of services. Couple this with other technologies and service releases, and it quickly becomes a herculean task for engineers to navigate, consume, and stay current on the ever-changing technology landscape. We have heard from engineers that this often leads to anxiety and frustration as they struggle to keep up. They are faced with a plethora of training options but often lack the time and funding. Google Cloud has reinvigorated technical training to make it more informative and applicable to public sector customers and partners. We aim to maximize your training experience so you can get targeted training when you need it. The Google Cloud Public Sector Technical Learning Series addresses customer feedback and provides fun and practical training. Sessions are currently running every two weeks.

    "Short and sweet" technical topics geared to subjects you care about

    Generic training doesn't always resonate with public sector technologists. Our new curriculum targets specific public sector use cases, is delivered by customer engineers, and can be accomplished in less than two hours. This means participants can apply the learnings directly to real-life challenges quickly.

    Easy to find, easy to enroll

    Training opportunities should always be at your fingertips. Our automated training platform will ensure that you only need to enroll once. The system will automatically notify you of upcoming sessions so you can plan in advance and at your convenience. Sessions will be offered on a recurring basis to meet the needs of your organization.

    Fun and engaging

    Typical training sessions often include a sea of glazed eyes, unresponsive to basic prompts, falling asleep at our desks; we have all been there. But it doesn't have to be this way. Our goal is to infuse Google culture into our training through interactive exchanges and tangible rewards to keep participants inspired and engaged. Traditional technology training doesn't always help you navigate the nuts and bolts of how to effectively introduce a product into an organization. But we know that technology doesn't operate in isolation; it supports and becomes part of a living organism, managed by humans and confined by other components of an organization's structure (e.g. existing systems or decentralized business units).

    Part of a larger community of like-minded engineers

    Learning with, and from, a community of peers is one way to overcome the challenges and complexities of applying new technology within a complex organization. We created the Public Sector Connect community for this very reason. It is one example of how we surface best practices for public sector innovators. During weekly "Coffee Hours" and working sessions, our community members share their journeys and lessons learned with each other. We know that innovation evolves through iteration and diverse perspectives, and Public Sector Connect is committed to helping surface critical challenges and solutions, and connecting those who are solving similar problems. Join the community today.

  • 2022 Resolution: Learn Google Cloud, free of charge
    by (Training & Certifications) on January 12, 2022 at 5:00 pm

    Start your 2022 New Year's resolutions by learning, at no cost, how to use Google Cloud through the following training opportunities:

    30-day access to Google Cloud Skills Boost: Register by January 31, 2022 and claim 30 days of free access to Google Cloud Skills Boost to complete the Getting Started with Google Cloud learning path. Google Cloud Skills Boost is the definitive destination for skills development, where you can personalize learning paths, track progress, and validate your newly earned expertise with skill badges. The Getting Started with Google Cloud learning path gives you the opportunity to earn three skill badges after you complete hands-on labs and courses designed for aspiring cloud engineers and architects. It covers the fundamentals of Google Cloud, including core infrastructure, big data and ML, writing gcloud commands, using Cloud Shell, deploying virtual machines, and running containerized applications on GKE.

    Cloud OnBoard: half-day training on Google Cloud fundamentals: Attend the Getting Started Cloud OnBoard on January 20 for a comprehensive Google Cloud orientation. Google Cloud experts will show you how to run your compute, the storage options available, how to secure your data, and the Google Cloud managed services on offer.

    Cloud Study Jam: expert-guided hands-on lab: Google Cloud experts will walk you through a hands-on lab from Google Cloud Skills Boost's Getting Started with Google Cloud learning path when you join our Cloud Study Jam on January 27. They will also answer questions live via chat during the event.

  • Google Cloud doubles-down on ecosystem in 2022 to meet customer demand
    by (Training & Certifications) on January 11, 2022 at 3:00 pm

    Google Cloud has been a partner-focused business from day one. As we reflect on 2021 and look forward to what's ahead, I want to say "thank you" to our ecosystem for all of the amazing innovations and services you provided our mutual customers over the last year. In 2021, we faced unprecedented demand from businesses as they turned to the cloud to digitally transform their organizations. This surge in cloud deployments meant we increasingly turned to our ecosystem to help customers create customized implementations with our systems integrators (SIs), build packaged solutions with our independent software vendors (ISVs), or coach employees on how to best use new cloud technologies with our consulting and training firms.

    To continue meeting growing customer demand in 2022 and beyond, I am pleased to share that we are bringing together our ecosystem and channel sales teams into a single partner organization to deliver a more streamlined go-to-market approach for our partners and customers. In support of this change, we plan to more than double our spend in support of our partner ecosystem over the next few years, including rolling out increased co-innovation resources for partners, more incentives and co-marketing funds, and a larger commitment to training and enablement, all with the goal of continuing our joint momentum in the market.

    Providing leads and new go-to-market programs for consulting partners: The need for highly skilled partners to accelerate digital transformation for customers has never been greater, and our ecosystem of services partners continues to gain tremendous opportunities to deliver high-value implementation and professional services, industry solutions, and digital transformation expertise. In 2022, we are investing in our SIs by:
    - Moving to a partner-led, partner-delivered approach for professional services needed by our customers, particularly through expanded work with partners. This will include new programs for lead generation and lead sharing with our SI partners.
    - Increasing our investment with SIs in deploying go-to-market programs for industry-specific SI solutions, as well as creating more pre-integrated industry ISV and Google Cloud AI solutions together with our SI partners.
    - Accelerating critical training, specialization, and certification programs in support of our goal of training 40 million new people on Google Cloud. This includes new programs for experienced practitioners, and a hybrid learning modality that combines online and in-person learning supported by Google mentors.

    Accelerating growth for ISV partners with more resources: In 2021, our ISV partners helped build unique integrations with Google Cloud capabilities in AI, ML, data, analytics, and security for our mutual customers. In fact, our marketplace third-party transaction value was up more than 500% year over year from 2020 (Q1-Q3). In 2022, we are deepening our commitment to our ISV partners' success by:
    - Making significant investments in new Google Cloud Marketplace functionality, including adding new technical resources that will help accelerate how ISVs distribute their apps and solutions. Coupled with this, we're also lowering the Marketplace rate to 3% for eligible solutions, helping drive more adoption with customers.
    - Expanding our regional sales and technical teams dedicated to supporting ISVs, while increasing market development funds (MDF) to drive further sales growth for our ISVs.
    - Dedicating additional technical resources to help ISVs move to more modern SaaS delivery models, as well as to optimize and supercharge their apps for their customers by leveraging Google Cloud technologies.
    - Creating new monetization models for ISVs using Google Distributed Cloud to deliver products across hybrid environments, multiple clouds, and at the network edge. ISVs will be able to build industry-specific 5G and edge solutions leveraging our ecosystem of telecommunication providers and 140+ Google network edge locations.
    - Increasing funds for ISVs to accelerate customer cloud migrations by offsetting infrastructure costs during migration (ISV Cloud Acceleration Program).

    Launching new program incentives to drive a thriving channel: Since the launch of our Partner Advantage program, we have increased funds for our channel partners tenfold. In 2021, to extend this momentum, we expanded our incentive portfolio for resellers to support their long-term growth and profitability. In 2022, we are increasing our investment in partner programs even further, including:
    - Significantly expanding incentives to reward partners who source and grow customer engagements, and those who deliver exceptional customer experiences and critical implementation services.
    - Evolving to industry-standard compensation plans for our direct sellers, and rewarding our channel partners for implementation (vs. reselling) for larger enterprise customers.
    - Significantly increasing co-marketing funding for our channel partners to accelerate demand generation and time-to-close.
    - Growing our learning resources, including launching more than 10 new Expertises and Specializations, and expanding our certification programs for partners to deliver the highest levels of Google Cloud expertise to customers.
    - Launching a new program for resellers to support customers via offerings on the Google Cloud Marketplace.
    - Sharing a toolkit to bring the best of Google's diversity, equity, and inclusion (DEI) resources to our ecosystem of partners, including programs to develop inclusive marketing strategies and deploy DEI training within their own organizations.

    As we kick off 2022, it's clear that the trend of digital transformation will only continue to drive customer demand for the cloud and, more importantly, a need for services, support, and solutions from our partners. We believe that by centralizing our partner groups into a single organization and by more than doubling our spend in support of our partner ecosystem over the next few years, we will help accelerate our joint momentum in the market around the world. For more information on these new programs and resources, please reach out to your Partner Account Manager or log in to your Partner Advantage portal at partneradvantage.goog.

  • Are you a multicloud engineer yet? The case for building skills on more than one cloud
    by (Training & Certifications) on January 7, 2022 at 5:00 pm

    Over the past few months, I made the choice to move from the AWS ecosystem to Google Cloud (both great clouds!) and I think it's made me a stronger, more well-rounded technologist. But I'm just one data point in a big trend. Multicloud is an inevitability in medium-to-large organizations at this point, as I and others have been saying for a while now. As IT footprints get more complex, you should expect to see a broader range of cloud provider requirements showing up where you work and interview. Ready or not, multicloud is happening.

    In fact, HashiCorp's recent State of Cloud Strategy Survey found that 76% of employers are already using multiple clouds in some fashion, with more than 50% flagging lack of skills among their employees as a top challenge to survival in the cloud. That spells opportunity for you as an engineer. But with limited time and bandwidth, where do you place your bets to ensure that you're staying competitive in this ever-cloudier world? You could pick one cloud to get good at and stick with it; that's a perfectly valid career bet. (And if you do bet your career on one cloud, you should totally pick Google Cloud! I have reasons!) But in this post I'm arguing that expanding your scope of professional fluency to at least two of the three major US cloud providers (Google Cloud, AWS, Microsoft Azure) opens up some unique, future-optimized career options.

    What do I mean by "multicloud fluency"? For the sake of this discussion, I'm defining "multicloud fluency" as a level of familiarity with each cloud that would enable you to, say, pass the flagship professional-level certification offered by that cloud provider: for example, Google Cloud's Professional Cloud Architect certification or AWS's Certified Solutions Architect - Professional. Notably, I am not saying that multicloud fluency implies experience maintaining production workloads on more than one cloud, and I'll clarify why in a minute.

    How does multicloud fluency make you a better cloud engineer? I asked the cloud community on Twitter to give me some examples of how knowledge of multiple clouds has helped their careers, and dozens of engineers responded with a great discussion. It turns out that even if you never incorporate services from multiple clouds in the same project (and many people don't!), there's still value in understanding how the other cloud lives.

    Learning the lingua franca of cloud: I like the framing of the different cloud providers as "Romance languages". As with human languages in the same family tree, clouds share many of the same conceptual building blocks, and adults learn primarily by analogy to things we've already encountered. Just as learning one programming language makes it easier to learn more, learning one cloud reduces your ramp-up time on others. More than just helping you absorb new information faster, understanding the strengths and tradeoffs of different cloud providers can help you make the best choice of services and architectures for new projects. I remember struggling with this at times when I worked for a consulting shop that focused exclusively on AWS. A client would ask, "What if we did this on Azure?" and I really didn't have the context to be sure. But if you have a solid foundational understanding of the landscape across the major providers, you can feel confident (and inspire confidence!) in your technical choices.

    Becoming a unicorn: To be clear, this level of awareness isn't common among engineering talent. That's why people with multicloud chops are often considered "unicorns" in the hiring market. Want to stand out in 2022? Show that you're conversant in more than just one cloud. At the very least, it expands the market for your skills to include companies that focus on each of the clouds you know. Taking that idea to its extreme, some of the biggest advocates for the value of a multicloud resumé are consultants, which makes sense given that they often work on different clouds depending on the client project of the week. Lynn Langit, an independent consultant and one of the cloud technologists I most respect, estimates that she spends about 40% of her consulting time on Google Cloud, 40% on AWS, and 20% on Azure. Fluency across providers lets her select the engagements that are most interesting to her and allows her to recommend the technology that provides the greatest value. But don't get me wrong: multicloud skills can also be great for your career progression if you work on an in-house engineering team. As companies' cloud posture becomes more complex, they need technical leaders and decision-makers who comprehend their full cloud footprint. Want to become a principal engineer or engineering manager at a mid-to-large-sized enterprise or growing startup? Those roles require an organization-wide understanding of your technology landscape, and that's probably going to include services from more than one cloud.

    How to multicloud-ify your career: We've established that some familiarity with multiple clouds expands your career options. But learning one cloud can seem daunting enough, especially if it's not part of your current day job. How do you chart a multicloud career path that doesn't end with you spreading yourself too thin to be effective at anything?

    Get good at the core concepts: Yes, all the clouds are different. But they share many of the same basic approaches to IAM, virtual networking, high availability, and more. These are portable fundamentals that you can move between clouds as needed. If you're new to cloud, an associate-level solutions architect certification will help you cover the basics. Make sure to do hands-on labs to make the concepts real, though; we learn much more by doing than by reading.

    Go deep on your primary cloud: Fundamentals aside, it's really important that you have a native level of fluency in one cloud provider. You may have the opportunity to pick up multicloud skills on the job, but to get a cloud engineering role you're almost certainly going to need to show significant expertise on a specific cloud. Note: if you're brand new to cloud and not sure which provider to start with, my biased (but informed) recommendation is to give Google Cloud a try. It has a free tier that won't bill you until you give permission, and the nifty project structure makes it really easy to spin up and tear down different test environments. It's worth noting that engineering teams specialize, too; everybody has loose ends, but they'll often try to standardize on one cloud provider as much as they can. If you work on such a team, take advantage of the opportunity to get as much hands-on experience with their preferred cloud as possible.

    Go broad on your secondary cloud: You may have heard of the concept of T-shaped skills. A well-rounded developer is broadly familiar with a range of relevant technologies (the horizontal part of the "T") and an expert in a deep, specific niche. You can think of your skills on your primary cloud provider as the deep part of your "T". (Actually, let's be real: even a single cloud has too many services for any one person to hold in their head at an expert level. Your niche is likely to be a subset of your primary cloud's services: say, security or data.) We could put this a different way: build on your primary cloud, get certified on your secondary. This gives you hirable expertise on your "native" cloud and situational awareness of the rest of the market. As opportunities come up to build on that secondary cloud, you'll be ready. I should add that several people have emphasized to me that they sense diminishing returns when keeping up with more than one secondary cloud. At some point the cognitive switching gets overwhelming and the additional learning doesn't add much value; for most engineers, two clouds is the sweet spot, not one or three.

    Bet on cloud-native services and multicloud tooling: The whole point of building on the cloud is to take advantage of what the cloud does best, and usually that means leveraging powerful, native managed services like Spanner and Vertex AI. On the other hand, the cloud ecosystem has now matured to the point where fantastic, open-source multicloud management tooling for wrangling those provider-specific services is readily available. (Doing containers on cloud? Probably using Kubernetes! Looking for a DevOps role? The team is probably looking for Terraform expertise no matter what cloud they major on.) By investing learning time in some of these cross-cloud tools, you open even more doors to build interesting things with the team of your choice.

    Multicloud and you: When I moved into the Google Cloud world after years of being an AWS Hero, I made sure to follow a new set of Google Cloud voices like Stephanie Wong and Richard Seroter. But I didn't ghost my AWS-using friends, either! I'm a better technologist (and a better community member) when I keep up with both ecosystems. "But I can hardly keep up with the firehose of features and updates coming from Cloud A. How will I be able to add in Cloud B?" Accept that you can't know everything. Nobody does. Use your broad knowledge of cloud fundamentals as an index, read the docs frequently for services that you use a lot, and keep your awareness of your secondary cloud fresh:
    - Follow a few trusted voices who can help you filter the signal from the noise.
    - Attend a virtual event once a quarter or so; it's never been easier to access live learning.
    - Build a weekend side project that puts your skills into practice.

    Ultimately, you (not your team or their technology choices!) are responsible for the trajectory of your career. If this post has raised career questions that I can help answer, please feel free to hit me up on Twitter. Let's continue the conversation.

  • Azure Database for PostgreSQL – Hyperscale (Citus): New toolkit certifications generally available
    by Azure service updates on December 15, 2021 at 5:00 pm

    New Toolkit certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Azure VMware Solution achieves FedRAMP High Authorization
    by Azure service updates on September 15, 2021 at 11:53 pm

    With this certification, U.S. government and public sector customers can now use Azure VMware Solution as a compliant FedRAMP cloud computing environment, ensuring it meets the demanding standards for security and information protection.

  • Azure expands HITRUST certification across 51 Azure regions
    by Azure service updates on August 23, 2021 at 9:38 pm

    Azure expands offering and region coverage to Azure customers with its 2021 HITRUST validated assessment.

  • Azure Database for PostgreSQL - Hyperscale (Citus) now compliant with additional certifications
    by Azure service updates on June 9, 2021 at 4:00 pm

    New certifications are now available for Hyperscale (Citus) on Azure Database for PostgreSQL, a managed service running the open-source Postgres database on Azure.

  • Azure expands PCI DSS certification
    by Azure service updates on March 15, 2021 at 5:02 pm

    You can now leverage Azure’s Payment Card Industry Data Security Standard (PCI DSS) certification across all live Azure regions.

  • 172 Azure offerings achieve HITRUST certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure expands its depth of offerings to Azure customers with its latest independent HITRUST assessment.

  • Azure achieves its first PCI 3DS certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure’s PCI 3DS Attestation of Compliance, PCI 3DS Shared Responsibility Matrix, and PCI 3DS whitepaper are now available.

  • Azure Databricks Achieves FedRAMP High Authorization on Microsoft Azure Government
    by Azure service updates on November 25, 2020 at 5:00 pm

    With this certification, customers can now use Azure Databricks to process the U.S. government’s most sensitive, unclassified data in cloud computing environments, including data that involves the protection of life and financial assets.

  • New SAP HANA Certified Memory-Optimized Virtual Machines now available
    by Azure service updates on November 12, 2020 at 5:01 pm

    We are expanding our SAP HANA certifications, enabling you to run production SAP HANA workloads on the Edsv4 virtual machine sizes.

  • Azure achieves Service Organization Controls compliance for 14 additional services
    by Azure service updates on November 11, 2020 at 5:10 pm

    Azure gives you some of the industry's broadest certification coverage for the critical SOC 1, 2, and 3 compliance offerings, which are widely used around the world.

  • Announcing the unified Azure Certified Device program
    by Azure service updates on September 22, 2020 at 4:05 pm

    A unified and enhanced Azure Certified Device program was announced at Microsoft Ignite, expanding on previous Microsoft certification offerings that validate that IoT devices meet specific capabilities and are built to run on Azure. This program offers a low-cost opportunity for device builders to increase the visibility of their products while making it easy for solution builders and end customers to find the right device for their IoT solutions.

  • IoT Security updates for September 2020
    by Azure service updates on September 22, 2020 at 4:05 pm

    New Azure IoT Security product updates include improvements around monitoring, edge nesting and the availability of Azure Defender for IoT.

  • Azure Certified for Plug and Play is now available
    by Azure service updates on August 27, 2020 at 12:21 am

    IoT Plug and Play device certification is now available from Microsoft as part of the Azure Certified device program.

  • Azure France has achieved GSMA accreditation
    by Azure service updates on August 6, 2020 at 5:45 pm

    Azure has added an important compliance offering for telecommunications in France, the Global System for Mobile Communications Association (GSMA) Security Accreditation Scheme for Subscription Management (SAS-SM).

  • Azure Red Hat OpenShift is now ISO 27001 certified
    by Azure service updates on July 21, 2020 at 4:00 pm

    To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now ISO 27001 certified.

  • Azure Lighthouse updates—April 2020
    by Azure service updates on June 1, 2020 at 4:00 pm

    Several critical updates have been made to Azure Lighthouse, including FedRAMP certification, delegation opt-out, and Azure Backup reports.

  • Azure NetApp Files—New certifications, increased SLA, expanded regional availability
    by Azure service updates on May 19, 2020 at 4:00 pm

    The SLA guarantee for Azure NetApp Files has increased to 99.99 percent. In addition, NetApp Files is now HIPAA and FedRAMP certified, and regional availability has been increased.

  • Kubernetes on Azure Stack Hub in GA
    by Azure service updates on February 25, 2020 at 5:00 pm

    We now support Kubernetes cluster deployment on Azure Stack Hub, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS Engine on Azure Stack Hub.

  • Azure Firewall Spring 2020 updates
    by Azure service updates on February 19, 2020 at 5:00 pm

    Azure Firewall is now ICSA Labs certified. In addition, several key Azure Firewall capabilities have recently been released into general availability (GA) and preview.

  • Azure IoT C# and Java SDKs release new long-term support (LTS) branches
    by Azure service updates on February 14, 2020 at 5:00 pm

    The Azure IoT Java and C# SDKs have each now released new long-term support (LTS) branches.

  • HPC Cache receives ISO certifications, adds stopping feature, and new region
    by Azure service updates on February 11, 2020 at 5:00 pm

    Azure HPC Cache has received new ISO 27001, 27018, and 27701 certifications, adds new features to manage storage caching in performance-driven workloads, and expands service access to Korea Central.

  • Azure Blueprint for FedRAMP High now available in new regions
    by Azure service updates on February 3, 2020 at 5:00 pm

    The Azure Blueprint for FedRAMP High is now available in both Azure Government and Azure Public regions. This is in addition to the Azure Blueprint for FedRAMP Moderate released in November 2019.

  • Azure Databricks Is now HITRUST certified
    by Azure service updates on January 22, 2020 at 5:01 pm

    Azure Databricks is now certified for the HITRUST Common Security Framework (HITRUST CSF®), the most widely adopted security framework in the healthcare industry. With this certification, healthcare customers can use volumes of clinical data to drive innovation with Azure Databricks without worrying about security and risk.

  • Microsoft plans to establish new cloud datacenter region in Qatar
    by Azure service updates on December 11, 2019 at 8:00 pm

    Microsoft recently announced plans to establish a new cloud datacenter region in Qatar to deliver its intelligent, trusted cloud services and expand the Microsoft global cloud infrastructure to 55 cloud regions in 20 countries.

  • Azure NetApp Files HANA certification and new region availability
    by Azure service updates on November 4, 2019 at 5:00 pm

    Azure NetApp Files, one of the fastest-growing bare-metal Azure services, has achieved SAP HANA certification for both scale-up and scale-out deployments.

  • Azure achieves TruSight certification
    by Azure service updates on September 23, 2019 at 5:00 pm

    Azure achieved certification for TruSight, an industry-backed, best-practices third-party assessment utility.

  • IoT Plug and Play Preview is now available
    by Azure service updates on August 21, 2019 at 4:00 pm

    With IoT Plug and Play Preview, solution developers can start using Azure IoT Central to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play.

  • View linked GitHub activity from the Kanban board
    by Azure service updates on June 21, 2019 at 5:00 pm

    We continue to enhance the Azure Boards integration with GitHub. Now you can see your linked GitHub commits, pull requests, and issues on your Kanban board. This information gives you a quick sense of where an item stands and lets you navigate directly to the GitHub commit, pull request, or issue for more details.

  • Video Indexer is now ISO, SOC, HiTRUST, FedRAMP, HIPAA, PCI certified
    by Azure service updates on April 2, 2019 at 9:08 pm

    Video Indexer has received new certifications to fit with enterprise certification requirements.

  • Azure South Africa regions are now available
    by Azure service updates on March 7, 2019 at 6:00 pm

    Azure services are available from new cloud regions in Johannesburg (South Africa North) and Cape Town (South Africa West), South Africa. The launch of these regions is a milestone for Microsoft.

  • Azure DevOps Roadmap update for 2019 Q1
    by Azure service updates on February 14, 2019 at 8:22 pm

    We updated the Features Timeline to provide visibility on our key investments for this quarter.

  • Azure Stack—FedRAMP High documentation now available
    by Azure service updates on November 1, 2018 at 7:00 pm

    FedRAMP High documentation is now available for Azure Stack customers.

  • Azure Stack Infrastructure—compliance certification guidance
    by Azure service updates on November 1, 2018 at 7:00 pm

    We have created documentation to describe how Azure Stack infrastructure satisfies regulatory technical controls for PCI-DSS and CSA-CCM.

  • Kubernetes on Azure Stack in preview
    by Azure service updates on November 1, 2018 at 7:00 pm

    We now support Kubernetes cluster deployment on Azure Stack, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS-Engine on Azure Stack.

  • Logic Apps is ISO, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant
    by Azure service updates on July 18, 2017 at 5:05 pm

    The Logic Apps feature of Azure App Service is now ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant.

  • Apache Kafka on HDInsight with Azure Managed Disks
    by Azure service updates on June 30, 2017 at 3:44 pm

    We're pleased to announce Apache Kafka with Azure Managed Disks Preview on the HDInsight platform. Users will now be able to deploy Kafka clusters with managed disks straight from the Azure portal, with no signup necessary.

  • Azure Backup for Windows Server system state
    by Azure service updates on June 14, 2017 at 10:54 pm

    Customers will now be able to perform comprehensive, secure, and reliable Windows Server recoveries. We will be extending the data backup capabilities of the Azure Backup agent so that it integrates with the Windows Server Backup feature, available natively on every Windows Server.

  • Azure Data Catalog is ISO, CSA STAR, HIPAA, EU Model Clauses compliant
    by Azure service updates on March 7, 2017 at 12:00 am

    Azure Data Catalog is ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, and EU Model Clauses compliant.

  • Azure compliance: Azure Cosmos DB certified for ISO 27001, HIPAA, and the EU Model Clauses
    by Azure service updates on March 25, 2016 at 10:00 am

    The Azure Cosmos DB team is excited to announce that Azure Cosmos DB is ISO 27001, HIPAA, and EU Model Clauses compliant.

  • Compliance updates for Azure public cloud
    by Azure service updates on March 16, 2016 at 9:24 pm

    We’re adding more certification coverage to our Azure portfolio, so regulated customers can take advantage of new services.

  • Protect and recover your production workloads in Azure
    by Azure service updates on October 2, 2014 at 5:00 pm

    With Azure Site Recovery, you can protect and recover your production workloads while saving on capital and operational expenditures.

  • ISO Certification expanded to include more Azure services
    by Azure service updates on January 17, 2014 at 1:00 am

    Azure ISO Certification expanded to include SQL Database, Active Directory, Traffic Manager, Web Sites, BizTalk Services, Media Services, Mobile Services, Service Bus, Multi-Factor Authentication, and HDInsight.


Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Google Cloud Associate Engineer — $145,769/year
Azure/Microsoft Cloud Solution Architect — $141,748/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year

DevOps Interview Questions, Answers, and Scripts

Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google, and Microsoft).

Credit: Steve Nouri – Follow Steve Nouri for more AI and Data science posts:

Deployment

What is a Canary Deployment?

A canary deployment, or canary release, allows you to rollout your features to only a subset of users as an initial test to make sure nothing else in your system broke.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage, say 10%, of your user base to the new version while keeping 90% of users on the old version. If that 10% reports no errors, you can gradually roll it out to more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will never even have seen the problem.
Canary deployment benefits include zero downtime, easy rollout, and quick rollback, plus the added safety of the gradual rollout process. It also has some drawbacks: the expense of maintaining multiple server instances and the difficult clone-or-don't-clone database decision.
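The traffic split at the heart of a canary release can be sketched in plain shell. This is a toy router, not a real load balancer; the 10% weight and the version labels are illustrative only.

```shell
#!/usr/bin/env bash
# Toy canary router: send ~10% of requests to the new (canary) version.
# In production this weighting lives in the load balancer, not a script.
route_request() {
  if (( RANDOM % 10 == 0 )); then
    echo "canary"    # ~10% of traffic hits the new version
  else
    echo "stable"    # ~90% stays on the old version
  fi
}

# Simulate 20 incoming requests and count where they landed.
for i in {1..20}; do
  route_request
done | sort | uniq -c
```

Raising the canary's share is then just a matter of changing the weight until it serves 100% of traffic, at which point the old version can be retired.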


Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.


What is a Blue Green Deployment?

Reference: Blue Green Deployment

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your model, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the model in Green, you switch the router, so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
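One minimal way to picture the router switch is an atomic symlink flip. This is only a sketch (real setups flip a load balancer pool or a DNS entry), and the directory names are illustrative:

```shell
#!/usr/bin/env bash
# Toy blue-green switch: "live" is a symlink the web server would serve from.
set -e
cd "$(mktemp -d)"
mkdir -p blue green
echo "v1" > blue/index.html    # Blue holds the current release
echo "v2" > green/index.html   # Green holds the new release

ln -sfn blue live              # Blue is live, Green is idle
cat live/index.html            # -> v1

# New version fully tested in Green, so flip the "router" atomically:
ln -sfn green live
cat live/index.html            # -> v2

# Rollback is the same flip in reverse:
ln -sfn blue live
cat live/index.html            # -> v1
```

The flip is a single rename-style operation, which is what gives blue-green its instant cutover and instant rollback.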

How to do a software release?

There are some steps to follow.
• Create a checklist
• Create a release branch
• Bump the version
• Merge the release branch to master & tag it
• Use a pull request to merge the release branch
• Deploy master to the Prod environment
• Merge back into develop & delete the release branch
• Generate the change log
• Communicate with stakeholders
• Groom the issue tracker
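Assuming a Git Flow-style repository with develop and master branches (the branch names and the 1.2.0 version here are illustrative, and `git init -b` needs Git >= 2.28), the steps above map roughly to:

```shell
#!/usr/bin/env bash
# Illustrative release flow in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial"
git checkout -q -b develop

git checkout -q -b release/1.2.0 develop    # create the release branch
echo "1.2.0" > VERSION                      # bump the version
git add VERSION && git commit -q -m "Bump version to 1.2.0"

git checkout -q master                      # merge release to master & tag it
git merge -q --no-ff -m "Release 1.2.0" release/1.2.0
git tag v1.2.0

git checkout -q develop                     # merge back into develop
git merge -q --no-ff -m "Merge release 1.2.0 back" release/1.2.0
git branch -q -d release/1.2.0              # delete the release branch
git tag --list                              # -> v1.2.0
```

In practice the merge into master would go through a pull request rather than a direct merge, but the branch topology is the same.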

How to automate the whole build and release process?

• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD
manufacturing.
The above process will be done by Jenkins by creating the jobs.

Have you ever participated in Prod deployments? If yes, what is the procedure?

• Preparation & planning: what kind of system/technology is supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes are going to talk to each other in a foolproof manner
• The production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, & installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system

Devops Tools and Concepts

What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?

By the name DevOps, it's very clear that it's a collaboration of Development and Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools that help automate a whole infrastructure.
DevOps is basically an implementation of the Agile methodology on the Development side as well as the Operations side.

We need DevOps to deliver more, faster, and better applications to meet the ever-growing demands of users. DevOps helps deployments happen really fast compared to any other traditional tools.



The key aspects or principles behind DevOps are:


  • Infrastructure as a Code
  • Continuous Integration
  • Continuous Deployment
  • Automation
  • Continuous Monitoring
  • Security

Popular tools for DevOps are:

  • Git
  • AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
  • Jenkins
  • Ansible
  • Puppet
  • Nagios
  • Docker
  • ELK (Elasticsearch, Logstash, Kibana)

Can we consider DevOps as Agile methodology?

Of course we can! The only difference between the Agile methodology and DevOps is that Agile is implemented only for the development side, while DevOps implements agility on both the development and the operations sides.

What are some of the most popular DevOps tools?
• Selenium
• Puppet
• Chef
• Git
• Jenkins
• Ansible
• Docker

What is the job Of HTTP REST API in DevOps?

DevOps centers on automating your infrastructure and promoting changes through a pipeline of stages; a typical CI/CD pipeline has stages like build, test, sanity test, UAT, and deployment to the Prod environment. Each stage uses different tools and a different technology stack, so there must be a way to integrate the tools into a complete toolchain. That is where the HTTP REST API comes in: each tool talks to the other tools over its API, and users can also use an SDK (such as Boto3 for Python against the AWS APIs) to automate based on events. These days pipelines are mostly event-driven rather than batch processing.

What is Scrum?

Scrum is basically used to divide your complex software and product development task into smaller chunks, using iterations and incremental practices. Each iteration is of two weeks. Scrum consists of three roles: Product owner, scrum master and Team

What are microservices, and how do they enable efficient DevOps practices?

In conventional architecture, each application is a monolith: the application is developed by a group of developers, deployed as a single application on many machines, and exposed to the outside world using load balancers. Microservices means splitting your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. Developers can likewise be formed into small teams, and each piece of the application may follow different guidelines for an efficient development process; each service uses REST APIs (or message queues) to communicate with the others.
With this split, the build and release of one faulty service does not affect the whole architecture; only some functionality is lost. That is what enables efficient and faster CI/CD pipelines and DevOps practices.

What is Continuous Delivery?

Continuous Delivery is an extension of Continuous Integration which primarily serves to get the features that developers are continuously developing out to end users as soon as possible.
During this process, the code passes through several stages of QA, staging, etc. before delivery to the PRODUCTION system.

Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.


Devops Continuous Integration vs Continuous delivery

Why Automate?

Developers and administrators have traditionally provisioned their infrastructure manually. Rather than relying on manual steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.

What is Puppet?

Puppet is a Configuration Management tool, Puppet is used to automate administration tasks.

What is Configuration Management?

Configuration Management is a systems engineering process. Applied over the life cycle of a system, Configuration Management provides visibility and control of its performance, functional, and physical attributes, recording their status in support of Change Management.

Software Configuration Management Features are:

• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes

What is Vagrant and what are its uses?

Vagrant is a tool for creating and managing environments for testing and developing software.
Vagrant has traditionally used VirtualBox as the hypervisor for its virtual environments, and it currently also supports KVM (Kernel-based Virtual Machine).


What’s a PTR in DNS?

A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.

What testing is necessary to ensure a new service is ready for production?

Continuous testing

What is Continuous Testing?

It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.


What are the key elements of continuous testing?

Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.

How does HTTP work?

The HTTP protocol works in a client-server model, like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.

What is IaC? How you will achieve this?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning as the DevOps team uses for source code. This can be achieved by using tools such as Chef, Puppet, Ansible, CloudFormation, etc.

Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.
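The property that makes these code-described environments safe to reapply is idempotency: applying the same description twice yields the same state. A toy sketch of that idea in shell (real IaC would use Terraform, CloudFormation, Ansible, etc.; the paths and the "desired state" here are illustrative):

```shell
#!/usr/bin/env bash
# Toy "desired state" apply: ensure a config directory and file exist.
# Running apply once or ten times produces the identical end state.
set -e
STATE_DIR="$(mktemp -d)/app"

apply() {
  mkdir -p "$STATE_DIR/conf"                       # ensure the directory exists
  printf 'port=8080\n' > "$STATE_DIR/conf/app.cfg" # ensure the file content
}

apply
before=$(cat "$STATE_DIR/conf/app.cfg")
apply   # a second apply changes nothing
after=$(cat "$STATE_DIR/conf/app.cfg")
[ "$before" = "$after" ] && echo "idempotent"
```

Real IaC tools add the other half of the story on top of this: they diff the desired state against the actual state and apply only the changes needed, which is what eliminates configuration drift.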

What are patterns and anti-patterns of software delivery and deployment?


What are Microservices?

Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps.

  • Small
  • Decoupled
  • Owned by self-contained teams

Version Control

What is a version control system?

Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously
• Does not allow overwriting of each other's changes
• Maintains the history of every version
There are two types of Version Control Systems:
1. Centralized Version Control System, e.g. SVN
2. Distributed/Decentralized Version Control System, e.g. Git (hosted on services such as Bitbucket)

What is Source Control?

An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.

Whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code changes, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can:

• Collaborate on code with your team.

• Isolate your work until it is ready.


• Quickly troubleshoot issues by identifying who made changes and what the changes were.

Source code management systems help streamline the development process and provide a centralized source for all your code.

What is Git and explain the difference between Git and SVN?

Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used to store our repositories on a remote server such as GitHub.

Git vs SVN:
• Git is a decentralized version control tool; SVN is a centralized version control tool.
• Git gives every developer a local repo with the full history of the whole project on their hard drive, so if there is a server outage you can easily recover from a teammate's local Git repo; SVN relies only on the central server to store all the versions of the project files.
• Git push and pull operations are fast; in SVN they are slower.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• In Git, client nodes can hold the entire repository on their local systems; in SVN, version history is stored in the server-side repository.
• Git commits can be done offline; SVN commits can be done only online.
• In SVN, work is shared automatically by commit; in Git, nothing is shared until you push.

Describe branching strategies?

Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.

Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.

Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have
progressed since the release was initiated earlier.

What are Pull requests?

Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements.
Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.

Linux

What is the default file permissions for the file and how can I modify it?

Default file permissions for a newly created file are rw-r--r-- (644): the base mode 666 minus the default umask of 022.
If I want to change the default file permissions, I need to use the umask command, e.g. umask 077 (new files are then created as rw-------, 600). Note that umask specifies the permission bits to remove, so umask 666 would create files with no permissions at all.
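The umask arithmetic can be checked directly (note that `stat -c` is GNU-specific; macOS uses `stat -f`):

```shell
#!/usr/bin/env bash
# Show how the umask shapes the mode of newly created files (base mode 666).
cd "$(mktemp -d)"
umask 022 && touch a && stat -c '%a' a   # -> 644 (666 minus 022)
umask 077 && touch b && stat -c '%a' b   # -> 600 (666 minus 077)
```

The same subtraction applies to directories, except their base mode is 777, so the default umask of 022 yields 755.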

What is a  kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.


What is difference between grep -i and grep -v?

-i ignores case distinctions in the pattern; -v inverts the match, selecting the lines that do not match.
Example:  ls | grep -i docker
Dockerfile
docker.tar.gz
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
You can't see anything with the name docker.tar.gz (Dockerfile still appears because the lowercase pattern "docker" does not match its capital D).

How can you define particular space to the file?

This feature is generally used to allocate swap space on a server. Let's say on the machine below I have to create a swap file of 1 GB; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
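On its own, dd only allocates the file; turning it into active swap takes a few more root-only steps, shown here as comments. A sketch using a 1 MiB file so the allocation part can run unprivileged (`status=none` and `stat -c` are GNU-specific):

```shell
#!/usr/bin/env bash
set -e
cd "$(mktemp -d)"
# Allocate the file (use bs=1G count=1 for a real 1 GB swap file).
dd if=/dev/zero of=swapfile1 bs=1M count=1 status=none
stat -c '%s' swapfile1    # -> 1048576 bytes

# The remaining steps require root:
#   chmod 600 /swapfile1      # swap files must not be world-readable
#   mkswap /swapfile1         # format the file as swap
#   swapon /swapfile1         # enable it; verify with `swapon --show`
```

To make the swap file survive a reboot, it would also need an entry in /etc/fstab.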

What is concept of sudo in linux?

Sudo (superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.

What are the checks to be done when a Linux build server become suddenly slow?

Perform a check on the following items:
1. Application-level troubleshooting: check various log files (application server log file, WebLogic logs, web server log, application log file, HTTP logs) for any issues in server receive or response time, and check for any memory leaks in the application.
2. System-level troubleshooting: perform a check on disk space, RAM, and I/O read-write issues.
3. Dependent-services troubleshooting: check whether there are any issues with the network, antivirus, firewall, or SMTP server response time.

Jenkins

What is Jenkins?

Jenkins is an open-source continuous integration tool written in Java. It keeps track of the version control system and initiates and monitors a build if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.


What is the difference between Maven, Ant and Jenkins?

Maven and Ant are build technologies, whereas Jenkins is a continuous integration (CI/CD) tool.

What is continuous integration?

When multiple developers or teams are working on different segments of the same web application, we need to perform an integration test by integrating all the modules. To do that, an automated process runs against each piece of code on a daily basis so that all your code gets tested. This whole process is termed continuous integration.

Devops: Continuous Integration

Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

The microservices architecture is a design approach to build a single application as a set of small services.

What are the advantages of Jenkins?

• Bug tracking is easy at an early stage in the development environment.
• Provides a very large number of plugins.
• Iterative improvement to the code; code is basically divided into small sprints.
• Build failures are caught at the integration stage.
• For each code commit, an automatic build report notification gets generated.
• To notify developers about build success or failure, it can be integrated with an LDAP mail server.
• Achieves continuous integration, agile development, and a test-driven development environment.
• With simple steps, a Maven release project can also be automated.

Which SCM tools does Jenkins supports?

Source code management tools supported by Jenkins are below:
• AccuRev
• CVS
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase
• RTC

I have 50 jobs in the Jenkins dashboard; I want to build all the jobs at once

In Jenkins there is a plugin called "build after other projects are built". We can provide job names there, and if one parent job runs, it will automatically run all the other jobs. Or we can use pipeline jobs.

How can I integrate all the tools with Jenkins?

I have to navigate to Manage Jenkins and then Global Tool Configuration; there I have to provide all the details, such as the Git URL, Java version, Maven version, paths, etc.

How to install Jenkins via Docker?

The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image & run it as a container in Docker using the
following docker run command:


• docker run -u root --rm -d -p 8080:8080 -p 50000:50000 -v jenkinsdata:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
• Proceed to the Post-installation setup wizard 
• Accessing the Jenkins/Blue Ocean Docker container:

docker exec -it jenkins-blueocean bash
• Accessing the Jenkins console log through Docker logs:

docker logs <docker-container-name>
• Accessing the Jenkins home directory:

docker exec -it <docker-container-name> bash

Bash – Shell scripting

Write a shell script to add two numbers

echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=$(expr "$a" + "$b")
echo "$a + $b = $c"

How to get a file that consists of last 10 lines of the some other file?

tail -10 file1 > file2 (redirect to a different file; redirecting to the same file would truncate it before tail reads it)


How to check the exit status of the commands?

echo $?
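For example, since $? reflects only the most recently executed command, it should be checked immediately:

```shell
#!/usr/bin/env bash
# $? holds the exit status of the last command: 0 means success, non-zero failure.
true
echo $?                       # -> 0 (success)
false
echo $?                       # -> 1 (failure)
ls /no/such/path 2>/dev/null
echo $?                       # -> non-zero (ls failed)
```

In scripts, the same value is usually tested directly, e.g. `if some_command; then … fi`, rather than echoed.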

How to get the information from file which consists of the word “GangBoard”?

grep “GangBoard” filename

How to search the files with the name of “GangBoard”?

find / -type f -name “*GangBoard*”

Write a shell script to print only prime numbers?

DevOps script to print prime numbers
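The referenced script is not reproduced above; a minimal version, with the upper bound taken from $1 and defaulting to 50 (both choices are illustrative), could look like:

```shell
#!/usr/bin/env bash
# Print all prime numbers up to n (first argument, default 50) by trial division.
n=${1:-50}
for ((i = 2; i <= n; i++)); do
  is_prime=1
  for ((j = 2; j * j <= i; j++)); do    # only need divisors up to sqrt(i)
    if ((i % j == 0)); then
      is_prime=0
      break
    fi
  done
  if ((is_prime)); then
    echo "$i"
  fi
done
```

Running it as `./primes.sh 20` prints 2, 3, 5, 7, 11, 13, 17, and 19, one per line.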

How to pass the parameters to the script and how can I get those parameters?

Scriptname.sh parameter1 parameter2
Inside the script, $1 and $2 hold the individual parameters, $* (or "$@") expands to all of them, and $# gives their count.
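A quick demonstration, writing a throwaway script to a temp file (the argument values are illustrative):

```shell
#!/usr/bin/env bash
# Demonstrate positional parameters: $1, $2, $* (all), and $# (count).
script=$(mktemp)
cat > "$script" <<'EOF'
echo "first:  $1"
echo "second: $2"
echo "all:    $*"
echo "count:  $#"
EOF
bash "$script" alpha beta
# first:  alpha
# second: beta
# all:    alpha beta
# count:  2
```

For arguments that may contain spaces, prefer "$@" over $*, since it preserves each parameter as a separate word.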

Monitoring – Refactoring

My application is not coming up for some reason? How can you bring it up?

We need to follow these steps:
• Check the network connection
• Check whether the web server is receiving the user's requests
• Check the logs
• Check the process IDs to verify the services are running
• Check whether the application server is receiving the user's requests (check the application server logs and processes)
• Check whether a network-level 'connection reset' is happening somewhere

What is multifactor authentication? What is the use of it?

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.

• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform

I want to copy the artifacts from one location to another location in cloud. How?

Create two S3 buckets, one to use as the source, and the other to use as the destination and then create policies.

How to delete log files older than 10 days?

find . -type f -mtime +10 -name "*.log" -exec rm -f {} \; 2>/dev/null

Ansible

What are the Advantages of Ansible?

• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural

What’s the use of Ansible?

Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let's say we want to deploy one application to hundreds of nodes by executing just one command; that is where Ansible comes into the picture, though you should have some knowledge of Ansible scripts to understand or execute them.

What are the Pros and Cons of Ansible?

Pros:
1. Open source
2. Agentless
3. Improved efficiency, reduced cost
4. Less maintenance
5. Easy-to-understand YAML files
Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management

What is the difference among chef, puppet and ansible?

• Ansible: the control node must run on Linux/Unix, but Windows clients are supported; the configuration language is YAML (the tool itself is written in Python); the availability model is a single active node.
• Chef: works only on Linux/Unix; the configuration language is Ruby; the availability model is a primary server and a backup server.
• Puppet: works only on Linux/Unix; the configuration language is the Puppet DSL; the availability model is a multi-master architecture.

How to access variable names in Ansible?

Using the hostvars method we can access variables like the one below:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

Docker

What is Docker?

Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.

What is Docker image?

Docker image is the source of Docker container. Or in other words, Docker images are used to create containers.

What is a Docker Container?

A Docker container is a running instance of a Docker image.

How to stop and restart the Docker container?

To stop a container: docker stop <container-id>
To restart it: docker restart <container-id>

What platforms does Docker run on?

Docker runs on only Linux and Cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+

Cloud:
• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace

Note that Docker does not run natively on Windows or Mac for production, as there is no support; you can still use it for testing purposes, even on Windows.

What are the tools used for docker networking?

For Docker networking and orchestration we generally use Kubernetes and Docker Swarm.

What is docker compose?

Let's say you want to run multiple Docker containers. You create a docker-compose file and type the command docker-compose up; it will run all the containers mentioned in the compose file.

How to deploy docker container to aws?

Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services with it, we can launch our applications.

What is the fundamental disadvantage of Docker containers?

The lifetime of any data inside a container is the lifetime of the container: once a container is destroyed, you can't retrieve any data from inside it; that data is lost forever. However, persistent storage for data inside containers can be achieved using volumes mounted from an external source such as the host machine or an NFS driver.

What are Docker Engine and Docker Compose?

Docker Engine talks to the Docker daemon on the machine and creates the runtime environment and process for any container; Docker Compose links several containers together to form a stack, used for creating application stacks like LAMP, WAMP, and XAMPP.

In which different modes can a container be run?

A Docker container can be run in two modes:
Attached: the container runs in the foreground of the system you are on; it provides a terminal inside the container when the -t option is used, and every log is redirected to the stdout screen.
Detached: this mode is typically used in production; the container runs as a background process, and all output inside the container is redirected to a JSON log file under /var/lib/docker/containers/<container-id>/, which can be viewed with the docker logs command.

What will the output of the docker inspect command be?

docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and other host- or container-specific details such as the underlying storage driver and log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID...] Options:
• --format, -f: format the output using the given Go template
• --size, -s: display total file sizes if the type is container
• --type: return JSON for a specified type

What is Docker Swarm?

A group of machines running Docker Engine can be clustered and maintained as a single system, with their resources shared by the containers; the Docker Swarm manager schedules a Docker container onto any machine in the cluster according to resource availability.
docker swarm init can be used to initiate a Docker Swarm cluster, and docker swarm join, run on a client with the manager's IP, joins that node into the swarm cluster.

What are Docker volumes, and what sort of volume should be used to achieve persistent storage?

Docker volumes are filesystem mount points created by the user for a container, and a volume can be used by multiple containers. There are different sorts of volume mounts available: empty dir, host (bind) mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud disks, or even NFS and CIFS filesystems. A volume should be mounted on one of these external stores to achieve persistent storage, because files inside a container live only as long as the container exists; if the container is deleted, the data is lost.

How to version control Docker images?

Docker images can be version controlled using tags: you can assign a tag to any image using the docker tag <image-id> command. If you push to a Docker Hub registry without tagging, the default tag latest is assigned; even if an image tagged latest is already present, the push reassigns latest to the newly pushed image.

What is the difference between a docker image and a docker container?

Docker image is a read-only template that contains the instructions for creating a container.
Docker container is a runnable instance of a docker image.

What is Application Containerization?

It is a process of OS Level virtualization technique used to deploy the application without launching the entire VM for each application where multiple isolated applications or services can access the same Host and run on the same OS.

What is the syntax for building a Docker image?

docker build -f <dockerfile-path> -t imagename:version .

How do you run a Docker image?

docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v <hostvolume>:<containervolume> imagename:version

How to log into a container?

docker exec -it <container-id> /bin/bash

Git

What does the commit object contain?

A commit object contains the following components:
A set of files, representing the state of a project at a given point of time
References to parent commit objects
An SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the hash).

Explain the difference between git pull and git fetch?

Git pull command basically pulls any new changes or commits from a branch from your central repository and updates your target branch in your local repository.
Git fetch is also used for the same purpose, but it is slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in your target branch, git fetch must be followed with a git merge. Our target branch will only be updated after merging the target branch and fetched branch. Just to make it easy for us, remember the equation below:
Git pull = git fetch + git merge

How do we know in Git if a branch has already been merged into master?

git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.

What is ‘Staging Area’ or ‘Index’ in GIT?

Before committing, a file must be formatted and reviewed in an intermediate area known as the ‘Staging Area’ or ‘Index’. Files are placed in this area with the git add command.

What is Git Stash?

Let’s say you’ve been working on part of your project, things are in a messy state and you want to switch branches for some time to work on something else. The problem is, you don’t want to do a commit of your half-done work just, so you can get back to this point later. The answer to this issue is Git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.

What is Git stash drop?

Git ‘stash drop’ command is basically used to remove the stashed item. It will basically remove the last added stash item by default, and it can also remove a specific item if you include it as an argument.
I have provided an example below:
If you want to remove any particular stash item from the list of stashed items you can use the below commands:
git stash list: It will display the list of stashed items as follows:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log

What is the function of ‘git config’?

Git uses our username to associate commits with an identity. The git config command can be used to change our Git configuration, including your username.
Suppose you want to give a username and email id to associate commit with an identity so that you can know who has made a commit. For that I will use:
git config --global user.name "Your Name": This command will add your username.
git config --global user.email "Your E-mail Address": This command will add your email id.

How can you create a repository in Git?

To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.

What language is used in Git?

Git is written in the C language, and since it is written in C it is very fast and reduces the overhead of runtimes.

What is SubGit?

SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository and uses both Subversion and Git if you like.

How can you clone a Git repository via Jenkins?

First, enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.

What are the advantages of using Git?

1. Data redundancy and replication
2. High availability
3. Only one .git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used for any type of project.

What is git add?

It adds the file changes to the staging area

What is git commit? 

Records the staged changes in the local repository (HEAD then points to the new commit)

What is git push?

Sends the changes to the remote repository

What is git checkout?

Switch branch or restore working files

What is git branch?

Creates a branch

What is git fetch?

Fetch the latest history from the remote server and updates the local repo

What is git merge?

Joins two or more branches together

What is git pull?

Fetch from and integrate with another repository or a local branch (git fetch + git merge)

What is git rebase?

Process of moving or combining a sequence of commits to a new base commit

What is git revert?

To revert a commit that has already been published and made public

What is git clone?

Clones the git repository and creates a working copy in the local machine

How can I modify the commit message in git?

Use the following command and enter the required message:
git commit --amend

How do you handle merge conflicts in Git?

Follow the steps
1. Create Pull request
2. Modify the conflicting files as required, working with the developers involved
3. Commit the correct file to the branch
4. Merge the current branch with master branch.

What is the Git command to send the modifications to the master branch of your remote repository?

Use the command “git push origin master”

NOSQL

What are the benefits of a NoSQL database over an RDBMS?

Benefits:
1. Less need for ETL
2. Support for unstructured text
3. Ability to handle changes over time
4. Breadth of functionality
5. Ability to scale horizontally
6. Support for many data structures
7. Choice of vendors

Maven

What is Maven?

Maven is a DevOps tool used for building Java applications which helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.

Numpy

What is NumPy?

There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing, providing a powerful n-dimensional array object and tools to integrate C, C++ and other languages. NumPy adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions: financial functions, linear algebra, statistics, polynomials, sorting and searching, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.

Why is python numpy better than lists?

Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.

Describe the map function in Python?

The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.
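
A minimal sketch of this behavior (the names nums and squares are illustrative):

```python
# map() applies the function (first argument) to every element
# of the iterable (second argument); list() materializes the result.
nums = [1, 2, 3, 4]
squares = list(map(lambda x: x ** 2, nums))
print(squares)  # [1, 4, 9, 16]
```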

How to generate an array of 100 random numbers sampled from a standard normal distribution using Numpy?

>>> import numpy as np
>>> a = np.random.randn(100)
>>> print(type(a))
>>> print(a)

np.random.randn(100) creates 100 random numbers drawn from the standard normal distribution, with mean 0 and standard deviation 1. (Note that np.random.rand would instead draw from a uniform distribution over [0, 1).)

How to count the occurrence of each value in a numpy array?

Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
The argument to bincount() must be a 1-D array of non-negative integers. Negative
integers are invalid.

Output: [4 1 1 1 2 3 0 0 0 2]

Does Numpy Support Nan?

nan, short for “not a number”, is a special floating point value defined by the IEEE-754 specification. Python numpy supports nan, but the definition of nan is somewhat system dependent and some systems, like older Cray and VAX computers, don’t have full support for it.

What does ravel() function in numpy do? 

It flattens a multi-dimensional numpy array into a one-dimensional array (returning a view of the original array where possible). It does not combine multiple arrays; use numpy.concatenate for that.
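
A short illustration of the flattening:

```python
import numpy as np

# ravel() flattens a 2-D array into a 1-D array (a view where possible).
arr = np.array([[1, 2], [3, 4]])
flat = arr.ravel()
print(flat)  # [1 2 3 4]
```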

How to remove from one array those items that exist in another? 

>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From ‘a’ remove all of ‘b’
>>> np.setdiff1d(a,b)
# Output (setdiff1d returns the sorted unique values):
array([2, 3, 5])

How to reverse a numpy array in the most efficient way?

>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]

How to calculate percentiles when using numpy?

>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)

Output:  37.400000000000006

What Is The Difference Between Numpy And Scipy?

NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.

What Is The Preferred Way To Check For An Empty (zero Element) Array?

For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size

What Is The Difference Between Matrices And Arrays?

Matrices can only be two-dimensional, whereas arrays can have any number of dimensions.

How can you find the indices of an array where a condition is true?

Given an array arr, the condition arr > 3 returns a boolean array (False is interpreted as 0 in Python and NumPy); np.where or np.nonzero then returns the indices where the condition is True.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
array([[ True, True, True], [ True, True, True], [False, False, False]])
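
To actually extract the indices (rather than the boolean mask), np.where can be used, as in this sketch:

```python
import numpy as np

# np.where on a boolean condition returns the row and column
# indices where the condition holds.
arr = np.array([[9, 8, 7], [6, 5, 4], [3, 2, 1]])
rows, cols = np.where(arr > 3)
print(rows)  # [0 0 0 1 1 1]
print(cols)  # [0 1 2 0 1 2]
```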

How to find the maximum and minimum value of a given flattened array?

>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)

Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis. 

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.ptp(arr, 1)

Find median of a numpy flattened array

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.median(arr)

Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis

>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)

Calculate covariance matrix between two numpy arrays

>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)

Compute  product-moment correlation coefficients of two given numpy arrays

>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)

Develop a numpy program to compute the histogram of nums against the bins

>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)

Get the powers of an array values element-wise

>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)

Write a NumPy program to get true division of the element-wise array inputs

>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)

Panda

What is a series in pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.
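
A minimal sketch of the conversions described above (the variable names are illustrative):

```python
import pandas as pd

# A Series can be built from a list, a dict, or a tuple.
s_list = pd.Series([10, 20, 30])
s_dict = pd.Series({'a': 1, 'b': 2})            # dict keys become the index
s_tuple = pd.Series((1.5, 2.5), index=['x', 'y'])
print(s_dict.index.tolist())  # ['a', 'b']
```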

What features make Pandas such a reliable option to store tabular data?

Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.

What is re-indexing in pandas?

Reindexing is used to conform DataFrame to a new index with optional filling logic. It places NA/NaN in that location where the values are not present in the previous index. It returns a new object unless the new index is produced as equivalent to the current one, and the value of copy becomes False. It is used to change the index of the rows and columns of the DataFrame.
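
A small sketch of that NaN-filling behavior (df and df2 are illustrative names):

```python
import pandas as pd

# reindex conforms the DataFrame to a new index;
# labels missing from the original index get NaN.
df = pd.DataFrame({'val': [1, 2, 3]}, index=['a', 'b', 'c'])
df2 = df.reindex(['a', 'c', 'd'])
print(df2)  # row 'd' is NaN because it was not in the original index
```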

How will you create a series from dict in Pandas?

A Series is defined as a one-dimensional array that is capable of storing various data
types.

import pandas as pd
info = {'x' : 0., 'y' : 1., 'z' : 2.}
a = pd.Series(info)

How can we create a copy of the series in Pandas?

Use the pandas.Series.copy method on the Series instance:
import pandas as pd
s = pd.Series([1, 2, 3])
s_copy = s.copy(deep=True)

 

What is groupby in Pandas?

GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters and makes the task of splitting the data quick and
easy.

What is vectorization in Pandas?

Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.
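
As a quick sketch, the same computation done vectorized versus with Python-level iteration:

```python
import pandas as pd

# Vectorized: the whole Series is processed in optimized native code.
s = pd.Series(range(1000))
vectorized = s * 2
# Looped: Python iterates element by element (slower for large data).
looped = pd.Series([x * 2 for x in s])
assert vectorized.equals(looped)
```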

Different types of Data Structures in Pandas

Pandas provide two data structures, which are supported by the pandas library, Series,
and DataFrames. Both of these data structures are built on top of the NumPy.

What Is Time Series In pandas

A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.

How to convert pandas dataframe to numpy array?

The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array and the copy ensures the
returned value is not a view on another array.

Write a Pandas program to get the first 5 rows of a given DataFrame

>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data, index=labels)
>>> df.iloc[:5]

Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data. 

>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])

Write a Python program to convert a Pandas Series to a Python list and check its type.

>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())

Develop a Pandas program to add, subtract, multiple and divide two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2

Develop a Pandas program to compare the elements of the two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2

Develop a Pandas program to change the data type of given a column or a Series.

>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2

Write a Pandas program to convert Series of lists to one Series

>>> import pandas as pd
>>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)

Write a Pandas program to create a subset of a given series based on value and condition

>>> import pandas as pd
>>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s

Develop a Pandas code to alter the order of index in a given series

>>> import pandas as pd
>>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index = ['B', 'A', 'C', 'D', 'E'])

Write a Pandas code to get the items of a given series not present in another given series.

>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result

What is the difference between the two data series df['Name'] and df.loc[:, 'Name']?

First one is a view of the original dataframe and second one is a copy of the original dataframe.

Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.

>>> import pandas as pd
>>> import numpy as np
>>> rng = np.random.RandomState(100)
>>> num_series = pd.Series(rng.randint(1, 5, [15]))
>>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'

Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.

>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5==0)

How will you add a column to a pandas DataFrame?

# importing the pandas library
>>> import pandas as pd
>>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20, 40, 60], index=['a', 'b', 'c'])

How to iterate over a Pandas DataFrame?

You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.
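
A brief sketch of that pattern (the df contents are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ann', 'Bob'], 'score': [90, 85]})
# iterrows() yields (index, row) pairs; each row is a Series.
for idx, row in df.iterrows():
    print(idx, row['name'], row['score'])
```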

Python

What type of language is python? Programming or scripting?

Python is capable of scripting, but in general sense, it is considered as a general-purpose
programming language.

Is python case sensitive?

Yes, python is a case sensitive language.

What is a lambda function in python?

An anonymous function is known as a lambda function. This function can have any
number of parameters but can have just one statement.
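
A one-line sketch:

```python
# A lambda can take any number of parameters but holds a single expression.
add = lambda x, y: x + y
print(add(2, 3))  # 5
```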

What is the difference between range and xrange in python?

In Python 2, xrange and range are the same in terms of functionality. The only difference is that range returns a Python list object while xrange returns an xrange object, which generates values lazily. (In Python 3, xrange was removed and range behaves like the old xrange.)

What are docstrings in python?

Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.

Whenever Python exits, why isn’t all the memory deallocated?

Whenever Python exits, Python modules that have circular references to other objects, or objects that are referenced from the global namespace, are not always de-allocated or freed. It is also impossible to de-allocate the portions of memory that are reserved by the C library. On exit, Python tries to de-allocate/destroy every other object through its own efficient clean-up mechanism.

What does this mean: *args, **kwargs? And why would we use it?

We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
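
A minimal sketch (the function name demo is illustrative):

```python
def demo(*args, **kwargs):
    # args collects positional arguments into a tuple,
    # kwargs collects keyword arguments into a dict.
    return args, kwargs

a, kw = demo(1, 2, name='python')
print(a)   # (1, 2)
print(kw)  # {'name': 'python'}
```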

What is the difference between deep and shallow copy?

A shallow copy creates a new object but inserts references to the objects found in the original, so nested objects are shared between the copy and the original.
A deep copy creates a new object and recursively copies the nested objects as well, so the copy is fully independent of the original and does not share reference pointers with it.
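
The difference can be sketched with the copy module:

```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)    # inner lists are shared references
deep = copy.deepcopy(original)   # inner lists are fully duplicated

original[0][0] = 99
print(shallow[0][0])  # 99 - the shallow copy sees the change
print(deep[0][0])     # 1  - the deep copy is unaffected
```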

Define encapsulation in Python?

Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.

Does python make use of access specifiers?

Python does not restrict access to an instance variable or function. Python lays down the concept of prefixing the name of the variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.

What are the generators in Python?

Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.
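
A minimal sketch of a generator function (countdown is an illustrative name):

```python
def countdown(n):
    # The yield expression makes this a generator function:
    # values are produced lazily, one per iteration.
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```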

Write a Python script to find out whether a sequence is a palindrome

a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")

How will you remove the duplicate elements from the given list?

The set is another type available in Python. It doesn't allow duplicates and provides some good functions to perform set operations like union, difference etc.
>>> list(set(a))

Does Python allow arguments Pass by Value or Pass by Reference?

Neither the arguments are Pass by Value nor does Python supports Pass by reference.
Instead, they are Pass by assignment. The parameter which you pass is originally a reference to the object not the reference to a fixed memory location. But the reference is
passed by value. Additionally, some data types like strings and tuples are immutable whereas others are mutable.

What is slicing in Python?

Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.

Why is the “pass” keyword used in Python?

The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.

What are decorators in Python?

Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion
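
A small sketch (shout and greet are illustrative names):

```python
def shout(func):
    # The decorator wraps func without changing its source.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, " + name

print(greet("world"))  # HELLO, WORLD
```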

What is the key difference between lists and tuples in python?

The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.

What is self in Python?

self is not actually a keyword but a naming convention: it is the name conventionally given to the first parameter of instance methods, and it refers to the instance (object) of the class. In Python it must be passed explicitly, unlike in Java where the equivalent (this) is implicit. It helps in distinguishing the methods and attributes of a class from its local variables.

What is PYTHONPATH in Python?

PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.

What is the difference between .py and .pyc files?

.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.

What is namespace in Python?

In Python, every name introduced has a place where it lives and can be hooked for. This is known as namespace. It is like a box where a variable name is mapped to the object placed. Whenever the variable is searched out, this box will be searched, to get the corresponding object.

What is pickling and unpickling?

Pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function, this process is called pickling. While the process of retrieving original Python objects from the stored string representation is called unpickling.
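
A round-trip sketch using dumps/loads (the in-memory equivalents of dump/load):

```python
import pickle

data = {'lang': 'python', 'version': 3}
blob = pickle.dumps(data)       # pickling: object -> byte string
restored = pickle.loads(blob)   # unpickling: byte string -> object
assert restored == data
```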

How is Python interpreted?

Python language is an interpreted language. The Python program runs directly from the source code. It converts the source code that is written by the programmer into an intermediate language, which is again translated into machine language that has to be executed.

Jupyter Notebook

What is the main use of a Jupyter notebook?

Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.

How do I increase the cell width of the Jupyter/ipython notebook in my browser?

>>> from IPython.core.display import display, HTML
>>> display(HTML("<style>.container { width:100% !important; }</style>"))

How do I convert an IPython Notebook into a Python file via command line?

jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb

How to measure execution time in a jupyter notebook?

%%time is an inbuilt magic command; place it at the top of a cell to measure that cell's execution time.

How to run a jupyter notebook from the command line?

jupyter nbconvert --to notebook --execute nb.ipynb

How to make inline plots larger in jupyter notebooks?

Use the figsize argument.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')

How to display multiple images in a jupyter notebook?

>>> for ima in images:
...     plt.figure()
...     plt.imshow(ima)

Why is the Jupyter notebook interactive code and data exploration friendly?

The ipywidgets package provides many common user interface controls for exploring code and data interactively.

What is the default formatting option in jupyter notebook?

Default formatting option is markdown

What are kernel wrappers in jupyter?

Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.

What are the advantages of custom magic commands?

Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.

Is the jupyter architecture language dependent?

No. It is language independent

Which tools allow jupyter notebooks to easily convert to pdf and html?

Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.

What is a major disadvantage of a Jupyter notebook?

It is very hard to run long asynchronous tasks. Less Secure.

In which domain is the jupyter notebook widely used?

It is mainly used for data analysis and machine learning related tasks.

What are alternatives to jupyter notebook?

PyCharm interact, VS Code Python Interactive etc.

Where can you make configuration changes to the jupyter notebook?

In the config files under ~/.jupyter/ (e.g. jupyter_notebook_config.py); IPython-specific settings live in ~/.ipython/profile_default/ipython_config.py.

Which magic command is used to run python code from jupyter notebook?

%run can execute python code from .py files

How to pass variables across the notebooks in Jupyter?

The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to different notebook'
>>> %store data
# Stored 'data' (str)
# In new notebook
>>> %store -r data
>>> print(data)

Export the contents of a cell/Show the contents of an external script

Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.

What inbuilt tool we use for debugging python code in a jupyter notebook?

Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.

How to make high resolution plots in a jupyter notebook?

>>> %config InlineBackend.figure_format = 'retina'

How can one use latex in a jupyter notebook?

When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.

What is a jupyter lab?

It is a next generation user interface for conventional jupyter notebooks. Users can drag and drop cells, arrange code workspace and live previews. It’s still in the early stage of development.

What is the biggest limitation for a Jupyter notebook?

Code versioning, management and debugging is not scalable in current jupyter notebook

Cloud Computing

Which are the different layers that define cloud architecture?

Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus

Explain Cloud Service Models?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (DaaS)

What are Hybrid clouds?

Hybrid clouds are made up of both public clouds and private clouds. This model is often preferred over either one alone because it applies the most robust approach to implementing cloud architecture.
The hybrid cloud has the features and performance of both private and public clouds. It has an important feature whereby a cloud can be created by one organization and control of it can be given to another organization.

Explain Platform as a Service (Paas)?

It is also a layer in cloud architecture. Platform as a Service is responsible for providing complete virtualization of the infrastructure layer, making it look like a single server and invisible to the outside world.

What is the difference in cloud computing and Mobile Cloud computing?

Mobile cloud computing is based on the same concept as cloud computing; the cloud becomes active when accessed from a mobile device. Most tasks can be performed from the mobile device: applications run on a remote server, and the user is given rights to access and manage storage.

What are the security aspects provided with the cloud?

There are 3 types of cloud computing security:
● Identity Management: authorizes the application services.
● Access Control: users need permission so that they can control the access of other users entering the cloud environment.
● Authentication and Authorization: allows only authorized and authenticated users to access the data and applications.

What are system integrators in cloud computing?

System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly.
A person or a company that specializes in system integration is called a system integrator.

What is the usage of utility computing?

Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges them for specific usage rather than a flat rate.
Utility computing is a plug-in managed by an organization which decides what type of services has to be deployed from the cloud. It facilitates users to pay only for what they use.

What are some large cloud providers and databases?

Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL

Explain the difference between cloud and traditional data centers.

In a traditional data center, the major drawback is the expenditure: it is comparatively expensive due to heating, hardware, and software issues, so both the initial cost and the maintenance cost are high.
A cloud, by contrast, is scaled only when there is an increase in demand, and the provider bears the maintenance of the data centers, so these issues are not faced in cloud computing.

What is hypervisor in Cloud Computing?

A hypervisor is a virtual machine monitor that logically manages resources for virtual machines: it allocates, partitions, isolates, and changes the resources assigned to each virtual machine.
A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.

Define what MultiCloud is?

Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.

What is a multi-cloud strategy?

The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.

What is meant by Edge Computing, and how is it related to the cloud?

Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.

What are the disadvantages of the SaaS cloud computing layer?

1) Security
Because data is stored in the cloud, security may be an issue for some users; cloud computing is not inherently more secure than an in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end user, there is a possibility of greater latency when interacting with the application than with a local deployment. Therefore, the SaaS model is not suitable for applications that demand response times in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the new vendor's SaaS application.

What is IaaS in Cloud Computing?

IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this model, the provider offers IT infrastructure such as servers, processing, storage, virtual machines, and other resources. Customers access the resources easily over the internet using an on-demand, pay-as-you-go model.

Explain what is the use of “EUCALYPTUS” in cloud computing?

EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform. With the help of EUCALYPTUS, public, private, and hybrid clouds can be built: it can turn your own data centers into a cloud and allow other organizations to use their functionality.
When you add a software stack, such as an operating system and applications, to the service, the model shifts to the Software as a Service model. This is why Microsoft's Windows Azure Platform is at present best represented as using a SaaS model.

Name the most refined and restrictive service model?

The most refined and restrictive service model is PaaS. Once a service requires the consumer to use a complete hardware/software/application stack, it is using the most refined and restrictive service model.

Name all the kinds of virtualization that are also characteristics of cloud computing?

Storage, application, and CPU virtualization. To enable these characteristics, resources should be highly configurable and flexible.

What Are Main Features Of Cloud Services?

Some important features of cloud services are given as follows:
• Accessing and managing commercial software.
• Centralizing software management activities in the Web environment.
• Developing applications that are capable of serving several clients.
• Centralizing software updates, which eliminates the need to download upgrades.

What Are The Advantages Of Cloud Services?

Some of the advantages of cloud service are given as follows:
• Helps utilize investment in the corporate sector and is therefore cost-saving.
• Helps in developing scalable and robust applications. Previously, scaling took months, but now it takes far less time.
• Helps save time in terms of deployment and maintenance.

Mention The Basic Components Of A Server Computer In Cloud Computing?

The hardware components of a server computer in cloud computing match those used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, and power supply.

What are the advantages of auto-scaling?

Following are the advantages of auto-scaling:
● Offers fault tolerance
● Better availability
● Better cost management
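
As a minimal sketch of how those advantages arise, consider a hypothetical threshold-based scaling policy (the thresholds and limits are illustrative, not any provider's defaults): overloaded capacity is grown for availability, and idle capacity is released for cost management.

```python
def desired_instances(current, cpu_utilization, scale_out_at=70, scale_in_at=30,
                      minimum=2, maximum=10):
    """Return the instance count a simple auto-scaler would target."""
    if cpu_utilization > scale_out_at:
        current += 1          # scale out under load -> availability
    elif cpu_utilization < scale_in_at:
        current -= 1          # scale in when idle -> cost management
    # The floor keeps spare capacity for fault tolerance; the cap bounds cost.
    return max(minimum, min(maximum, current))
```

For example, `desired_instances(3, 85)` asks for a fourth instance, while `desired_instances(3, 10)` drops back toward the two-instance floor.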

[appbox googleplay com.cloudeducation.free]

[appbox appstore id1560083470-iphone screenshots]

Azure Cloud

Azure Administrator AZ104 Certification Exam Prep
#Azure #AZ104 #AzureAdmnistrator #AzureDevOps #AzureAdmin #AzureTraining #AzureSysAdmin #AzureCloud #LearnAzure
ios: https://apps.apple.com/ca/app/azure-administrator-az104-prep/id1565167648
android: https://play.google.com/store/apps/dev?id=4679760081477077763
windows 10/11: https://www.microsoft.com/en-ca/store/p/azure-administrator-az-104-certification-practice-tests-pro/9nb7w5wpx8f0
web: AWS Certified Solution Architect Associate Exam Prep: Multilingual (azurefundamentalsexamprep.com)

Which Services Are Provided By The Windows Azure Operating System?

Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management

Which service in Azure is used to manage resources in Azure?

Azure Resource Manager is used to manage infrastructure that involves a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON template.

Which web applications can be deployed with Azure?

Microsoft has also released SDKs for both Java and Ruby, allowing applications written in those languages to place calls to the Azure Services Platform API and the AppFabric Service.

What are Roles in Azure and why do we use them?

Roles are, in layman's terms, nothing but servers. These servers are managed, load-balanced, Platform as a Service virtual machines that work together to achieve a common goal.
There are 3 types of roles in Microsoft Azure:
● Web Role
● Worker Role
● VM Role
Let’s discuss each of these roles in detail:
Web Role – A web role is basically used to deploy a website, using languages supported by the IIS platform such as PHP and .NET. It is configured and customized to run web applications.
Worker Role – A worker role is more of a helper to the web role; it is used to execute background processes, unlike the web role, which is used to deploy the website.
VM Role – The VM role is used by a user to schedule tasks and other Windows services.
This role can be used to customize the machines on which the web and worker roles are running.

What is Azure as PaaS?

PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.

What are Break-fix issues in Microsoft Azure?

In Microsoft Azure, all technical problems are called break-fix issues. This term is used when "work is involved" in supporting a technology after it fails in the normal course of its function.

Explain Diagnostics in Windows Azure

Windows Azure Diagnostics offers the facility to store diagnostic data. In Azure, some diagnostic data is stored in tables, while some is stored in blobs. The diagnostic monitor runs in Windows Azure as well as in the compute emulator to collect data for a role instance.

State the difference between verbose and minimal monitoring.

Verbose monitoring collects metrics based on performance. It allows a close analysis of the data fed in while the application runs.
On the other hand, minimal monitoring is the default configuration. It makes use of performance counters gathered from the operating system of the host.

What is the main difference between the repository and the powerhouse server?

The main difference is that repository servers maintain the integrity, consistency, and uniformity of the data, while the powerhouse server governs the integration of the different aspects of the database repository.

Explain command task in Microsoft Azure

A command task is an operational window that sets off the flow of one or more commands while the system is running.

What is the difference between Azure Service Bus Queues and Storage Queues?

Two types of queue mechanisms are supported by Azure: Storage queues and Service Bus queues.
Storage queues: These are part of the Azure storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface. They provide persistent and reliable messaging within and between services.
Service Bus queues: These are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.

Explain Azure Service Fabric.

Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment and management of highly scalable and customizable applications.
The applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.

Define the Azure Redis Cache.

Azure Redis Cache is an open-source, in-memory Redis cache that helps web applications fetch data from a backend data source into the cache and serve web pages from the cache to enhance application performance. It provides a powerful and secure way to cache the application's data in the Azure cloud.

How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?

TWO. And if we do so, the role would have external connectivity at least 99.95% of the time.
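
To see why redundancy raises availability, here is an illustrative calculation (the 98% per-instance figure is a made-up number; Azure's SLA is contractual, not derived this way): with n independent instances, the service is down only when all of them are down at once.

```python
def composite_availability(per_instance, n):
    """Probability that at least one of n independent instances is up."""
    return 1 - (1 - per_instance) ** n

single = composite_availability(0.98, 1)   # one instance: 0.98
pair = composite_availability(0.98, 2)     # two instances: 1 - 0.02**2 = 0.9996
```

Even with modest per-instance availability, the pair comfortably clears the 99.95% bar that a single instance misses.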

What are the options to manage session state in Windows Azure?

● Windows Azure Caching
● SQL Azure
● Azure Table

What is cspack?

It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.

What is csrun?

It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.

How to design applications to handle connection failure in Windows Azure?

The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
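
The three retry-delay strategies mentioned can be sketched in a few lines (the interval values are illustrative, not the block's actual defaults):

```python
import random

def fixed(attempt, interval=2.0):
    """Fixed interval: the same delay every attempt."""
    return interval

def incremental(attempt, initial=1.0, increment=2.0):
    """Incremental interval: grows by a standard amount each attempt."""
    return initial + attempt * increment

def exponential_backoff(attempt, base=1.0, jitter=0.2):
    """Exponential back-off: the delay doubles, with some random variation."""
    delay = base * (2 ** attempt)
    return delay + random.uniform(0, jitter)
```

For attempts 0, 1, 2 the incremental strategy yields 1, 3, 5 seconds, while exponential back-off yields roughly 1, 2, 4 seconds plus jitter, spreading out retries so a struggling service is not hammered in lockstep.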

What is Windows Azure Diagnostics?

Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.

What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?

Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports dead-letter queuing as well as publish/subscribe, Web service remoting, and integration patterns.

What is the use of Azure Active Directory?

Azure Active Directory is an identity and access management system. It is very similar to on-premises Active Directory. It allows you to grant your employees access to specific products and services within your network.

Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?

This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.

What are virtual machine scale sets in Azure?

Virtual machine scale sets are an Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required, so it's easier to build large-scale services that target big compute, big data, and containerized workloads.

Are data disks supported within scale sets?

Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include:
● Azure files (SMB shared drives)
● OS drive
● Temp drive (local, not backed by Azure Storage)
● Azure data service (for example, Azure tables, Azure blobs)
● External data service (for example, remote database)

What is the difference between the Windows Azure Platform and Windows Azure?

The former is Microsoft’s PaaS offering including Windows Azure, SQL Azure, and AppFabric; while the latter is part of the offering and Microsoft’s cloud OS.

What are the three main components of the Windows Azure Platform?

Compute, Storage and AppFabric.

Can you move a resource from one group to another?

Yes, you can. A resource can be moved among resource groups.

How many resource groups a subscription can have?

A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.

Explain the fault domain.

A fault domain is a logical working domain in which the underlying hardware shares a common power source and network switch. When VMs are created, Azure distributes them across fault domains, which limits the potential impact of hardware failures, power interruptions, or network outages.

Differentiate between the repository and the powerhouse server?

Repository servers are those that maintain the integrity, consistency, and uniformity of the data, whereas the powerhouse server governs the integration of the different aspects of the database repository.

Azure Fundamentals AZ900 Certification Exam Prep
#Azure #AzureFundamentals #AZ900 #AzureTraining #LeranAzure #Djamgatech

AWS Cloud

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Explain what S3 is?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is "pay as you go."

What is AMI?

AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.

Mention what the relationship between an instance and AMI is?

From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.

How many buckets can you create in AWS by default?

By default, you can create up to 100 buckets in each of your AWS accounts.

Explain can you vertically scale an Amazon instance? How?

Yes, you can vertically scale an Amazon instance. To do so:
● Spin up a new, larger instance than the one you are currently running
● Pause that instance, then detach the root EBS volume from the server and discard it
● Then stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● And start it again

Explain what T2 instances are?

T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.

In VPC with private and public subnets, database servers should ideally be launched into which subnet?

With private and public subnets in a VPC, database servers should ideally be launched into private subnets.

Mention what the security best practices for Amazon EC2 are?

For secure Amazon EC2 best practices, follow the following steps
● Use AWS identity and access management to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up permissions that you require
● Disable password-based logins, for example, for instances launched from your AMI

Is the property of broadcast or multicast supported by Amazon VPC?

No, currently Amazon VPC does not provide support for broadcast or multicast.

How many Elastic IPs does AWS allow you to create?

5 VPC Elastic IP addresses are allowed for each AWS account.

Explain default storage class in S3

The default storage class is S3 Standard, which is intended for frequently accessed data.

What are the Roles in AWS?

Roles are used to provide permissions to entities which you can trust within your AWS account.
Roles are very similar to users. However, with roles you do not need to create a username and password to work with the resources.

What are the edge locations?

An edge location is a site where content is cached. When a user tries to access content, it is automatically searched for in the nearest edge location.

Explain snowball?

Snowball is a data transport option. It uses secure appliances to move large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another, which helps you reduce networking costs.

What is a redshift?

Redshift is a big data warehouse product. It is a fast, powerful, fully managed data warehouse service in the cloud.

What is meant by subnet?

A subnet is one of the smaller chunks into which a large range of IP addresses is divided.
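
Subnetting can be demonstrated with Python's standard library, carving a VPC-sized range into smaller networks (the CIDR values here are arbitrary examples):

```python
import ipaddress

# A /16 network, like a typical VPC address range.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Divide it into /24 chunks: each new subnet holds 256 addresses.
subnets = list(vpc.subnets(new_prefix=24))

first = subnets[0]   # the first /24 carved out of the /16
```

A /16 split into /24s yields 2^(24-16) = 256 subnets, starting at 10.0.0.0/24, then 10.0.1.0/24, and so on.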

Can you establish a Peering connection to a VPC in a different region?

Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.

What is SQS?

SQS stands for Simple Queue Service. It is a distributed queuing service that acts as a mediator between two controllers.

How many subnets can you have per VPC?

You can have 200 subnets per VPC.

What is Amazon EMR?

EMR is a managed cluster platform that lets you run big data frameworks, such as Apache Hadoop and Apache Spark, on Amazon Web Services to investigate large amounts of data. You can prepare data for analytics goals and marketing intelligence workloads using Apache Hive and other relevant open-source tools.

What is boot time taken for the instance stored backed AMI?

The boot time for an Amazon instance store-backed AMI is less than 5 minutes.

Do you need an internet gateway to use peering connections?

No, an internet gateway is not needed for VPC peering connections; peering traffic stays within the AWS network.

How to connect an EBS volume to multiple instances?

We can’t be able to connect EBS volume to multiple instances. Although, you can connect
various EBS Volumes to a single instance.

What are the different types of Load Balancer in AWS services?

Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer

In which situation you will select provisioned IOPS over standard RDS storage?

You should select provisioned IOPS storage over standard RDS storage if you run batch-related workloads that need consistent I/O performance.

What are the important features of Amazon cloud search?

Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Full-text search
● Autocomplete suggestions
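
To make "prefix searches" concrete, here is a toy prefix lookup over a sorted index; this is just the idea behind the feature, not the CloudSearch API, and the indexed terms are made up.

```python
import bisect

# A sorted term index, as a search service might maintain internally.
index = sorted(["cloud", "cloudsearch", "cluster", "compute", "container"])

def prefix_search(prefix):
    """Return all indexed terms that start with `prefix`."""
    lo = bisect.bisect_left(index, prefix)
    # Any term with this prefix sorts before prefix + a maximal character.
    hi = bisect.bisect_left(index, prefix + "\uffff")
    return index[lo:hi]
```

Searching for "clo" returns both "cloud" and "cloudsearch"; an unmatched prefix returns an empty list.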

What is AWS CDK?

AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
• Create and provision AWS infrastructure deployments predictably and repeatedly.
• Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling.
• Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
• Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.

What are best practices for controlling access to AWS CodeCommit?

– Create your own policy
– Provide temporary access credentials to access your repo
* Typically done via a separate AWS account for IAM and separate accounts for dev/staging/prod
* Federated access
* Multi-factor authentication

What is AWS CodeBuild?

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.

How does AWS CodeBuild work?

1- Provide AWS CodeBuild with a build project. A build project file contains information about where to get the source code, the build environment, and how to build the code. The most important component is the BuildSpec file.
2- AWS CodeBuild creates the build environment. A build environment is a combination of OS, programming language runtime, and other tools needed to build.
3- AWS CodeBuild downloads the source code into the build environment and uses the BuildSpec file to run a build. This code can be from any source provider; for example, GitHub repository, Amazon S3 input bucket, Bitbucket repository, or AWS CodeCommit repository.
4- Build artifacts produced are uploaded into an Amazon S3 bucket.
5- The build environment sends a notification about the build status.
6- While the build is running, the build environment sends information to Amazon CloudWatch Logs.

What is AWS CodeDeploy?

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.

With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.

You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.

You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.

AWS CodeDeploy deploys updates in revisions, so if there is an issue during deployment, you can easily roll back and deploy a previous revision.

What is AWS CodeCommit?

AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.

What is AWS OpsWorks?

AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.

Chef and Puppet enable you to use code to automate your configurations.

AWS OpsWorks for Puppet Enterprise: a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet, for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up your server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining their infrastructure, and gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.

AWS OpsWorks for Chef Automate: offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes.

AWS OpsWorks Stacks: With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages, languages, or frameworks, and configuring software.

AWS Developer Associate DVA-C01 Exam Prep

Google Cloud Platform

GCP Associate Cloud Engineer Exam Prep

What are the main advantages of using Google Cloud Platform?

Google Cloud Platform gives its users access to top cloud services and features, and it is gaining popularity among cloud professionals and users alike for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others –
● GCP offers much better pricing deals as compared to the other cloud service providers
● Google Cloud servers allow you to work from anywhere and have access to your information and data
● Considering the hosting of cloud services, GCP has overall better performance and service
● Google Cloud is very fast in providing updates about servers and security, in a better and more efficient manner
● The security level of the Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with various security measures
If you are going to a Google Cloud interview, you should prepare yourself with enough knowledge of the Google Cloud Platform.

Why should you opt to Google Cloud Hosting?

The reason for opting for Google Cloud Hosting is the advantages it offers. Here are the advantages of choosing Google Cloud Hosting:
● Availability of better pricing plans
● Benefits of live migration of the machines
● Enhanced performance and execution
● Commitment to Constant development and expansion
● The private network provides efficiency and maximum time
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability

What are the libraries and tools for cloud storage on GCP?

At the core level, an XML API and a JSON API are available for cloud storage on the Google Cloud Platform. Along with these, Google provides the following options to interact with cloud storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage client libraries, which provide programming support for various languages, including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface to cloud storage

There are also many third-party libraries and tools, such as the Boto library.

What do you know about Google Compute Engine?

Google Compute Engine is the basic component of the Google Cloud Platform.
Google Compute Engine is an IaaS product that offers self-managed and flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines that run on KVM, with local and durable storage options.
It also includes a REST-based API for control and configuration purposes. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery to extend its computational ability and thus to create more sophisticated and complex applications.

How are the Google Compute Engine and Google App Engine related?

Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is the IaaS product whereas Google App Engine is a PaaS product of Google.
Google App Engine is generally used to run web-based applications, mobile backends, and line-of-business applications. If you want to keep the underlying infrastructure more under your control, then Compute Engine is the perfect choice. For instance, you can use Compute Engine to implement customized business logic or when you need to run your own storage system.

How does the pricing model work in GCP cloud?

While working on the Google Cloud Platform, the user is charged by Google Compute Engine on the basis of compute instances, network use, and storage. Google Cloud charges virtual machines per second, with a minimum of one minute. The cost of storage is then charged on the basis of the amount of data that you store.
The cost of the network is calculated per the amount of data transferred between virtual machine instances communicating with each other over the network.

What are the different methods for the authentication of Google Compute Engine API?

This is one of the popular Google Cloud architect interview questions. There are three methods for authenticating against the Google Compute Engine API:
– Using OAuth 2.0
– Through client library
– Directly with an access token
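For the last method, the access token is simply sent as a bearer token on each request; the token string below is a dummy placeholder:

```python
def auth_header(access_token: str) -> dict:
    # Authorization header format used when calling the API directly
    # with an access token (for example, one obtained via OAuth 2.0).
    return {"Authorization": f"Bearer {access_token}"}

print(auth_header("ya29.DUMMY-TOKEN"))
```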

List some Database services by GCP.

Google Cloud offers several database services that help enterprises manage their data.
● Bare Metal Solution lets you migrate (lift and shift) specialized workloads, such as Oracle databases, to Google Cloud.
● Cloud SQL is a fully managed, reliable, and integrated relational database service for MySQL, SQL Server, and PostgreSQL (Postgres). It reduces maintenance cost and helps ensure business continuity.
● Cloud Spanner
● Cloud Bigtable
● Firestore
● Firebase Realtime Database
● Memorystore
● Google Cloud Partner Services
● For more database products, refer to Google Cloud Databases
● For more database solutions, refer to Google Cloud Database solutions

What are the different Network services by GCP?

Google Cloud provides many networking services and technologies that make it easy to scale and manage your network.
● Hybrid Connectivity helps you connect your infrastructure to Google Cloud
● Virtual Private Cloud (VPC) manages networking for your resources
● Cloud DNS is a highly available, global Domain Name System (DNS) network
● Service Directory provides a service-centric network solution
● Cloud Load Balancing
● Cloud CDN
● Cloud Armor
● Cloud NAT
● Network Telemetry
● VPC Service Controls
● Network Intelligence Center
● Network Service Tiers
● For more networking products, refer to Google Cloud Networking

List some Data Analytics services by GCP.

Google Cloud offers various data analytics services.
● BigQuery is a serverless, highly scalable, and cost-effective multi-cloud data warehouse designed for business agility.
● Looker
● Dataproc is a service for running Apache Spark and Apache Hadoop clusters. It makes open-source data and analytics processing easy, fast, and more secure in the cloud.
● Dataflow
● Pub/Sub
● Cloud Data Fusion
● Data Catalog
● Cloud Composer
● Google Data Studio
● Dataprep
● Cloud Life Sciences enables the life sciences community to manage, process, and transform biomedical data at scale.
● Google Marketing Platform combines your advertising and analytics to help you achieve better marketing results, deeper insights, and quality customer connections. It is not an official Google Cloud product and comes under separate terms of service.
● For more Google Cloud analytics services, visit Data Analytics

Explain Google BigQuery in Google Cloud Platform

Google BigQuery serves as a replacement for the hardware setup a traditional data warehouse requires. In addition, BigQuery organizes table data into units called datasets.

Explain Auto-scaling in Google cloud computing

Auto-scaling lets Google Cloud provision and launch new instances automatically, without human intervention. Auto-scaling is triggered based on various metrics and load.
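A toy simulation of metric-driven scaling; the target utilization and instance bounds below are illustrative, not Google Cloud defaults (real autoscalers also apply cooldowns and scale-in controls):

```python
def scale(instances: int, cpu_utilization: float,
          target: float = 0.6, max_instances: int = 10) -> int:
    # Pick an instance count that moves average CPU toward the target,
    # clamped between 1 and max_instances.
    desired = round(instances * cpu_utilization / target)
    return max(1, min(desired, max_instances))

print(scale(4, 0.9))  # load above target -> scale out
print(scale(4, 0.3))  # load below target -> scale in
```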

Describe Hypervisor in Google Cloud Platform

A hypervisor, also known as a VMM (Virtual Machine Monitor), is computer hardware or software used to create and run virtual machines (a virtual machine is also called a guest machine). The hypervisor runs on a host machine.

Define VPC in the Google cloud platform

A VPC in Google Cloud Platform provides connectivity from your premises to any region without traversing the public internet. VPC connectivity covers Compute Engine virtual machine instances, App Engine flexible environment instances, Kubernetes Engine clusters, and a few other resources, depending on the project. A VPC can also be shared across multiple projects.
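Routing inside a VPC amounts to matching a destination address against subnet CIDR ranges; a minimal sketch (the subnet names and CIDRs are examples, not taken from this article):

```python
import ipaddress

def find_subnet(dest_ip: str, subnets: dict):
    # Return the name of the first subnet whose CIDR contains dest_ip,
    # or None if the address has no route within this VPC.
    addr = ipaddress.ip_address(dest_ip)
    for name, cidr in subnets.items():
        if addr in ipaddress.ip_network(cidr):
            return name
    return None

vpc_subnets = {
    "us-central1-sub": "10.128.0.0/20",
    "europe-west1-sub": "10.132.0.0/20",
}
print(find_subnet("10.132.0.7", vpc_subnets))
```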

GCP Associate Cloud Engineer Exam Prep

References

Steve Nouri

https://www.edureka.co

https://www.kausalvikash.in

https://www.wisdomjobs.com

https://blog.edugrad.com

https://stackoverflow.com

http://www.ezdev.org

https://www.techbeamers.com

https://www.w3resource.com

https://www.javatpoint.com

https://analyticsindiamag.com

Online Interview Questions

https://www.geeksforgeeks.org

https://www.springpeople.com

https://atraininghub.com

https://www.interviewcake.com

https://www.tutorialspoint.com

programmingwithmosh.com

https://www.interviewbit.com

https://www.guru99.com

https://hub.packtpub.com

https://www.dataquest.io

https://www.infoworld.com

 

 


Top 200 AWS Certified Associate SysOps Administrator Practice Quiz – Questions and Answers Dumps

AWS Certified Security – Specialty Questions and Answers Dumps

What is the AWS Certified SysOps Administrator – Associate?

The AWS Certified SysOps Administrator – Associate (SOA-C01) examination is intended for individuals who have technical expertise in deployment, management, and operations on AWS.

The AWS Certified SysOps Administrator – Associate exam covers the following domains:

Domain 1: Monitoring and Reporting 22%

Domain 2: High Availability 8%

Domain 3: Deployment and Provisioning 14%

Domain 4: Storage and Data Management 12%

Domain 5: Security and Compliance 18%

Domain 6: Networking 14%

Domain 7: Automation and Optimization 12%

AWS Certified SysOps Administrator

Top 200 AWS Certified SysOps Administrator Associate Practice Quiz – Questions, Answers and References – SOA-C01:

Download Full PDF here

AWS Certified SysOps Administrator – Associate Study guide and Practice Exam



Question 1: Under which security model does AWS provide secure infrastructure and services, while the customer is responsible for secure operating systems, platforms, and data?


ANSWER1:

C

Get mobile friendly version of the quiz @ the App Store

NOTES/HINT1: The Shared Responsibility Model is the security model under which AWS provides secure infrastructure and services, while the customer is responsible for secure operating systems, platforms, and data.

Question 2: Which type of testing method is used to compare a control system to a test system, with the goal of assessing whether changes applied to the test system improve a particular metric compared to the control system?


ANSWER2:

A


NOTES/HINT2: The side-by-side testing method is used to compare a control system to a test system, with the goal of assessing whether changes applied to the test system improve a particular metric compared to the control system.

Reference2: AWS Side by side testing 

Question 3: When BGP is used with a hardware VPN, the IPSec and the BGP connections must both be which of the following on the same user gateway device?



ANSWER3:

B


NOTES/HINT3: The IPSec and the BGP connections must both be terminated on the same user gateway device.

Reference3: IpSec and BGP in AWS

Question 4: Which pillar of the AWS Well-Architected Framework includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies?

ANSWER4:

D



NOTES/HINT4: Security is the pillar of the AWS Well-Architected Framework that includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reference4: AWS Well-Architected Framework: Security

Question 5: Within the realm of Amazon S3 backups, snapshots are which of the following?

ANSWER5:

A


NOTES/HINT5: Within the realm of Amazon S3 backups, snapshots are block-based.

Reference5: Snapshots are block based


Question 6: Amazon VPC provides the option of creating a hardware VPN connection between remote customer networks and their Amazon VPC over the Internet using which encryption technology?


ANSWER6:

E


NOTES/HINT6: Amazon VPC provides the option of creating a hardware VPN connection between remote customer networks and their Amazon VPC over the Internet using IPsec encryption technology.

Reference6: Amazon VPC IPSec Encryption

Question 7: To make a clean backup of a database, that database should be put into what mode before making a snapshot of it?

ANSWER7:

C



NOTES/HINT7: To make a clean backup of a database, that database should be put into hot backup mode before making a snapshot of it.

Reference7: AWS Prescriptive Backup Recovery Guide

Question 8: Which pillar of the AWS Well-Architected Framework includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve?


ANSWER8:

B


NOTES/HINT8: Performance efficiency is the pillar of the AWS Well-Architected Framework that includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Reference8: Performance Efficiency Pillar – AWS Well-Architected Framework

Question 9: AWS Storage Gateway supports which three configurations?

ANSWER9:

C


NOTES/HINT9: AWS Storage Gateway supports Gateway-stored volumes, Gateway-cached volumes, and Gateway-virtual tape library.

Reference9: AWS Storage Gateway configurations

Question 10: With which of the following can you establish private connectivity between AWS and a data center, office, or co-location environment?

ANSWER10:

B


NOTES/HINT10: With AWS Direct Connect you can establish private connectivity between AWS and a data center, office, or co-location environment.

Reference10: AWS Direct Connect

Question 11: A company is migrating a legacy web application from a single server to multiple Amazon EC2 instances behind an Application Load Balancer (ALB). After the migration, users report that they are frequently losing their sessions and are being prompted to log in again. Which action should be taken to resolve the issue reported by users?


ANSWER11:

D


NOTES/HINT11: Legacy applications designed to run on a single server frequently store session data locally. When these applications are deployed on multiple instances behind a load balancer, user requests are routed to instances using the round robin routing algorithm. Session data stored on one instance would not be present on the others. By enabling sticky sessions, cookies are used to track user requests and keep subsequent requests going to the same instance.

Reference 11: Sticky Sessions
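The behavior in the hint can be sketched with a toy load balancer; the instance names and cookie value are made up:

```python
from itertools import cycle

class LoadBalancer:
    def __init__(self, instances):
        self._rr = cycle(instances)          # round-robin rotation
        self._sticky = {}                    # session cookie -> pinned instance

    def route(self, session_cookie=None):
        if session_cookie is None:           # plain round robin
            return next(self._rr)
        if session_cookie not in self._sticky:
            self._sticky[session_cookie] = next(self._rr)
        return self._sticky[session_cookie]  # same instance every time

lb = LoadBalancer(["i-a", "i-b", "i-c"])
print([lb.route() for _ in range(3)])          # requests hop between instances
print([lb.route("user-1") for _ in range(3)])  # sticky: pinned to one instance
```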

Question 12: An ecommerce company wants to lower costs on its nightly jobs that aggregate the current day’s sales and store the results in Amazon S3. The jobs run on multiple On-Demand Instances, and the jobs take just under 2 hours to complete. The jobs can run at any time during the night. If the job fails for any reason, it needs to be started from the beginning. Which solution is the MOST cost-effective based on these requirements?

A) Purchase Reserved Instances.

B) Submit a request for a Spot block.

C) Submit a request for all Spot Instances.

D) Use a mixture of On-Demand and Spot Instances.

ANSWER12:

B


NOTES/HINT12: The solution will take advantage of Spot pricing, but by using a Spot block instead of Spot Instances, the company can be assured the job will not be interrupted.

Reference12: Spot Block
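The cost reasoning can be made concrete with hypothetical hourly rates (a Spot block is discounted relative to On-Demand, but unlike plain Spot capacity it is not interrupted during its requested duration):

```python
def job_cost(hours: float, hourly_rate: float, instances: int) -> float:
    # Total cost of running a batch job on identical instances.
    return hours * hourly_rate * instances

# Illustrative rates only; real prices vary by instance type and Region.
on_demand = job_cost(2, 0.10, 4)
spot_block = job_cost(2, 0.06, 4)
print(on_demand > spot_block)
```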

Question 13: A sysops team checks their AWS Personal Health Dashboard every week for upcoming AWS hardware maintenance events. Recently, a team member was on vacation and the team missed an event, which resulted in an outage. The team wants a simple method to ensure that everyone is aware of upcoming events without depending on an individual team member checking the dashboard. What should be done to address this?

A) Build a web scraper to monitor the Personal Health Dashboard. When new health events are detected, send a notification to an Amazon SNS topic monitored by the entire team.

B) Create an Amazon CloudWatch Events event based off the AWS Health service and send a notification to an Amazon SNS topic monitored by the entire team.

C) Create an Amazon CloudWatch Events event that sends a notification to an Amazon SNS topic monitored by the entire team to remind the team to view the maintenance events on the Personal Health Dashboard.

D) Create an AWS Lambda function that continuously pings all EC2 instances to confirm their health. Alert the team if this check fails.

ANSWER13:

B

NOTES/HINT13: The AWS Health service publishes Amazon CloudWatch Events. CloudWatch Events can trigger Amazon SNS notifications. This method requires neither additional coding nor infrastructure. It automatically notifies the team of upcoming events, and does not depend upon brittle solutions like web scraping.

Reference 13: Amazon CloudWatch Events
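A minimal sketch of this setup, assuming an existing SNS topic monitored by the team (the topic ARN is a placeholder):

```shell
# Rule that matches all AWS Health events
aws events put-rule \
  --name aws-health-events \
  --event-pattern '{"source": ["aws.health"]}'

# Forward matched events to the team's SNS topic
aws events put-targets \
  --rule aws-health-events \
  --targets 'Id=team-sns,Arn=arn:aws:sns:us-east-1:123456789012:ops-team'
```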

Question14: An application running in a VPC needs to access instances owned by a different account and running in a VPC in a different AWS Region. For compliance purposes, the traffic must not traverse the public internet.
How should a sysops administrator configure network routing to meet these requirements?

A) Within each account, create a custom routing table containing routes that point to the other account’s virtual private gateway.

B) Within each account, set up a NAT gateway in a public subnet in its respective VPC. Then, using the public IP address from the NAT gateway, enable routing between the two VPCs.

C) From one account, configure a Site-to-Site VPN connection between the VPCs. Within each account, add routes in the VPC route tables that point to the CIDR block of the remote VPC.

D) From one account, create a VPC peering request. After an administrator from the other account accepts the request, add routes in the route tables for each VPC that point to the CIDR block of the peered VPC.

ANSWER14:

D

NOTES/HINT14: A VPC peering connection enables routing using each VPC’s private IP addresses as if they were in the same network. Traffic using inter-Region VPC peering always stays on the global AWS backbone and never traverses the public internet.

Reference14: VPC Peering
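The peering workflow can be sketched with the CLI as follows (all VPC, route table, peering and account IDs are placeholders):

```shell
# Requester account: ask to peer with a VPC in another account and Region
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaa1111bbb22222c \
  --peer-vpc-id vpc-0ddd3333eee44444f \
  --peer-owner-id 999999999999 \
  --peer-region eu-west-1

# Accepter account (run in eu-west-1): accept the request
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0

# Both sides: route the remote VPC's CIDR over the peering connection
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```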

Question15: An application running on Amazon EC2 instances needs to access data stored in an Amazon DynamoDB table.

Which solution will grant the application access to the table in the MOST secure manner?

A) Create an IAM group for the application and attach a permissions policy with the necessary privileges. Add the EC2 instances to the IAM group.

B) Create an IAM resource policy for the DynamoDB table that grants the necessary permissions to Amazon EC2.

C) Create an IAM role with the necessary privileges to access the DynamoDB table. Associate the role with the EC2 instances.

D) Create an IAM user for the application and attach a permissions policy with the necessary privileges. Generate an access key and embed the key in the application code.

ANSWER15:

C

NOTES/HINT15: An IAM role can be used to provide permissions for applications that are running on Amazon EC2 instances to make AWS API requests using temporary credentials.

Reference15: IAM Role
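A sketch of associating such a role with an instance, assuming placeholder names and the AWS managed read-only DynamoDB policy (your application may need a narrower custom policy):

```shell
# Role that EC2 can assume, with DynamoDB read access via a managed policy
aws iam create-role \
  --role-name app-ddb-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy \
  --role-name app-ddb-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess

# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name app-ddb-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name app-ddb-profile --role-name app-ddb-role
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=app-ddb-profile
```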

Question16: A third-party service uploads objects to Amazon S3 every night. Occasionally, the service uploads an incorrectly formatted version of an object. In these cases, the sysops administrator needs to recover an older version of the object.
What is the MOST efficient way to recover the object without having to retrieve it from the remote service?

A) Configure an Amazon CloudWatch Events scheduled event that triggers an AWS Lambda function that backs up the S3 bucket prior to the nightly job. When bad objects are discovered, restore the backed up version.

B) Create an S3 event on object creation that copies the object to an Amazon Elasticsearch Service (Amazon ES) cluster. When bad objects are discovered, retrieve the previous version from Amazon ES.

C) Create an AWS Lambda function that copies the object to an S3 bucket owned by a different account. Trigger the function when new objects are created in Amazon S3. When bad objects are discovered, retrieve the previous version from the other account.

D) Enable versioning on the S3 bucket. When bad objects are discovered, access previous versions with the AWS CLI or AWS Management Console.

ANSWER16:

D

NOTES/HINT16: Enabling versioning is a simple solution; (A) involves writing custom code, (C) has no versioning, so the replication will overwrite the old version with the bad version if the error is not discovered quickly, and (B) will involve expensive storage that is not well suited for objects.

Reference16: Versioning
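Enabling versioning and recovering an older version looks roughly like this (bucket name, key, and version ID are placeholders):

```shell
# Turn on versioning for the bucket
aws s3api put-bucket-versioning \
  --bucket nightly-uploads \
  --versioning-configuration Status=Enabled

# After a bad upload: list the object's versions, then fetch the previous one
aws s3api list-object-versions --bucket nightly-uploads --prefix report.csv
aws s3api get-object --bucket nightly-uploads --key report.csv \
  --version-id PREVIOUS_VERSION_ID report.csv
```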

Question17: According to the AWS shared responsibility model, for which of the following Amazon EC2 activities is AWS responsible? (Select TWO.)
A) Configuring network ACLs
B) Maintaining network infrastructure
C) Monitoring memory utilization
D) Patching the guest operating system
E) Patching the hypervisor

ANSWER17:

B and E

NOTES/HINT17: AWS provides security of the cloud, including maintenance of the hardware and hypervisor software supporting Amazon EC2. Customers are responsible for any maintenance or monitoring within an EC2 instance, and for configuring their VPC infrastructure.

Reference17: Security of the cloud

Question18: A security and compliance team requires that all Amazon EC2 workloads use approved Amazon Machine Images (AMIs). A sysops administrator must implement a process to find EC2 instances launched from unapproved AMIs.

Which solution will meet these requirements?
A) Create a custom report using AWS Systems Manager inventory to identify unapproved AMIs.
B) Run Amazon Inspector on each EC2 instance and flag the instance if it is using unapproved AMIs.
C) Use an AWS Config rule to identify unapproved AMIs.
D) Use AWS Trusted Advisor to identify the EC2 workloads using unapproved AMIs.

ANSWER18:

C

NOTES/HINT18: AWS Config has a managed rule that handles this scenario.

Reference18: Managed Rule
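The relevant managed rule is `APPROVED_AMIS_BY_ID`; a sketch of enabling it (the AMI ID is a placeholder, and AWS Config must already be recording EC2 resources):

```shell
# Flag EC2 instances not launched from the approved AMI list
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "approved-amis",
  "Source": {"Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID"},
  "InputParameters": "{\"amiIds\": \"ami-0123456789abcdef0\"}",
  "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]}
}'
```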

Question19: A sysops administrator observes a large number of rogue HTTP requests on an Application Load Balancer. The requests originate from various IP addresses. These requests cause increased server load and costs.

What should the administrator do to block this traffic?
A) Install Amazon Inspector on Amazon EC2 instances to block the traffic.
B) Use Amazon GuardDuty to protect the web servers from bots and scrapers.
C) Use AWS Lambda to analyze the web server logs, detect bot traffic, and block the IP addresses in the security groups.
D) Use an AWS WAF rate-based rule to block the traffic when it exceeds a threshold.

ANSWER19:

D

NOTES/HINT19: AWS WAF has rules that can protect web applications from HTTP flood attacks.

Reference19: HTTP Flood
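A minimal sketch of a rate-based rule with AWS WAFv2, assuming a regional web ACL to be associated with the ALB afterwards (the name, metric names, and the 2000-requests-per-5-minutes limit are example values):

```shell
# Block source IPs that exceed 2000 requests in a 5-minute window
aws wafv2 create-web-acl \
  --name rate-limit-acl \
  --scope REGIONAL \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=rateLimitAcl \
  --rules '[{
    "Name": "rate-limit",
    "Priority": 0,
    "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
    "Action": {"Block": {}},
    "VisibilityConfig": {"SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "rateLimit"}
  }]'
```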

Question20: A sysops administrator is implementing security group policies for a web application running on AWS.

An Elastic Load Balancer connects to a fleet of Amazon EC2 instances that connect to an Amazon RDS database over port 1521. The security groups are named elbSG, ec2SG, and rdsSG, respectively.
How should these security groups be implemented?
A) elbSG: allow port 80 and 443 from 0.0.0.0/0;
ec2SG: allow port 443 from elbSG;
rdsSG: allow port 1521 from ec2SG.

B) elbSG: allow port 80 and 443 from 0.0.0.0/0;
ec2SG: allow port 80 and 443 from elbSG and rdsSG;
rdsSG: allow port 1521 from ec2SG.

C) elbSG: allow port 80 and 443 from ec2SG;
ec2SG: allow port 80 and 443 from elbSG and rdsSG;
rdsSG: allow port 1521 from ec2SG.

D) elbSG: allow port 80 and 443 from ec2SG;
ec2SG: allow port 443 from elbSG;
rdsSG: allow port 1521 from elbSG.

ANSWER20: 

A

NOTES/HINT20: elbSG must allow all web traffic (HTTP and HTTPS) from the internet. ec2SG must allow traffic from the load balancer only, in this case identified as traffic from elbSG. The database must allow traffic from the EC2 instances only, in this case identified as traffic from ec2SG.

Reference20: Allow all traffic
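The security group chaining described above can be sketched with the CLI, referencing each upstream group as the traffic source (all group IDs are placeholders):

```shell
# elbSG: web traffic from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111aaa1111aa --protocol tcp --port 80  --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111aaa1111aa --protocol tcp --port 443 --cidr 0.0.0.0/0

# ec2SG: only traffic arriving from the load balancer's security group
aws ec2 authorize-security-group-ingress --group-id sg-0bbb2222bbb2222bb --protocol tcp --port 443 --source-group sg-0aaa1111aaa1111aa

# rdsSG: only traffic from the instances, on the database port
aws ec2 authorize-security-group-ingress --group-id sg-0ccc3333ccc3333cc --protocol tcp --port 1521 --source-group sg-0bbb2222bbb2222bb
```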

Question21: You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied tor the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from
the specified IP address block.

A) Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block
B) Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
C) Add a rule to all of the VPC's security groups to deny access from the IP address block
D) Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block

ANSWER21:


B

NOTES21: Modify the Network ACLs associated with all public subnets. Network ACLs support explicit deny rules, so the offending CIDR can be blocked quickly across the VPC and the entry removed after 24 hours. Security groups are allow-only and cannot be used to deny traffic.

Reference21: Network ACLs
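Network ACLs are the VPC mechanism that supports explicit deny rules; a hedged sketch of temporarily denying a CIDR at the subnet boundary (NACL ID, rule number, and CIDR are placeholders — the rule number must sort before any allow rules):

```shell
# Deny all traffic from the offending CIDR block
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 90 \
  --protocol -1 \
  --rule-action deny \
  --cidr-block 198.51.100.0/24
```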

Question 22: When preparing for a compliance assessment of your system built inside of AWS, what are three best practices for you to prepare for an audit? Choose 3 answers

A) Gather evidence of your IT operational controls
B) Request and obtain applicable third-party audited AWS compliance reports and certifications
C) Request and obtain a compliance and security tour of an AWS data center for a pre-assessment security review
D) Request and obtain approval from AWS to perform relevant network scans and in-depth penetration tests of your system's instances and endpoints
E) Schedule meetings with AWS’s third-party auditors to provide evidence of AWS compliance that maps to your control objectives

ANSWER22:


A, B, D

NOTES22: You can gather evidence of your own IT operational controls, obtain AWS's third-party audit reports and certifications (for example through AWS Artifact), and request approval for network scans and penetration tests of your instances. AWS does not offer data center tours or direct meetings with its auditors.

Reference22: AWS Audit Manager

Question23: You have started a new job and are reviewing your company's infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer (ELB) in front of web instances in an Auto Scaling group. When you check the metrics for the ELB in CloudWatch, you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances.
What do you need to fix to balance the instances across AZs?

A) Set the ELB to only be attached to another AZ
B) Make sure Auto Scaling is configured to launch in both AZs
C) Make sure your AMI is available in both AZs
D) Make sure the maximum size of the Auto Scaling Group is greater than 4

ANSWER23:


B

NOTES23: The Auto Scaling group must be configured to launch in both AZs (by including subnets from each AZ) so that capacity is balanced across them.

Reference23: AZs

Question24: You have been asked to leverage Amazon VPC, EC2 and SQS to implement an application that submits and receives millions of messages per second to a message queue. You want to ensure your application has sufficient bandwidth between your EC2 instances and SQS.
Which option will provide the most scalable solution for communicating between the application and SQS?

A) Ensure the application instances are properly configured with an Elastic Load Balancer
B) Ensure the application instances are launched in private subnets with the EBS-optimized option enabled
C) Ensure the application instances are launched in public subnets with the associate-publicIP-address=true option enabled
D) Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SOS queue size

ANSWER24:


C

NOTES24: Instances in public subnets with public IP addresses can reach the SQS public endpoint directly through the internet gateway, avoiding the bandwidth limits of a NAT device.

Reference24: SQS

Question25: You have identified network throughput as a bottleneck on your m1.small EC2 instance when uploading data into Amazon S3 in the same region. How do you remedy this situation?

A) Add an additional ENI
B) Change to a larger instance
C) Use DirectConnect between EC2 and S3
D) Use EBS PIOPS on the local volume

ANSWER25:


B

NOTES25: Network throughput scales with EC2 instance size, so moving to a larger instance type increases the available bandwidth.

Reference25: EC2 Best Practices

Question 26: When attached to an Amazon VPC, which two components provide connectivity with external networks? Choose 2 answers

A) Elastic IPs (EIP)
B) NAT Gateway (NAT)
C) Internet Gateway (IGW)
D) Virtual Private Gateway (VGW)

ANSWER26:

C. D.

NOTES26: IGW and VGW
Reference26: IGW – VGW

Question 27: Your application currently leverages AWS Auto Scaling to grow and shrink as load increases/decreases and has been performing well. Your marketing team expects a steady ramp up in traffic to follow an upcoming campaign that will result in a 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet the peak demand is 175. What should you do to avoid potential service disruptions during the ramp up in traffic?

A) Ensure that you have pre-allocated 175 Elastic IP addresses so that each server will be able to obtain one as it launches
B) Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits
C) Change your Auto Scaling configuration to set a desired capacity of 175 prior to the launch of the marketing campaign
D) Pre-warm your Elastic Load Balancer to match the requests per second anticipated during peak demand prior to the marketing campaign
ANSWER: 

B.

NOTES: Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count of 175 instances remains within limits; the default EC2 instance limit is far lower. Because the traffic ramps up steadily over 4 weeks, the ELB scales gradually on its own, so pre-warming is not required.
Reference: AWS Auto Scaling

Question 28: You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check, but these unhealthy instances are not being terminated. What do you need to do to ensure that instances marked unhealthy by the ELB will be terminated and replaced?

A) Change the thresholds set on the Auto Scaling group health check
B) Add an Elastic Load Balancing health check to your Auto Scaling group
C) Increase the value for the Health check interval set on the Elastic Load Balancer
D) Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks
ANSWER: 

B.

NOTES: Add an Elastic Load Balancing health check to your Auto Scaling group. By default, an Auto Scaling group periodically reviews the results of EC2 instance status checks to determine the health state of each instance. However, if you have associated your Auto Scaling group with an Elastic Load Balancing load balancer, you can choose to use the Elastic Load Balancing health check. In this case, Auto Scaling determines the health status of your instances by checking the results of both the EC2 instance status check and the Elastic Load Balancing instance health check.
Reference:  AWS ELB
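Switching an Auto Scaling group to ELB health checks is a one-line change (group name and grace period are example values):

```shell
# Use ELB health checks, with a grace period to let new instances boot
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --health-check-type ELB \
  --health-check-grace-period 300
```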

Question 29: Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers

A) Amazon S3
B) Amazon RDS
C) Amazon EBS
D) Amazon Redshift
ANSWER:

B. D.

NOTES: Amazon RDS provides automated backups with a user-configurable retention period, and Amazon Redshift provides automated snapshots with configurable retention. S3 and EBS require user-managed snapshots or custom tooling.
Reference: RDS and Redshift
ReferenceUrl: RDS and Redshift

Question 30: An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and an Elastic Load Balancer (ELB) configured to use the public subnets. The application's web tier leverages the ELB, Auto Scaling and a multi-AZ RDS database instance. The organization would like to eliminate any potential single points of failure in this design. What step should you take to achieve this organization's objective?

A) Nothing, there are no single points of failure in this architecture.
B) Create and attach a second IGW to provide redundant internet connectivity.
C) Create and configure a second Elastic Load Balancer to provide a redundant load balancer.
D) Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database.

ANSWER

A.

NOTES: There are no single points of failure in this architecture: the ELB is itself a managed, redundant service spanning the public subnets in both AZs, Auto Scaling replaces failed instances across AZs, and the RDS instance is multi-AZ.

Reference: ELB

Question 31: Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers

A) Each subnet maps to a single Availability Zone
B) A CIDR block mask of /25 is the smallest range supported
C) Instances in a private subnet can communicate with the internet only if they have an Elastic IP.
D) By default, all subnets can route between each other, whether they are private or public
E) Each subnet spans at least 2 Availability Zones to provide a high-availability environment
ANSWER

A. D.

NOTES: Each subnet resides entirely within a single Availability Zone, and all subnets in a VPC can route to each other by default. Instances in private subnets reach the internet through a NAT device rather than an Elastic IP, and a subnet never spans multiple AZs.
Reference: VPC

Question 32: You are creating an Auto Scaling group whose instances need to insert a custom metric into CloudWatch. Which method would be the best way to authenticate your CloudWatch PUT request?

A) Create an IAM role with the PutMetricData permission and modify the Auto Scaling launch configuration to launch instances with that role
B) Create an IAM user with the PutMetricData permission and modify the Auto Scaling launch configuration to inject the user's credentials into the instance User Data
C) Modify the appropriate CloudWatch metric policies to allow the PutMetricData permission to instances from the Auto Scaling group
D) Create an IAM user with the PutMetricData permission and put the credentials in a private repository and have applications on the server pull the credentials as needed

ANSWER: 

A.

NOTES: An IAM role with the PutMetricData permission supplies temporary, automatically rotated credentials to the instances. Injecting IAM user credentials via User Data or a repository exposes long-lived secrets and is not recommended.

Reference: IAM
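Once credentials with the `cloudwatch:PutMetricData` permission are available on the instance (however they are provisioned), publishing the custom metric itself looks like this (namespace, metric name, and value are placeholders):

```shell
# The CLI/SDK resolves credentials automatically (e.g. from instance metadata)
aws cloudwatch put-metric-data \
  --namespace MyApp \
  --metric-name QueueDepth \
  --value 42 \
  --unit Count
```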

Question 33: When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?

A) Data is automatically saved as an EBS volume.
B) Data is automatically saved as an EBS snapshot.
C) Data is automatically deleted.
D) Data is unavailable until the instance is restarted.
ANSWER: 

C.

NOTES: The root volume of an instance store-backed (S3-based AMI) instance is ephemeral; its data is automatically deleted when the instance is terminated.
Reference: AWS EC2
ReferenceUrl: AWS EC2 S3-based AMI

Question 34: You have a web application leveraging an Elastic Load Balancer (ELB) in front of web servers deployed using an Auto Scaling group. Your database is running on Relational Database Service (RDS). The application serves out technical articles and responses to them; in general there are more views of an article than there are responses to it. On occasion, an article on the site becomes extremely popular, resulting in significant traffic increases that cause the site to go down. What could you do to help alleviate the pressure on the infrastructure while maintaining availability during these events? Choose 3 answers

A) Leverage CloudFront for the delivery of the articles.
B) Add RDS read-replicas for the read traffic going to your relational database
C) Leverage ElastiCache for caching the most frequently used data.
D) Use SQS to queue up the requests for the technical posts and deliver them out of the queue.
E) Use Route53 health checks to fail over to an S3 bucket for an error page.

ANSWER: 

A. C. E

NOTES: Leverage CloudFront, ElastiCache, Route53
Reference: CloudFront, ElastiCache, Route53

Question 35: The majority of your infrastructure is on premises and you have a small footprint on AWS. Your company has decided to roll out a new application that is heavily dependent on low-latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company's existing application user management processes. What option would you implement to successfully launch this application?

A) Create a second, independent LDAP server in AWS for your application to use for authentication
B) Establish a VPN connection so your applications can authenticate against your existing on-premises LDAP servers
C) Establish a VPN connection between your data center and AWS create a LDAP replica on AWS and configure your application to use the LDAP replica for authentication
D) Create a second LDAP domain on AWS establish a VPN connection to establish a trust relationship between your new and existing domains and use the new domain for authentication

ANSWER: 

C.

NOTES: Establish a VPN between your data center and AWS and create an LDAP replica in AWS. The application authenticates against the low-latency local replica while the existing on-premises user management processes remain unchanged.
Reference: LDAP Replica

SOURCES:

Djamga DevOps  Youtube Channel:

Prepare for Your AWS Certification Exam

2- GoCertify

SYSOPS AND SYSADMIN NEWS

SYSADMIN – SYSOPS RESOURCES

I WANT TO BECOME A SYSADMIN

This is a common topic that has been asked multiple times.

Professional/Non-technical

Sysadmin Utilities

Security

Linux

Microsoft / Windows Server

Virtualization

MacOS (formerly OSX) and Apple iOS

Google ChromeOS

Backup and Storage

Networking

Monitoring

  • Because your network and infrastructure can’t be a black box

Business and Standards Compliance

Major Vulnerabilities

Podcasts

Documentation

Testimonials:

I was initially nervous about this exam compared to SAA-C02, due to the practical labs. However, they turned out to be really easy with lots of time to fumble about, delete & recreate resources.

My labs:

Create S3 buckets, set access logs, set default encryption with KMS and create a bunch of lifecycle policies

Create a VPC with public/private subnets, create SGs, create & send flow logs to an S3 bucket.

Connect Lambda to a VPC, use RDS proxy to connect to an RDS Database. Select correct execution role for the Lambda.

Exam lab experience

I did not have any negative experiences with the lab environment (I heard a lot of horror stories), however I did take the exam at a testing center.

When you register for your SOA-C02, you gain access (via Pearson VUE E-mail) to a free sample exam lab at Login – OneLearn Training Management System – Skillable – this is the exact same testing environment you will have during the actual exam. I highly recommend you do this, especially if you’re doing the exam from home – any issues you have with the testing environment like laggy interface, copy/paste issues, etc you’ll probably also have during the exam.

Study resources

My study resources were:

Adrian Cantrill’s course

Jon Bonso’s (TutorialDojo) Practice Exams

u/acantril’s courses are the best, most high-quality courses I’ve ever taken for any subject.

Since I’ve done the SAA-C02 course before doing the SOA-C02 course, I was able to easily skip the shared lessons & demos (there is heavy overlap between these two exams) and focus on the SOA-C02 specific topics.

u/Tutorials_Dojo’s practice exams are 10/10 as preparation material. They were a bit trickier (in a ‘gotcha’ kind of way) compared to the exam questions, but they were very close to the real thing.

Study methodology

My study plan was as follows:

Study Time: 7:00-9:00 (morning) Mon-Fri, which included:

Going through Adrian’s course

Detailed notes in markdown

Doing potential exam labs in AWS console

Reading AWS official documentation (in case something is not clear)

Review Notes regularly (once course material finished)

Practice Exams

Doing exams in review mode

Delving deeper into topics I was lacking in

This was the plan, but I turned out to be somewhat inconsistent, taking the exam 3 months later than planned due to being a new father and not focusing on just one thing (also did some Python learning during the same period). But, still a pass!

Source: r/AWSCertification

AWS SYSOPS Administrator SOA-C01 Exam Prep

Latest DevOps and SysAdmin Feed

What is DevOps in Simple English?

What is a System Administrator?

DevOps: In the IT world, DevOps means Development Operations. DevOps is the bridge between the developers, the servers and the infrastructure, and its main role is to automate the process of delivering code to operations.
DevOps on Wikipedia: a software development process that emphasizes communication and collaboration between product management, software development, and operations professionals. DevOps also automates the process of software integration, testing, deployment and infrastructure changes.[1][2] It aims to establish a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.

DevOps Latest Feeds


DevOps Resources

  1. What is DevOps? Tackling some frequently asked questions
  2. Find Remote DevOps Jobs here.