AWS Azure Google Cloud Certifications Testimonials and Dumps

Register to AI Driven Cloud Cert Prep Dumps

The AI Dashboard is available on the Web and on the Apple, Google, and Microsoft app stores (PRO version)


Do you want to become a Professional DevOps Engineer, a Cloud Solutions Architect, a Cloud Engineer, a modern Developer or IT Professional, a versatile Product Manager, or a hip Project Manager? If so, cloud skills and certifications can be just the thing you need to make the move into the cloud, or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

In this blog, we share AWS, Azure, and GCP cloud certification testimonials, along with frequently asked questions and answers.

Get 20% off the Google Workspace (Google Meet) Standard Plan with the following code: 96DRHDRA9J7GTN6
Get 20% off the Google Workspace (Google Meet) Business Plan (AMERICAS) with the following codes: C37HCAQRVR7JTFK, M9HNXHX3WC9H7YE (Email us for more codes)
AWS Developer Associates DVA-C01 PRO


AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Went through the entire CloudAcademy course. Most of the info went in one ear and out the other. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.

I was going to take it last Saturday, but I bought Tutorials Dojo’s exams on Udemy instead. Did one Friday night, got a 50%, and rescheduled the exam for a week later, which was today (Sunday).

AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence (OpenAI, ChatGPT, Google Gemini, Generative AI, Discriminative AI, xAI, LLMs, GPUs, Machine Learning, NLP, Prompt Engineering)

Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they covered way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well-written explanations, and their links to the white papers let me know what and where to read.

Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to Tutorials Dojo’s Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way overprepared than underprepared.

Just Passed My CCP exam today (within 2 weeks)

I would like to thank this community for the recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD practice exams’ scenario-based questions; the real exam is a lot less wordy). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I recommend scheduling the exam beforehand so that you don’t procrastinate.

If you are looking for an all-in-one solution to help you prepare for the AWS Cloud Practitioner Certification Exam, look no further than this AWS Cloud Practitioner CCP CLF-C02 book


-Stephane’s course on Udemy (I have seen people say to skip the hands-on videos, but I found them extremely helpful for understanding most of the concepts, so try not to skip them)

-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)

Previous AWS knowledge:

-Very little to no experience (deployed my group’s app to the cloud via Elastic Beanstalk in college; had zero clue at the time about what I was doing, but had clear guidelines)

Preparation duration: 2 weeks (honestly, watched videos for 12 days and then went over the summaries and practice tests in the last two days)

Links to resources:

I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3-4 per day) till I was consistently getting over 80%. Passed my exam with an 882.

Djamgatech: Build the skills that’ll drive your career into six figures: Get Djamgatech.

Passed – CCP CLF-C01


What an adventure. I’d never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀

Passed with two weeks of prep (after work and weekends)

Resources Used:


    • This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.

  • Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo

    • These are some good prep exams; they ask the questions in a way that actually makes you think about the related AWS service, with only a few “Bullshit! That was asked in a confusing way” questions popping up.

Passed AWS CCP. The score was beyond expected

I took the CCP 2 days ago and got the pass notification right after submitting the answers. Within about the next 3 hours I got an email from Credly with the badge. This morning I got an official email from AWS congratulating me on passing; the score was much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams, I failed one and passed the rest with about 700-800. But in the real exam, I got an 860. The questions in the real exam are kind of less verbose IMO, but I don’t fully agree with some people on this sub saying that they are easier.
Just a little bit of sharing, now I’ll find something to continue ^^

Good luck with your own exams.

Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.


– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).

– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.

Study stats


– Spent three weeks studying for the exam.

– Studied an hour to two every day.

– Solved 800-1000 practice questions.

– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly sift through on my phone and computer for review. The screenshots were of questions that I either didn’t know, knew but was iffy on, or believed I’d easily forget.

– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.

– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.

– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.

– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.

Resources used

– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal

– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis

– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek**

– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*

– One or two free practice exams found by a quick Google search

*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full-length course available for free on YouTube under the freeCodeCamp channel.) The online course provides five practice exams. I did not take any of them.

**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.


– My study regimen (i.e., an hour to two every day for three weeks) was overkill.

– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.

– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.

What would I do differently?

– Focus on practice tests only. No video lectures.

– Focus on the technologies domain. You can intuit your way through questions in the other domains.

– Chill

What are the Top 100 AWS jobs you can get with an AWS certification in 2022 plus AWS Interview Questions
AWS SAA-C02 SAA-C03 Exam Prep

Just passed SAA-C03, thoughts on it

  • Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.

  • The questions are actually quite detailed, as some have already mentioned, so pay close attention to the minute details. Some questions you definitely have to flag for re-review.

  • It is by far harder than the Developer Associate exam, despite having a broader scope. The DVA-C02 exam was like doing a speedrun, but this felt like finishing off Sigrun in GoW. Ya gotta take your time.

I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.

Passed SAA-C03 – Feedback

Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.

I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar given my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved on to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4/6 of the exams with 70s grades); I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one of the 6 (which I passed).

Overall, the practice exams ended up being a lot harder than the real exam, which had mostly the regular/base topics: a LOT of S3 and storage in general, a decent amount of migration questions, only a couple of questions on VPCs, and no ML/AI stuff.

My Study Guide for passing the SAA-C03 exam

Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed the test and thought I’d share a real exam experience of taking this challenging test.

First off, my background: I have 8 years of development experience and have been doing AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.

SAA-C03 Exam Prep

For my exam prep, I bought Adrian Cantrill’s video course and the Tutorials Dojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.

The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After completing Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.

For the TD practice exams, I took the exams chronologically and didn’t jump back and forth until I completed all the tests. I first tried all 7 timed-mode tests, reviewing every wrong answer after every attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if I fail a specific test, so I would retake any test that I failed.

The Actual SAA-C03 Exam

The actual AWS exam is almost the same as the TD tests, where:

  • All of the questions are scenario-based

  • There are two (or more) valid solutions in the question, e.g.:

    • Need SSL: options are ACM and self-signed URL

    • Need to store DB credentials: options are SSM Parameter Store and Secrets Manager

  • The scenarios are long-winded and ask for:

    • MOST Operationally efficient solution

    • MOST cost-effective

    • LEAST amount of overhead

Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps anyone who is struggling.
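On the Parameter Store vs Secrets Manager pairing mentioned above, the usual tiebreaker in these scenario questions is whether the scenario demands built-in automatic credential rotation (Secrets Manager) or just cheap encrypted configuration storage (a SecureString parameter). Here is a toy Python helper capturing that exam heuristic; it is purely illustrative and not an AWS API:

```python
def pick_secret_store(needs_rotation: bool, cross_account_access: bool = False) -> str:
    """Toy exam heuristic for choosing between SSM Parameter Store and Secrets Manager.

    Secrets Manager supports native automatic rotation and cross-account
    resource policies; a Parameter Store SecureString is the low-cost option
    when you only need encrypted key/value configuration.
    """
    if needs_rotation or cross_account_access:
        return "AWS Secrets Manager"
    return "SSM Parameter Store (SecureString)"

# RDS credentials that must rotate automatically -> Secrets Manager
print(pick_secret_store(needs_rotation=True))
# A static API endpoint or license key -> Parameter Store
print(pick_secret_store(needs_rotation=False))
```

The same "which requirement is the differentiator" reading applies to the ACM vs self-signed option pair as well.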

Another Passed SAA-C03?

Just another thread about passing the exam. I passed SAA-C03 yesterday and would like to share my experience of how I earned the certification.


– graduated with a networking background

– working experience in on-premises infrastructure automation, mainly using Ansible, Python, Zabbix, etc.

– cloud experience: a short period, like 3-6 months, with practice

– provisioned cloud applications using Terraform in Azure and AWS

Course that I used fully:

– AWS Certified Solutions Architect – Associate (SAA-C03) | learn.cantri (

– AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path (

Course that I used partially or little:

– Ultimate AWS Certified Solutions Architect Associate (SAA) | Udemy

– Practice Exams | AWS Certified Solutions Architect Associate | Udemy

Lab that I used:

– Free tier account with cantrill instruction

– Acloudguru lab and sandbox

– Percepio lab

Comment on course:

Cantrill’s course has depth and a lot of practical knowledge (like email aliases, etc.); check it out to know more.

The Tutorials Dojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guideline/review in the practice exams provides pretty much all the detail. I did all the other modes before the timed-based mode; after that I averaged 850 on the timed-based exams, and scored 63/65 on the final practice exam. However, the real exam is harder than the practice exams in my opinion.

Udemy course and practice exams: I went through some of them, but I think the practice exams are quite hard compared to Tutorials Dojo’s.

Labs: just get your hands dirty and the knowledge will sink deep into your brain. My advice is to not just do copy-and-paste labs, but really read the description of each parameter in the AWS portal.


You need to know some general exam topics, like:

– S3 private access

– EC2 availability

– Kinesis products, including Firehose, Data Streams, etc.

– IAM

My next targets will be AWS SAP and CKA. I am still searching for suitable material for AWS SAP, but I plan to mainly use the ACloudGuru sandbox and a home lab to learn the subject, and practice with Cantrill’s labs on GitHub.

Good luck anyone!

Passed SAA

I wanted to give my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult; in my opinion, much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.

I only had a couple of questions on ML/AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these inside and out.

My company is offering stipends for each certification, so I’m going straight to Developer next.

Recently passed SAA-C03

Just passed my SAA-C03 yesterday with 961 points. My first time doing AWS certification. I used Cantrill’s course. Went through the course materials twice, and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed and probably go beyond what you’d need for the actual exam.

I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing: they got me used to the type of questions in the actual exam and helped me review missing knowledge. I would not have passed otherwise.

Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:

* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.

* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner.

* Pinpoint journeys: question about customers replying to an SMS that was sent out, and then storing their feedback.

Not sure if they are graded or Amazon testing out new parts.
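On the S3 Requester Pays item above: the cost-minimizing answer is typically to enable Requester Pays on the bucket so the partner’s account is billed for requests and data transfer, and the partner must then acknowledge this on each request. Below is a minimal sketch of the request parameters involved; the bucket and key names are hypothetical, while `RequestPayer='requester'` is the actual S3/boto3 parameter:

```python
def build_get_object_params(bucket: str, key: str, requester_pays: bool) -> dict:
    """Assemble kwargs for an S3 GetObject call (boto3-style).

    Requester Pays buckets reject requests that omit the RequestPayer
    acknowledgement, so the requesting account must opt in explicitly.
    """
    params = {"Bucket": bucket, "Key": key}
    if requester_pays:
        # The requester's account, not the bucket owner's, is billed
        params["RequestPayer"] = "requester"
    return params

# Hypothetical usage with a real client would look like:
#   s3 = boto3.client("s3")
#   s3.get_object(**build_get_object_params("shared-dataset-bucket", "data/2022.csv", True))
print(build_get_object_params("shared-dataset-bucket", "data/2022.csv", True))
```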


Another SAP-C01-Pass

Received my notification this morning that I passed with an 811.

Prep time: 10 weeks, 2 hrs a day

Materials: Neal Davis videos/practice exams, Jon Bonso practice exams, white papers, misc YouTube videos, some hands-on practice

Prof experience: 4 years of AWS, using the main services as an architect


Thoughts: The exam was way more familiar to me than the Developer exam. I use very few AWS developer tools and mainly use core AWS services. Neal’s videos were very straightforward, easy to digest, and on point. I was able to watch most of the videos on a plane flight to Vegas.

After the video series I started to hit his section-based exams, main exam, and notes, and followed up with some hands-on work. I was getting destroyed on some of the exams early on and had to rewatch and research the topics, writing notes. There is a lot of nuance and fine detail in the topics; you’ll see this when you take the practice exams. These little details matter.

Bonso’s exams were nothing less than awesome, as per usual. Same difficulty and quality as Neal Davis. I followed the same routine: section-based exams followed by the final exams. I believe Neal said to aim for 80s on his final exams before sitting the real exam. I’d agree, because that’s where I was hitting a week before the exam (mid 80s). Both Neal’s and Jon’s exams were on par with the real exam’s difficulty, if not a shade more difficult.

The exam itself was very straightforward. My experience was that the questions were not overly verbose and were straight to the point compared to the practice exams I took. I was able to quickly narrow down the questions and make a selection. I flagged 8 questions along the way and had 30 minutes to review all my answers. Unlike some people, I didn’t feel like it was a brain melter and actually enjoyed the challenge. Maybe I’m a sadist, who knows.

Advice: Follow Neal’s plan, bone up on weak areas, and be confident. The questions have a pattern based on the domain. Doing the practice exams enough will allow you to see the pattern, and then research will confirm your suspicions. You can pass this exam!

Good luck to those preparing now, and godspeed.

AWS Developer Associate DVA-C01 Exam Prep

I Passed AWS Developer Associate Certification DVA-C01 Testimonials

AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers
AWS Developer Associate DVA-C01 Exam Prep

Passed DVA-C01

Passed the certified developer associate this week.

Primary study was Stephane Maarek’s course on Udemy.

I also used the Practice Exams by Stephane Maarek and Abhishek Singh.

I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.

The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.

Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.

Cleared AWS Certified Developer – Associate (DVA-C01)


I cleared the Developer Associate exam yesterday. I scored 873.
Actual exam experience: more questions focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the proper difference between user pools and identity pools).
I found 3 questions just on Redis vs Memcached (so maybe focus more here as well, to know the exact use cases and differences). Other topics were CloudFormation, Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me; some questions were one-liners and some were too long.

Resources: The main resource I used was Udemy: Stéphane Maarek’s course, plus practice exams from Neal Davis and Stéphane Maarek. These exams proved really good and even helped me focus on the areas where I was lacking. They are up to the level of the actual exam; I found 3-4 exact same questions in the actual exam (this might be just luck!). So I feel Stephane’s course is more than sufficient and you can trust it. I had achieved the Solutions Architect Associate previously, so I knew the basics; I took around 2 weeks for preparation and revised Stephane’s course as much as possible. In parallel, I took the mentioned practice exams, which guided me on where to focus more.

Thanks to all of you, and feel free to comment/DM me if you think I can help you in any way with achieving the same.

Another Passed Associate Developer Exam (DVA-C01)

I had already passed the Associate Architect exam (SAA-C03) 3 months ago, so I was much more relaxed going into this exam. I did the exam with Pearson VUE at home with no problems. Used Adrian Cantrill for the course, together with the TD exams.

Studied 2 weeks at 1-2 hours a day, since there is a big overlap with the Associate Architect course, even though the exam has a different approach, more focused on the serverless side of AWS. Lots of DynamoDB, Lambda, API Gateway, KMS, CloudFormation, SAM, SSO, Cognito (user pools and identity pools), and IAM role/credentials best practices.

I do think in terms of difficulty it was a bit easier than the Associate Architect, though maybe that’s just in my mind, as it was my second exam and I went in a bit more relaxed.

Next step is going for the Associate SysOps. I will use the Adrian Cantrill and Stephane Maarek courses, as it is said to be the most difficult associate exam.

Passed the SCS-C01 Security Specialty 


A mixture of Tutorials Dojo practice exams, the A Cloud Guru course, and Neal Davis’ course & exams helped a lot. Some unexpected questions caught me off guard, but with educated guessing, thanks to the material I studied, I was able to overcome them. It’s important to understand:

  1. KMS Keys

    1. AWS Owned Keys

    2. AWS Managed KMS keys

    3. Customer Managed Keys

    4. Asymmetric keys

    5. Symmetric keys

    6. Imported key material

    7. What services can use AWS Managed Keys

  2. KMS Rotation Policies

    1. Depending on the key matters the rotation that can be applied (if possible)

  3. Key Policies

    1. Grants (temporary access)

    2. Cross-account grants

    3. Permanent policies

    4. How permissions are distributed depending on the assigned principal

  4. IAM Policy format

    1. Principals (supported principal types)

    2. Conditions

    3. Actions

    4. Allow to a service (ARN or public AWS URL)

    5. Roles

  5. Secrets Management

    1. Credential Rotation

    2. Secure String types

    3. Parameter Store

    4. AWS Secrets Manager

  6. Route 53

    1. DNSSEC

    2. DNS Logging

  7. Network

    1. AWS Network Firewall

    2. AWS WAF (some questions try to trick you into thinking AWS Shield is needed instead)

    3. AWS Shield

    4. Security Groups (Stateful)

    5. NACL (Stateless)

    6. Ephemeral Ports

    7. VPC FlowLogs

  8. AWS Config

    1. Rules

    2. Remediation (custom or AWS managed)

  9. AWS CloudTrail

    1. AWS Organization Trails

    2. Multi-Region Trails

    3. Centralized S3 Bucket for multi-account log aggregation

  10. AWS GuardDuty vs AWS Macie vs AWS Inspector vs AWS Detective vs AWS Security Hub

It gets more in-depth than this; I’m willing to help out anyone that has questions. If you don’t mind joining my Discord to discuss amongst others and help each other out, that would be great: a study group community. Thanks. I had to repost because of a typo 🙁
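To make item 4 (IAM policy format) above concrete, here is a minimal sketch of a resource-based policy statement built as a Python dict. The account ID, role name, and bucket names are hypothetical; the policy elements (Version, Principal, Action, Resource, Condition) follow the standard IAM policy grammar:

```python
import json

# Hypothetical identifiers, for illustration only
ACCOUNT_ID = "111122223333"
ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/audit-reader"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAuditRead",
            "Effect": "Allow",
            # Supported principal types include AWS, Service, and Federated
            "Principal": {"AWS": ROLE_ARN},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::central-cloudtrail-logs",
                "arn:aws:s3:::central-cloudtrail-logs/*",
            ],
            # Conditions narrow when the statement applies (here: TLS only)
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A bucket policy along these lines is also the kind of construct behind the centralized multi-account CloudTrail log bucket mentioned in item 9.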

Passed the Security Specialty

Passed Security Specialty yesterday.

Resources used were:

Adrian (for the labs), Jon (for the test bank).

Total time spent studying was about a week due to the overlap with the SA Pro I passed a couple weeks ago.

Now working on getting Networking Specialty before the year ends.

My longer term goal is to have all the certs by end of next year.


Advanced Networking - Specialty


Passed the AWS Certified Advanced Networking – Specialty (ANS-C01) 2 days ago


This was a tough exam.

Here’s what I used to get prepped:

Exam guide book by Kam Agahian and a group of authors: this just got released and has all you need in a concise manual. It also includes 3 practice exams. This is a must-buy for future reference and covers ALL current exam topics, including container networking, SD-WAN, etc.

Stephane Maarek’s Udemy course: it is mostly up to date with the main exam topics, including TGW, Network Firewall, etc. To-the-point lectures with lots of hands-on demos that give you just what you need; highly recommended as well!

Tutorials Dojo practice tests to drive it home: these helped me get an idea of the question wording, so I could train myself to read fast, pick out key words, compare similar answers, and build confidence in my knowledge.

Crammed daily for 4 weeks (after work; I have a full-time job + family), then went in and nailed it. I do have a networking background (15+ years), and I am currently working as a cloud security engineer, working with AWS daily, especially EKS, TGW, GWLB, etc.

For those not from a networking background – it would definitely take longer to prep.

Good luck!

Azure Fundamentals AZ900 Certification Exam Prep
#Azure #AzureFundamentals #AZ900 #AzureTraining #LearnAzure #Djamgatech


Passed AZ-900, SC-900, AI-900, and DP-900 within 6 weeks!

Achievement Celebration

What an exciting journey. I think AZ-900 was the hardest, probably because it was my first Microsoft certification. After that, the others were fair enough. AI-900 was the easiest.

I generally used Microsoft Virtual Training Day, Cloud Ready Skills, MeasureUp, and John Savill’s videos. Having built fundamental knowledge of the cloud, I am planning to do the AWS CCP next. Wish me luck!

Passed Azure Fundamentals

Learning Material

Hi all,

I passed my Azure Fundamentals exam a couple of days ago with a score of 900/1000. I had been meaning to take the exam for a few months but kept putting it off for various reasons. The exam was a lot easier than I thought, and easier than the official Microsoft practice exams.

Study materials:

  • A Cloud Guru AZ-900 fundamentals course with practice exams

  • Official Microsoft practice exams

  • MS learning path

  • John Savill’s AZ-900 study cram, started this a day or two before my exam. (Highly Recommended)

Will be taking my AZ-104 exam next.

Azure Administrator AZ104 Certification Exam Prep

Passed AZ-104 with about 6 weeks of prep

Learning Material

Resources:

John Savill’s AZ-104 Exam Cram + Master Class; Tutorials Dojo Practice Exams

John’s content is the best out there right now for this exam IMHO. I watched the cram, then the entire master class, followed by the cram again.

The Tutorials Dojo practice exams are essential. Some questions on the actual exam were almost word-for-word what I saw on the practice exams.


What’s everyone using for the AZ-305? Obviously, I’m already using John’s content, and from what I’ve read the 305 isn’t too bad.


Passed the AZ-140 today!!

Achievement Celebration

I passed the (updated?) AZ-140, AVD specialty exam today with an 844. First MS certification in the bag!

Edited to add: This video series from Azure Academy was a TON of help.

Passed DP-900

Achievement Celebration

I am pretty proud of this one. Databases are an area of IT where I haven’t spent a lot of time, and what time I have spent has been with SQL or MySQL with old school relational databases. NoSQL was kinda breaking my brain for a while.

Study Materials:

  1. Microsoft Virtual Training Day, got the voucher for the free exam. I know several people on here said that was enough for them to pass the test, but that most certainly was not enough for me.

  2. DP-900 course and practice test. They include virtual flashcards which I really liked.

  3. practice tests. I also used the course to fill in gaps in my testing.

Passed AI-900! Tips & Resources Included!!

Azure AI Fundamentals AI-900 Exam Prep
Achievement Celebration

Huge thanks to this subreddit for helping me kick start my Azure journey. I have over 2 decades of experience in IT and this is my 3rd Azure certification as I already have AZ-900 and DP-900.

Here’s the order in which I passed my AWS and Azure certifications:


I had no plans to take this certification now, but had to as the free voucher was expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can.

Here’s my study plan for AZ-900 and DP-900 exams:

  • finish a popular video course aimed at the cert

  • watch John Savill’s study/exam cram

  • take multiple practice exams scoring in 90s

This is what I used for AI-900:

  • Alan Rodrigues’ video course (includes 2 practice exams) 👌

  • John Savill’s study cram 💪

  • practice exams by Scott Duffy and in 28Minutes Official 👍

  • knowledge checks in AI modules from MS learn docs 🙌

I also found the below notes extremely useful as a refresher; they can be played multiple times throughout your preparation, as the exam cram part is just around 20 minutes. 👏

Just be clear on the topics explained in the above video and you’ll pass AI-900. I advise you to watch this video at the start, middle, and end of your preparation. All the best in your exam!

Just passed AZ-104

Achievement Celebration

I recommend studying networking, as almost all of the questions are related to this topic. Also, AAD is a big one. Lots of load balancers, VNets, NSGs.

Received very little of this:

  • Containers

  • Storage

  • Monitoring

I passed with a 710 but a pass is a pass haha.

I used Tutorials Dojo, but the questions closest to the real thing were in the Udemy practice exams.


Passed GCP Professional Cloud Architect

Google Professional Cloud Architect Practice Exam 2022

First of all, I would like to start with the fact that I already have around 1 year of experience with GCP in depth, where I was working on GKE, IAM, storage and so on. I also obtained GCP Associate Cloud Engineer certification back in June as well, which helps with the preparation.

I started with Dan Sullivan’s Udemy course for the Professional Cloud Architect exam and refreshed the topics I was less familiar with, such as Bigtable, BigQuery, Dataflow, and so on. His videos on the case studies help a lot in understanding what each scenario requires for designing the most cost-effective architecture.

In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.
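Since command syntax comes up on the exam, here is a short sketch of representative gcloud commands of the kind the documentation covers. The project, instance, cluster, and service-account names below are placeholders, not values from the original post:

```shell
# Set the active project and a default zone
gcloud config set project my-demo-project
gcloud config set compute/zone us-central1-a

# Create a Compute Engine VM instance
gcloud compute instances create web-1 --machine-type=e2-medium

# Create a GKE cluster and fetch kubectl credentials for it
gcloud container clusters create demo-cluster --num-nodes=3
gcloud container clusters get-credentials demo-cluster

# Inspect IAM service accounts and grant a role
gcloud iam service-accounts list
gcloud projects add-iam-policy-binding my-demo-project \
  --member=serviceAccount:sa@my-demo-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer
```

These require the Google Cloud SDK to be installed and authenticated; they are shown only to illustrate the command patterns worth memorizing.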

As for practice exams, I definitely recommend Whizlabs. It helped me shore up the areas I was weak in and grasp topics much faster than reading through the documentation alone. It will also give you a feel for the kind of questions that appear on the exam.

I used Tutorials Dojo (Jon Bonso) to prepare for the Associate Cloud Engineer exam before, and by comparison Whizlabs is not as good. Still, Whizlabs helps a lot in tackling the tough questions you will come across during the examination.

One thing to note: not a single question was similar to the ones from the Whizlabs practice tests, at least in terms of content. I got totally different scenarios for both the case-study and non-case-study questions. Many questions focused on App Engine, data analytics, and networking. There were some Kubernetes questions on Anthos and cluster networking, and a tough storage question as well.

I initially thought I would fail, but I pushed on, tackling the multiple choices by process of elimination using the keywords in the questions. Fifty questions in two hours is tough, especially given the lengthy questions and answer options. I don’t know how this compares to the AWS Solutions Architect Professional exam in toughness, but some people say the GCP Professional is tougher.

All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.

GCP Associate Cloud Engineer Exam Prep

Passed GCP: Cloud Digital Leader

Hi everyone,

First, thanks for all the posts people share. It helps me prep for my own exam. I passed the GCP: Cloud Digital Leader exam today and wanted to share a few things about my experience.


I have access to A Cloud Guru (ACG) and Udemy through work. I started one of the Udemy courses first, but it was clear the course went beyond the scope of the Cloud Digital Leader certification. I switched over to ACG and enjoyed the content a lot more. The videos were short, and the instructor hit all the topics on Google’s exam requirements sheet.

ACG also has three 50-question practice tests. They are harder than the actual exam (and even they aren’t that hard).

I don’t know if someone could pass the test if they just watched the videos on Google Cloud’s certification site, especially if you had no experience with GCP.

Overall, I would say I spent 20 hours preparing for the exam. I have my CISSP and I’m working on my CCSP. After taking the test, I realized I had way over-prepared.

Exam Center

It was my first time at this testing center and I wasn’t happy with the experience. A few of the issues I had are:

– My personal items (phone, keys) were placed in an unlocked filing cabinet

– My desk area was dirty. There were eraser shavings (or something similar), and I had to move the keyboard and mouse to brush all the debris out of my workspace

– The laminated sheet they gave me looked like someone had spilled Kool-Aid on it

– They only offered earplugs, instead of noise cancelling headphones


My recommendation for the exam is to know the Digital Transformation piece as well as you know all the GCP services and what they do.

I wish you all luck on your future exams. Onto GCP: Associate Cloud Engineer.

Passed the Google Cloud: Associate Cloud Engineer

Hey all, I was able to pass the Google Cloud: Associate Cloud Engineer exam in 27 days.

I studied about 3-5 hours every single day.

I created this note to share the resources I used to pass the exam.

Happy studying!

GCP ACE Exam Aced

Hi folks,

I am glad to share with you that I cleared my GCP ACE exam today, and I would like to share my preparation with you:

1) I completed these courses from Coursera:

1.1 Google Cloud Platform Fundamentals – Core Infrastructure

1.2 Essential Cloud Infrastructure: Foundation

1.3 Essential Cloud Infrastructure: Core Services

1.4 Elastic Google Cloud Infrastructure: Scaling and Automation

After these courses, I did a couple of Qwiklabs quests, in order:

2 Getting Started: Create and Manage Cloud Resources (Qwiklabs Quest)

   2.1 A Tour of Qwiklabs and Google Cloud

   2.2 Creating a Virtual Machine

   2.3 Compute Engine: Qwik Start – Windows

   2.4 Getting Started with Cloud Shell and gcloud

   2.5 Kubernetes Engine: Qwik Start

   2.6 Set Up Network and HTTP Load Balancers

   2.7 Create and Manage Cloud Resources: Challenge Lab

 3 Set up and Configure a Cloud Environment in Google Cloud (Qwiklabs Quest)

   3.1 Cloud IAM: Qwik Start

   3.2 Introduction to SQL for BigQuery and Cloud SQL

   3.3 Multiple VPC Networks

   3.4 Cloud Monitoring: Qwik Start

   3.5 Deployment Manager – Full Production [ACE]

   3.6 Managing Deployments Using Kubernetes Engine

   3.7 Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab

 4 Kubernetes in Google Cloud (Qwiklabs Quest)

   4.1 Introduction to Docker

   4.2 Kubernetes Engine: Qwik Start

   4.3 Orchestrating the Cloud with Kubernetes

   4.4 Managing Deployments Using Kubernetes Engine

   4.5 Continuous Delivery with Jenkins in Kubernetes Engine

After these, I used the following for mock-exam preparation:

  1. Jon Bonso Tutorial Dojo -GCP ACE preparation

  2. Udemy course:

And yes folks, this took me 3 months of preparation, so take your time.


Comparison of AWS vs Azure vs Google

Cloud computing has revolutionized the way companies develop applications. Most of the modern applications are now cloud native. Undoubtedly, the cloud offers immense benefits like reduced infrastructure maintenance, increased availability, cost reduction, and many others.

However, choosing a cloud vendor is a challenge in itself. Looking at the cloud computing landscape, the three main providers that come to mind are AWS, Azure, and Google Cloud. Today, we will compare the top three cloud giants and see how they differ in services, specialties, and pros and cons. After reading this article, you will be able to decide which cloud vendor is best suited to your needs and why.

History and establishment


AWS is the oldest player in the market, operating since 2006. Being the first in the cloud industry, it has gained a particular advantage over its competitors. It offers more than 200 services to its users. Some of its notable clients include:

  • Netflix
  • Expedia
  • Airbnb
  • Coursera
  • FDA
  • Coca Cola


Azure by Microsoft started in 2010. Although it started four years later than AWS, it is catching up quite fast. Azure is Microsoft’s public cloud platform which is why many companies prefer to use Azure for their Microsoft-based applications. It also offers more than 200 services and products. Some of its prominent clients include:

  • HP
  • Asus
  • Mitsubishi
  • 3M
  • Starbucks
  • Centers for Disease Control and Prevention (CDC), USA
  • National Health Service (NHS), UK


Google Cloud also started in 2010. Its arsenal of cloud services is relatively small compared to AWS or Azure, at around 100 services. However, its services are robust, and many companies embrace Google Cloud for its specialty services. Some of its noteworthy clients include:

  • PayPal
  • UPS
  • Toyota
  • Twitter
  • Spotify
  • Unilever

Market share & growth rate

If you look at the market share and growth chart below, you will notice that AWS has been leading for more than four years. Azure is also expanding fast, but it still has a long way to go to catch up with AWS.

However, in terms of reported cloud revenue, Microsoft is ahead of AWS. In Q1 2022, AWS revenue was $18.44 billion, Microsoft’s cloud business earned $23.4 billion (a figure that covers more than Azure alone), and Google Cloud earned $5.8 billion.

Availability Zones (Data Centers)

When comparing cloud vendors, it is essential to see how many regions and availability zones are offered. Here is a quick comparison between all three cloud vendors in terms of regions and data centers:


AWS operates in 25 regions and 81 availability zones. It also offers 218+ edge locations and 12 regional edge caches, which you can utilize through services like Amazon CloudFront and AWS Global Accelerator.


Azure has 66 regions worldwide and a minimum of three availability zones in each region. It also offers more than 116 edge locations.


Google has a presence in 27 regions and 82 availability zones. It also offers 146 edge locations.

All three cloud giants are continuously expanding. Both AWS and Azure operate data centers in China specifically to cater to Chinese customers, and Azure seems to have the broadest regional coverage of the three.

Comparison of common cloud services

Let’s look at the standard cloud services offered by these vendors.


Amazon’s primary compute offering is EC2, whose instances are very easy to operate. Amazon also provides a low-cost option called Amazon Lightsail, a perfect fit for those who are new to compute services and have a limited budget. AWS charges for EC2 instances only while they are running. Azure’s compute offering is likewise based on virtual machines, and Google similarly offers virtual machines in its data centers. Here’s a brief comparison of the compute offerings of all three vendors:


All three vendors offer various forms of storage, including object-based storage, cold storage, file-based storage, and block-based storage. Here’s a brief comparison of all three:


All three vendors support managed database services, including NoSQL and document-based databases. AWS also provides a proprietary RDBMS named Aurora, a fast, highly scalable database compatible with both MySQL and PostgreSQL. Here’s a brief comparison of all three vendors:

Comparison of Specialized services

All three major cloud providers are competing with each other in the latest technologies. Some notable areas of competition include ML/AI, robotics, DevOps, IoT, VR/Gaming, etc. Here are some of the key specialties of all three vendors.


Being first in the cloud market has many benefits, and Amazon has certainly taken advantage of that. Amazon has advanced specifically in AI and machine-learning tools. AWS DeepLens is an AI-enabled camera that you can use to develop and deploy machine-learning models; it helps with OCR and image recognition. Similarly, Amazon has launched an open-source library called Gluon, which helps with deep learning and neural networks; you can use it to build neural networks even without a deep technical background. Another service Amazon offers is SageMaker, which you can use to train and deploy your machine-learning models. Amazon also offers Lex, the conversational interface that powers Alexa, along with Lambda and the Greengrass IoT messaging service.

Another unique (and recent) offering from AWS is AWS IoT TwinMaker. This service can create digital twins of real-world systems like factories, buildings, production lines, etc.

AWS even provides a quantum computing service called Amazon Braket.


Azure excels where you are already using some Microsoft products, especially on-premises Microsoft products. Organizations already using Microsoft products prefer to use Azure instead of other cloud vendors because Azure offers a better and more robust integration with Microsoft products.

Azure has excellent services related to ML/AI and cognitive computing. Some notable services include the Bing Web Search API, Face API, Computer Vision API, and Text Analytics API.


Google is the current leader among cloud providers in AI, largely because of its open-source library TensorFlow, the most popular framework for developing machine-learning applications. Vertex AI and BigQuery Omni are also beneficial recent offerings. Similarly, Google offers rich services for NLP, translation, speech, etc.

Pros and Cons

Let’s summarize the pros and cons for all three cloud vendors:



AWS pros:

  • An extensive list of services
  • Huge market share
  • Support for large businesses
  • Global reach


AWS cons:

  • Pricing model: many companies struggle to understand the cost structure. Although AWS has improved the UX of its cost-related reporting in the AWS console, some companies still hesitate to use AWS because of a perceived lack of cost transparency



Azure pros:

  • Excellent integration with Microsoft tools and software
  • Broader feature set
  • Support for open source


Azure cons:

  • Geared towards enterprise customers



Google Cloud pros:

  • Strong integration with open source tools
  • Flexible contracts
  • Good DevOps services
  • The most cost-efficient
  • The preferred choice for startups
  • Good ML/AI-based services


Google Cloud cons:

  • A limited number of services as compared to AWS and Azure
  • Limited support for enterprise use cases

Career Prospects

Keen to learn which vendor’s cloud certification you should go for? Here is a brief comparison of the top three cloud certifications and their related career prospects:


As mentioned earlier, AWS has the largest market share of the cloud vendors. That means more companies are using AWS, and there are more vacancies in the market for AWS-certified professionals. Here are the main reasons why you would choose to learn AWS:


Azure is the second largest cloud service provider. It is ideal for companies that are already using Microsoft products. Here are the top reasons why you would choose to learn Azure:

  • Ideal for experienced users of Microsoft services
  • Azure certifications rank among the top paying IT certifications
  • If you’re applying for a company that primarily uses Microsoft Services


Although Google is considered an underdog in the cloud market, it is slowly catching up. Here’s why you may choose to learn GCP.

  • While there are fewer job postings, there is also less competition in the market
  • GCP certifications rank among the top paying IT certifications

Most valuable IT Certifications

Keen to learn about the top-paying cloud certifications and jobs? The annual salary figures below show the average salary for different cloud vendors and IT companies; no wonder AWS is on top. A GCP Cloud Architect is also in the top five, and the Azure architect comes in at #9.

Which cloud certification to choose depends mainly on your career goals and what type of organization you want to work for. No cloud certification path is better than the other. What matters most is getting started and making progress towards your career goals. Even if you decide at a later point in time to switch to a different cloud provider, you’ll still benefit from what you previously learned.

Over time, you may decide to get certified in all three – so you can provide solutions that vary from one cloud service provider to the next.

Don’t get stuck in analysis-paralysis! If in doubt, simply get started with AWS certifications that are the most sought-after in the market – especially if you are at the very beginning of your cloud journey. The good news is that you can become an AWS expert when enrolling in our value-packed training.

Further Reading

You may also be interested in the following articles:


Get it on Apple Books


Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Google Cloud Associate Engineer — $145,769/year
Azure/Microsoft Cloud Solution Architect — $141,748/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year

Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03

How do we know that the Top 3 Voice Recognition Devices like Siri Alexa and Ok Google are not spying on us?

DevOps Interviews Question and Answers and Scripts


Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google, and Microsoft).

Credit: Steve Nouri – Follow Steve Nouri for more AI and Data science posts:


What is a Canary Deployment?

A canary deployment, or canary release, allows you to roll out your features to only a subset of users as an initial test, to make sure nothing else in your system broke.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage – say, 10% – of your user base to the new version while keeping 90% of users on the old version. If that 10% reports no errors, you gradually roll it out to more users until everyone is on the new version. If the 10% has problems, though, you can roll it right back, and 90% of your users will never even have seen the problem.
Canary deployment benefits include zero downtime, easy rollout and quick rollback – plus the added safety from the gradual rollout process. It also has some drawbacks – the expense of maintaining multiple server instances, the difficult clone-or-don’t-clone database decision.
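As a rough sketch, the 90/10 split described above can be expressed with weighted load balancing. For example, in nginx (the hostnames and ports here are placeholders, not part of the original text):

```nginx
# Route ~90% of requests to the stable version and ~10% to the canary
upstream app {
    server stable.example.com:8080 weight=9;   # old version: 90% of traffic
    server canary.example.com:8080 weight=1;   # new version: 10% canary
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```

Raising the canary's `weight` gradually shifts more traffic to the new version; setting it back to 0 (or removing the server line) is the rollback.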

Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.



What is a Blue Green Deployment?

Reference: Blue Green Deployment

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your software, deployment and the final stage of testing take place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.

How do you do a software release?

There are some steps to follow.
• Create a check list
• Create a release branch
• Bump the version
• Merge release branch to master & tag it.
• Use a pull request to merge the release branch
• Deploy master to Prod Environment
• Merge back into develop & delete release branch
• Change log generation
• Communicating with stakeholders
• Grooming the issue tracker


How to automate the whole build and release process?

• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD
The above process will be done by Jenkins by creating the jobs.
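The steps above are what such Jenkins jobs automate. A minimal declarative Jenkinsfile sketch of that pipeline might look like the following (the Maven goals and the deploy.sh script are illustrative assumptions, not part of the original):

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout')          { steps { checkout scm } }
        stage('Compile')           { steps { sh 'mvn -B compile' } }
        stage('Unit Tests')        { steps { sh 'mvn -B test' } }
        stage('Package Installer') { steps { sh 'mvn -B package' } }
        stage('Deploy to QA')      { steps { sh './deploy.sh qa' } }     // hypothetical script
        stage('Deploy to Prod')    { steps { sh './deploy.sh prod' } }   // hypothetical script
    }
    post {
        always { junit '**/target/surefire-reports/*.xml' }  // report test results
    }
}
```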

Have you ever participated in prod deployments? If yes, what is the procedure?

• Preparation & Planning : What kind of system/technology was supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes were going to talk to each other in a foolproof manner
• Production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, & installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system

Devops Tools and Concepts

What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?

By the name DevOps, it's very clear that it's a collaboration of Development as well as Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools that helps automate the whole infrastructure.
DevOps is basically an implementation of Agile methodology on the Development side as well as the Operations side.


We need DevOps to deliver more, faster, and better applications to meet the ever-growing demands of users. DevOps helps deployments happen really fast compared to any traditional approach.

The key aspects or principles behind DevOps are:

  • Infrastructure as a Code
  • Continuous Integration
  • Continuous Deployment
  • Automation
  • Continuous Monitoring
  • Security

Popular tools for DevOps are:

  • Git
  • AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
  • Jenkins
  • Ansible
  • Puppet
  • Nagios
  • Docker
  • ELK (Elasticsearch, Logstash, Kibana)

Can we consider DevOps as Agile methodology?

Of course we can! The only difference between Agile methodology and DevOps is that Agile methodology is implemented only on the development side, while DevOps implements agility on both the development and the operations side.

What is the job of HTTP REST APIs in DevOps?

DevOps centers on automating your infrastructure and moving changes through a pipeline of stages: every CI/CD pipeline has stages like build, test, sanity test, UAT, and
deployment to the prod environment. Each stage uses different tools and presents a different technology stack, so there has to be a way to integrate the various tools into a complete toolchain. That is where HTTP APIs come in: each tool talks to the other tools over an API, and users can also use an SDK (for example, Boto for Python to call AWS APIs) for automation based on events. These days pipelines are mostly event-driven rather than batch processing.

What is Scrum?

Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration is typically two weeks long. Scrum consists of three roles: product owner, scrum master, and team.

What are Microservices, and how do they enable efficient DevOps practices?

In conventional architecture, each application is a monolith: it is developed by one group of developers and deployed as a single application on many machines, exposed to the outside world using load balancers. Microservices mean splitting your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. By splitting the application up, developers can also be formed into groups, and each piece of the application can follow different guidelines for an efficient development process; each service uses REST APIs (or message queues) to communicate with the other services.
So the build and release of one non-robust service does not affect the whole architecture; instead, only some functionality is lost. That is what enables efficient and faster CI/CD pipelines and DevOps practices.

What is Continuous Delivery?

Continuous Delivery is an extension of Continuous Integration that primarily serves to get the features developers are continuously building out to end users as soon as possible.
During this process, a change passes through several stages of QA, staging, etc. before delivery to the PRODUCTION system.

Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.


Devops Continuous Integration vs Continuous delivery

Why Automate?

Developers/administrators usually must provision their infrastructure manually. Rather than relying on manual steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.

What is Puppet?

Puppet is a configuration management tool used to automate administration tasks.

What is Configuration Management?

Configuration Management is a systems engineering process. Applied over the life cycle of a system, configuration management provides visibility and control of the system's performance and its functional and physical attributes, recording their status in support of Change Management.

Software Configuration Management Features are:

• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes

What are the Some Of the Most Popular Devops Tools ?

• Selenium
• Puppet
• Chef
• Git
• Jenkins
• Ansible

What is Vagrant and what are its uses?

Vagrant originally used VirtualBox as the hypervisor for its virtual environments, and it currently also supports KVM (Kernel-based Virtual Machine).
Vagrant is a tool for creating and managing environments for testing and developing software.

What’s a PTR in DNS?

A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.

What testing is necessary to ensure a new service is ready for production?

Continuous testing

What is Continuous Testing?

It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.

What are the key elements of continuous testing?


Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.

How does HTTP work?

The HTTP protocol works in a client-server model, like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.

What is IaC? How will you achieve it?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning that the DevOps team uses for source code. This is achieved by using tools such as Chef, Puppet, Ansible, CloudFormation, etc.

Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.

What are patterns and anti-patterns of software delivery and deployment?

What are Microservices?

Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps.

  • Small
  • Decoupled
  • Owned by self-contained teams

Version Control

What is a version control system?

A Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously
• Does not allow developers to overwrite each other's changes
• Maintains the history of every version
There are two types of Version Control Systems:
1. Centralized Version Control System, e.g. SVN
2. Distributed/Decentralized Version Control System, e.g. Git, Mercurial

What is Source Control?

An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.

Source control basics Whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code change, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can

• Collaborate on code with your team.

• Isolate your work until it is ready.

• Quickly troubleshoot issues by identifying who made changes and what the changes were.

Source code management systems help streamline the development process and provide a centralized source for all your code.

What is Git and explain the difference between Git and SVN?

Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used to store our repositories in remote server such as GitHub.

• Git is a decentralized (distributed) version control tool; SVN is a centralized version control tool.
• Git keeps a local repo with the full history of the whole project on every developer's hard drive, so if there is a server outage you can easily recover from a teammate's local Git repo; SVN relies only on the central server to store all versions of the project files.
• Git push and pull operations are fast; in SVN they are slower.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• Git client nodes can share entire repositories on their local systems; in SVN, version history is stored only in the server-side repository.
• Git commits can be done offline; SVN commits can only be done online.
• In Git, nothing is shared automatically until you push; in SVN, every commit is automatically shared via the central server.

Describe branching strategies?

Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.

Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.

Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have
progressed since the release was initiated earlier.
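The release-branching flow above can be sketched end to end with plain Git commands. The following is a self-contained demo in a throwaway repository (the branch and version names are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name Dev
git checkout -q -b master
echo app > app.txt && git add . && git commit -q -m "initial"
git checkout -q -b develop
echo feature > feature.txt && git add . && git commit -q -m "add feature"

git checkout -q -b release/1.2.0            # start the release cycle from develop
echo fix > fix.txt && git add . && git commit -q -m "release-only bug fix"

git checkout -q master
git merge -q --no-ff -m "Release 1.2.0" release/1.2.0
git tag -a v1.2.0 -m "Release 1.2.0"        # tag the release on master

git checkout -q develop
git merge -q --no-ff -m "Merge release back into develop" release/1.2.0
git branch -q -d release/1.2.0              # delete the finished release branch
git tag                                     # → v1.2.0
```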

What are Pull requests?

Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements.
Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.


What are the default file permissions for a file, and how can I modify them?

With the common default umask of 022, new files are created with the permissions rw-r--r--.
To change the defaults, use the umask command; for example, umask 077 makes new files readable and writable by the owner only.
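A quick way to see umask in action (run in a scratch directory; stat -c is the GNU coreutils form):

```shell
cd "$(mktemp -d)"
umask 022                 # common default: clears the group/other write bits
touch newfile
stat -c "%a" newfile      # → 644  (rw-r--r--)

umask 077                 # stricter: no access for group/other
touch privatefile
stat -c "%a" privatefile  # → 600  (rw-------)
```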

What is a kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.

What is difference between grep -i and grep -v?

-i makes the match case-insensitive; -v inverts the match, selecting the lines that do not contain the pattern.
Example:  ls | grep -i docker
ls | grep -v docker
With -v you won't see anything with the name docker.tar.gz
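A self-contained illustration of the two flags:

```shell
cd "$(mktemp -d)"
printf 'Docker\ndocker.tar.gz\nREADME\n' > files.txt
grep -i docker files.txt   # case-insensitive: matches both "Docker" and "docker.tar.gz"
grep -v docker files.txt   # inverted match: prints the lines without lowercase "docker"
```

Note that -v alone is still case-sensitive, so `grep -v docker` keeps the line "Docker".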

How can you allocate a particular amount of space to a file?

This is generally used to allocate swap space on a server. Let's say I have to create a 1 GB swap file on the machine below; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
(followed by mkswap /swapfile1 and swapon /swapfile1 to format and enable it)
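A safe, scaled-down sketch of the same idea (1 MiB instead of 1 GB so it runs anywhere; the mkswap/swapon steps need root and are left commented):

```shell
cd "$(mktemp -d)"
# Create a 1 MiB file of zeros (use bs=1G count=1 for a real 1 GB swap file)
dd if=/dev/zero of=swapfile1 bs=1M count=1 2>/dev/null
stat -c "%s" swapfile1      # → 1048576 bytes
# On a real server you would then format and enable it (requires root):
# mkswap /swapfile1 && swapon /swapfile1
```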

What is concept of sudo in linux?

Sudo(superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.

What are the checks to be done when a Linux build server become suddenly slow?

Perform a check on the following items:
1. System-Level Troubleshooting: Perform checks on disk space, RAM, and I/O read-write issues.
2. Application-Level Troubleshooting: Check items such as the application server log files, WebLogic logs, web server logs, and HTTP logs to find any issues in server receive or response times, and check for any memory leaks in applications.
3. Dependent-Services Troubleshooting: Check whether there are any issues with the network, antivirus, firewall, or SMTP server response times.


What is Jenkins?

Jenkins is an open-source continuous integration tool written in Java. It keeps track of the version control system and initiates and monitors a build if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.

What is the difference between Maven, Ant and Jenkins?

Maven and Ant are Build Technologies whereas Jenkins is a continuous integration(CI/CD) tool

What is continuous integration?

When multiple developers or teams are working on different segments of the same web application, we need to perform an integration test by integrating all the modules. To do that, an automated process for each piece of code is performed on a daily basis so that all of the code gets tested. This whole process is termed continuous integration.

Devops: Continuous Integration

Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

The microservices architecture is a design approach to build a single application as a set of small services.

What are the advantages of Jenkins?

• Bug tracking is easy at an early stage in the development environment.
• Provides support for a very large number of plugins.
• Iterative improvement to the code; code is basically divided into small sprints.
• Build failures are caught at the integration stage.
• For each code commit, an automatic build report notification gets generated.
• To notify developers about build report success or failure, it can be integrated with an LDAP mail server.
• Achieves continuous integration, agile development, and a test-driven development environment.
• With simple steps, a Maven release project can also be automated.

Which SCM tools does Jenkins supports?

Source code management tools supported by Jenkins are below:
• AccuRev
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase

I have 50 jobs in the Jenkins dashboard and I want to build all of them at once

Jenkins has a plugin option called "Build after other projects are built". We can provide job names there, and if the parent job runs, it will automatically run all the other jobs. Alternatively, we can use pipeline jobs.

How can I integrate all the tools with Jenkins?

Navigate to Manage Jenkins and then Global Tool Configuration, and provide all the details there, such as the Git URL, Java version, Maven version, paths, etc.

How to install Jenkins via Docker?

The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image & run it as a container in Docker using the
following docker run command:

• docker run -u root --rm -d -p 8080:8080 -p 50000:50000 -v jenkinsdata:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
• Proceed to the Post-installation setup wizard 
• Accessing the Jenkins/Blue Ocean Docker container:

docker exec -it jenkins-blueocean bash
• Accessing the Jenkins console log through Docker logs:

docker logs <docker-container-name>
• Accessing the Jenkins home directory:

docker exec -it <docker-container-name> bash

Bash – Shell scripting

Write a shell script to add two numbers

echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=$(expr $a + $b)
echo "$a + $b = $c"

How to get a file that consists of the last 10 lines of some other file?

tail -10 filename > newfile
(Redirect to a different file; redirecting to the same file would truncate it before tail reads it.)

How to check the exit status of the commands?

echo $?
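For example, $? holds the exit status of the most recently executed command (0 means success, non-zero means failure):

```shell
true
echo "after true: $?"             # → after true: 0
false || echo "after false: $?"   # → after false: 1
```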

How to find the lines in a file that contain the word "GangBoard"?

grep “GangBoard” filename

How to search for files with the name "GangBoard"?

find / -type f -name “*GangBoard*”

Write a shell script to print only prime numbers?

DevOps script to print prime numbers
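The script referenced above might look like the following Bash sketch (the limit of 20 is arbitrary):

```shell
# Print the prime numbers from 2 up to a given limit
primes() {
  local limit=$1 n i is_prime
  for ((n = 2; n <= limit; n++)); do
    is_prime=1
    # Trial division by every i with i*i <= n
    for ((i = 2; i * i <= n; i++)); do
      if ((n % i == 0)); then is_prime=0; break; fi
    done
    if ((is_prime)); then echo "$n"; fi
  done
}

primes 20   # prints 2 3 5 7 11 13 17 19, one per line
```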

How do you pass parameters to a script, and how can you get those parameters? ./script.sh parameter1 parameter2
Inside the script, $1 and $2 hold the individual parameters; use $* (or $@) to get all of the parameters at once.
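A small self-contained demo (the script name params.sh is arbitrary):

```shell
cd "$(mktemp -d)"
cat > params.sh <<'EOF'
#!/bin/bash
echo "first: $1"    # first positional parameter
echo "second: $2"   # second positional parameter
echo "count: $#"    # number of parameters
echo "all: $*"      # all parameters as one string
EOF
chmod +x params.sh
./params.sh hello world
```

Running it prints `first: hello`, `second: world`, `count: 2`, `all: hello world`.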

Monitoring – Refactoring

My application is not coming up for some reason? How can you bring it up?

We need to follow these steps:
• Check the network connection
• Check whether the web server is receiving users' requests
• Check the logs
• Check the process IDs to see whether the services are running or not
• Check whether the application server is receiving users' requests (check the application server logs and processes)
• Check whether a network-level 'connection reset' is happening somewhere

What is multifactor authentication? What is the use of it?

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.

• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform

I want to copy artifacts from one location to another location in the cloud. How?

Create two S3 buckets, one to use as the source, and the other to use as the destination and then create policies.

How to delete log files older than 10 days?

find -mtime +10 -name “*.log” -exec rm -f {} \; 2>/dev/null
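You can verify the behavior safely in a scratch directory by backdating a file (touch -d is the GNU form):

```shell
cd "$(mktemp -d)"
touch -d "20 days ago" old.log     # backdate a file so -mtime +10 matches it
touch fresh.log
find . -mtime +10 -name "*.log"    # → ./old.log  (fresh.log is not listed)
find . -mtime +10 -name "*.log" -exec rm -f {} \;
ls *.log                           # → fresh.log  (only the recent file remains)
```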


What are the Advantages of Ansible?

• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural

What’s the use of Ansible?

Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let's say we want to deploy one application to hundreds of nodes by just executing one command; then Ansible is what actually comes into the picture, though you should have some knowledge of Ansible scripts to understand or execute them.

What are the Pros and Cons of Ansible?

Pros:
1. Open source
2. Agentless
3. Improved efficiency, reduced cost
4. Less maintenance
5. Easy-to-understand YAML files

Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management

What is the difference among chef, puppet and ansible?

• Ansible: supports Windows nodes, but the control server should be Linux/Unix; configuration language is YAML (Ansible itself is written in Python); availability via a single active node.
• Chef: works only on Linux/Unix; configuration language is a Ruby DSL; availability via a primary server and a backup server.
• Puppet: works only on Linux/Unix; configuration language is the Puppet DSL; availability via a multi-master architecture.

How to access variable names in Ansible?

Using hostvars method we can access and add the variables like below

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
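For illustration, a minimal playbook task that prints this variable, assuming the interface is named eth0, could look like:

```yaml
- hosts: all
  tasks:
    - name: Print the node's eth0 IPv4 address (interface name is an assumption)
      debug:
        msg: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
```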


What is Docker?

Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.

What is Docker image?

A Docker image is the source of a Docker container; in other words, Docker images are used to create containers.

What is a Docker Container?

Docker Container is the running instance of Docker Image

How to stop and restart the Docker container?

To stop a container: docker stop <container-id>
To restart a container: docker restart <container-id>

What platforms does Docker run on?

Docker runs on the following Linux distributions and cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+

• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace

Note that Docker is not supported on Windows or Mac for production use; you can, however, use it there for testing purposes.

What are the tools used for docker networking?

For Docker networking we generally use Kubernetes and Docker Swarm.

What is docker compose?

Let's say you want to run multiple Docker containers. You create a Docker Compose file and type the command docker-compose up; it will run all the containers mentioned in the compose file.
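A minimal docker-compose.yml sketch (the image names and port mapping are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine        # front-end container
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:8             # back-end container
    environment:
      MYSQL_ROOT_PASSWORD: example
```

With this file in place, `docker-compose up` starts both containers, and `docker-compose down` stops and removes them.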

How to deploy docker container to aws?

Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services with it, we can launch applications.

What is the fundamental disadvantage of Docker containers?

Data inside a container lives only as long as the container: once a container is destroyed, you cannot recover any data inside it; that data is lost forever. However, persistent storage for data inside containers can be achieved by mounting volumes from an external source such as the host machine or an NFS driver.

What are Docker Engine and Docker Compose?

Docker Engine contacts the Docker daemon on the machine and creates the runtime environment and process for any container; Docker Compose links several containers together to form a stack, used to create application stacks like LAMP, WAMP, and XAMPP.

In which different modes can a container be run?

A Docker container can be run in two modes:
Attached: it runs in the foreground of the system you are on, and gives a terminal inside the container when the -t option is used with it; every log is redirected to the stdout screen.
Detached: this mode is typically used in production, where the container runs as a background process and all output inside the container is redirected to log files
under /var/lib/docker/containers/<container-id>/<container-id>-json.log, which can be viewed with the docker logs command.

What will the output of the docker inspect command be?

docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of host- or container-specific information such as the underlying file driver used and the log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID…] Options:
• --format, -f: Format the output using the given Go template
• --size, -s: Display total file sizes if the type is container
• --type: Return JSON for a specified type

What is Docker Swarm?

A group of virtual machines with Docker Engine can be clustered and maintained as a single system, with the resources shared by the containers; the Docker Swarm manager schedules a Docker container onto any of the machines in the cluster according to resource availability.
docker swarm init can be used to initiate a Docker Swarm cluster, and docker swarm join, run on a client with the manager's IP, joins that node into the swarm cluster.

What are Docker volumes, and what sort of volume should be used to achieve persistent storage?

Docker volumes are filesystem mount points created by the user for a container, and a volume can be shared by multiple containers. There are different sorts of volume mounts available: empty dir, host mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud persistent disks, and even NFS or CIFS filesystems. A volume should be mounted to one of these external stores to achieve persistent storage, because files inside a container live only while the container is present; if the container is deleted, the data is lost.

How do you version control Docker images?

Docker images can be version controlled using tags, where you can assign a tag to any image using the docker tag <image-id> command. If you push to a Docker registry without tagging, the default tag latest is assigned; even if an image tagged latest is already present, the new push reassigns latest to the newly pushed image.

What is difference between docker image and docker container?

A Docker image is a read-only template that contains the instructions for a container to start.
Docker container is a runnable instance of a docker image.

What is Application Containerization?

It is an OS-level virtualization technique used to deploy applications without launching an entire VM for each application; multiple isolated applications or services can access the same host and run on the same OS.

What is the syntax for building a docker image?

docker build -f <Dockerfile> -t imagename:version .

How do you run a docker image?

docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v
<hostvolume>:<containervolume> imagename:version

How to log into a container?

docker exec -it <container-id> /bin/bash


What does the commit object contain?

A commit object contains the following components:
• A set of files, representing the state of the project at a given point in time
• References to parent commit objects
• An SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the hash)

Explain the difference between git pull and git fetch?

The git pull command basically pulls any new changes or commits for a branch from your central repository and updates your target branch in your local repository.
Git fetch is used for the same purpose, but it is slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in the target branch, git fetch must be followed by a git merge; the target branch is only updated after merging it with the fetched branch. Just to make it easy to remember, think of the equation below:
git pull = git fetch + git merge

How do we know in Git if a branch has already been merged into master?

git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.

What is ‘Staging Area’ or ‘Index’ in GIT?

Before committing a file, it must be staged and reviewed in an intermediate area known as the 'Staging Area' or 'Index'. Files are placed there with git add <file>.

What is Git Stash?

Let’s say you’ve been working on part of your project, things are in a messy state and you want to switch branches for some time to work on something else. The problem is, you don’t want to do a commit of your half-done work just, so you can get back to this point later. The answer to this issue is Git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.

What is Git stash drop?

Git ‘stash drop’ command is basically used to remove the stashed item. It will basically remove the last added stash item by default, and it can also remove a specific item if you include it as an argument.
I have provided an example below:
If you want to remove any particular stash item from the list of stashed items you can use the below commands:
git stash list: It will display the list of stashed items as follows:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log

What is the function of ‘git config’?

Git uses your username to associate commits with an identity. The git config command can be used to change your Git configuration, including your username.
Suppose you want to set a username and email id to associate commits with an identity, so that you know who made each commit. For that, use:
git config --global user.name "Your Name": This command sets your username.
git config --global user.email "Your E-mail Address": This command sets your email id.

How can you create a repository in Git?

To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.

What language is used in Git?

Git is written in the C language, and since it is written in C it is very fast and reduces the overhead of runtimes.

What is SubGit?

SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository, so you can use both Subversion and Git for as long as you like.

How can you clone a Git repository via Jenkins?

First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.

What are the advantages of using Git?

1. Data redundancy and replication
2. High availability
3. Only one .git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used for any sort of project.

What is git add?

It adds the file changes to the staging area

What is git commit? 

Commits the changes from the staging area to the local repository, moving HEAD to the new commit

What is git push?

Sends the changes to the remote repository

What is git checkout?

Switch branch or restore working files

What is git branch?

Creates a branch

What is git fetch?

Fetch the latest history from the remote server and updates the local repo

What is git merge?

Joins two or more branches together

What is git pull?

Fetch from and integrate with another repository or a local branch (git fetch + git merge)

What is git rebase?

Process of moving or combining a sequence of commits to a new base commit

What is git revert?

To revert a commit that has already been published and made public

What is git clone?

Clones the git repository and creates a working copy in the local machine

How can I modify the commit message in git?

Use the following command and enter the required message:
git commit --amend

How do you handle merge conflicts in Git?

Follow the steps
1. Create Pull request
2. Modify according to the requirement by sitting with developers
3. Commit the correct file to the branch
4. Merge the current branch with master branch.

What is the Git command to send the modifications to the master branch of your remote repository?

Use the command “git push origin master”


What are the benefits of NoSQL database on RDBMS?

1. ETL overhead is very low
2. Support for semi-structured and unstructured text is provided
3. Schema changes over time are handled easily
4. Key-value access suits simple lookup functions
5. The ability to scale horizontally
6. Many data structures are provided
7. A choice of vendors is available


What is Maven?

Maven is a DevOps tool used for building Java applications which helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.


What is Numpy

There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing and contains a powerful n-dimensional array object. NumPy also provides tools to integrate with C, C++ and other languages. It is a package library for Python, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions: financial functions, linear algebra, statistics, polynomials, sorting and searching, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.

Why is python numpy better than lists?

Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.

Describe the map function in Python?

The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.
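A minimal sketch of map() in action, squaring each element of a hypothetical list:

```python
# map(func, iterable) applies func to every element of the iterable lazily;
# wrap it in list() to materialize the results
nums = [1, 2, 3, 4]
squares = list(map(lambda x: x * x, nums))
print(squares)  # [1, 4, 9, 16]
```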

How do you generate an array of 100 random numbers sampled from a standard normal distribution using NumPy?

>>> import numpy as np
>>> rand_nums = np.random.randn(100)

np.random.randn(100) creates 100 random numbers drawn from a standard normal distribution with mean 0 and standard deviation 1.

How to count the occurrence of each value in a numpy array?

Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
The argument to bincount() must consist of booleans or non-negative integers; negative integers are invalid.

Output: [4 1 1 1 2 3 0 0 0 2]

Does Numpy Support Nan?

nan, short for "not a number", is a special floating point value defined by the IEEE-754 specification. Python numpy supports nan, but the definition of nan is somewhat system dependent and some systems, such as older Cray and VAX machines, don't have all-round support for it.

What does ravel() function in numpy do? 

It flattens a multi-dimensional numpy array into a contiguous one-dimensional array, returning a view of the original data when possible
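A quick sketch of ravel() flattening a small 2-D array:

```python
import numpy as np

arr = np.array([[1, 2], [3, 4]])
flat = arr.ravel()  # flattens to 1-D; returns a view when possible
print(flat)  # [1 2 3 4]
```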

How to remove from one array those items that exist in another? 

>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From ‘a’ remove all of ‘b’
>>> np.setdiff1d(a,b)
# Output:
>>> array([5, 3, 2])

How to reverse a numpy array in the most efficient way?

>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]

How to calculate percentiles when using numpy?

>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)

Output:  37.400000000000006

What Is The Difference Between Numpy And Scipy?

NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.

What Is The Preferred Way To Check For An Empty (zero Element) Array?

For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size
0

What Is The Difference Between Matrices And Arrays?

Matrices can only be two-dimensional, whereas arrays can have any number of dimensions.

How can you find the indices of an array where a condition is true?

Given an array arr, the condition arr > 3 returns a boolean array (False is interpreted as 0 in Python and NumPy), and np.where or np.nonzero then returns the indices where the condition is True.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
array([[ True, True, True], [ True, True, True], [False, False, False]], dtype=bool)
>>> np.where(arr > 3)

How to find the maximum and minimum value of a given flattened array?

>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)

Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis. 

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.ptp(arr, 1)

Find median of a numpy flattened array

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.median(arr)

Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis

>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)

Calculate covariance matrix between two numpy arrays

>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)

Compute  product-moment correlation coefficients of two given numpy arrays

>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)

Develop a numpy program to compute the histogram of nums against the bins

>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)

Get the powers of an array values element-wise

>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)

Write a NumPy program to get true division of the element-wise array inputs

>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)


What is a series in pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.

What features make Pandas such a reliable option to store tabular data?

Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.

What is re-indexing in pandas?

Reindexing is used to conform DataFrame to a new index with optional filling logic. It places NA/NaN in that location where the values are not present in the previous index. It returns a new object unless the new index is produced as equivalent to the current one, and the value of copy becomes False. It is used to change the index of the rows and columns of the DataFrame.
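A minimal sketch of reindexing, using a made-up labeled DataFrame; the new label 'd' is not in the original index, so it is filled with NaN:

```python
import pandas as pd

# Hypothetical frame indexed by label
df = pd.DataFrame({'val': [10, 20, 30]}, index=['a', 'b', 'c'])

# Conform to a new index: existing labels keep their values,
# missing labels are filled with NaN
df2 = df.reindex(['b', 'a', 'c', 'd'])
print(df2)
```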

How will you create a series from dict in Pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. To create a Series from a dict:

import pandas as pd
info = {'x' : 0., 'y' : 1., 'z' : 2.}
a = pd.Series(info)

How can we create a copy of the series in Pandas?

Use the pandas.Series.copy method:
import pandas as pd
ds = pd.Series([2, 4, 6])
ds_copy = ds.copy(deep=True)


What is groupby in Pandas?

GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters, and it makes the task of splitting the data quick and easy.
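A minimal sketch of the split-apply-combine idea, using a made-up DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'team': ['A', 'A', 'B'], 'points': [3, 1, 2]})

# Split rows into groups by 'team', then aggregate each group
totals = df.groupby('team')['points'].sum()
print(totals)  # A -> 4, B -> 2
```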

What is vectorization in Pandas?

Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.

Different types of Data Structures in Pandas

Pandas provide two data structures, which are supported by the pandas library, Series,
and DataFrames. Both of these data structures are built on top of the NumPy.

What Is Time Series In pandas

A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.

How to convert pandas dataframe to numpy array?

The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array and the copy ensures the
returned value is not a view on another array.

Write a Pandas program to get the first 5 rows of a given DataFrame

>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data, index=labels)
>>> df.iloc[:5]

Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data. 

>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])

Write a Python program to convert a Panda module Series to Python list and it’s type. 

>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())

Develop a Pandas program to add, subtract, multiple and divide two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2

Develop a Pandas program to compare the elements of the two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2

Develop a Pandas program to change the data type of given a column or a Series.

>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2

Write a Pandas program to convert Series of lists to one Series

>>> import pandas as pd
>>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)

Write a Pandas program to create a subset of a given series based on value and condition

>>> import pandas as pd
>>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s

Develop a Pandas code to alter the order of index in a given series

>>> import pandas as pd
>>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index = ['B', 'A', 'C', 'D', 'E'])

Write a Pandas code to get the items of a given series not present in another given series.

>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result

What is the difference between the two data series df['Name'] and df.loc[:, 'Name']?

The first one is a view of the original dataframe and the second one is a copy of the original dataframe.

Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.

>>> import pandas as pd
>>> import numpy as np
>>> np.random.seed(100)
>>> num_series = pd.Series(np.random.randint(1, 5, [15]))
>>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'
>>> result = num_series

Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.

>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5==0)

How will you add a column to a pandas DataFrame?

# importing the pandas library
>>> import pandas as pd
>>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20, 40, 60], index=['a', 'b', 'c'])

How to iterate over a Pandas DataFrame?

You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.
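A minimal sketch of iterrows(), using a made-up DataFrame; each iteration yields an (index, row) pair where the row is a Series:

```python
import pandas as pd

df = pd.DataFrame({'name': ['Anna', 'Ben'], 'score': [90, 85]})

# iterrows() yields (index, row) pairs; row is a Series keyed by column name
names = []
for idx, row in df.iterrows():
    print(idx, row['name'], row['score'])
    names.append(row['name'])
```

Note that iterrows() is convenient but slow on large frames; prefer vectorized operations where possible.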


What type of language is python? Programming or scripting?

Python is capable of scripting, but in general sense, it is considered as a general-purpose
programming language.

Is python case sensitive?

Yes, python is a case sensitive language.

What is a lambda function in python?

An anonymous function is known as a lambda function. This function can have any
number of parameters but can have just one statement.

What is the difference between range and xrange in python?

range and xrange are exactly the same in terms of functionality. The only difference is that
range returns a Python list object and xrange returns an xrange object. (This applies to Python 2; in Python 3, xrange was removed and range behaves like xrange.)

What are docstrings in python?

Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.

Whenever Python exits, why isn’t all the memory deallocated?

Whenever Python exits, especially those Python modules which are having circular
references to other objects or the objects that are referenced from the global namespaces are not always de-allocated or freed. It is impossible to de-allocate those portions of
memory that are reserved by the C library. On exit, because of having its own efficient
clean up mechanism, Python would try to de-allocate/destroy every other object.

What does this mean: *args, **kwargs? And why would we use it?

We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
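A minimal sketch of both, using a hypothetical summarize function:

```python
def summarize(*args, **kwargs):
    # args collects extra positional arguments into a tuple;
    # kwargs collects extra keyword arguments into a dict
    total = sum(args)
    labels = ', '.join(f"{k}={v}" for k, v in kwargs.items())
    return total, labels

total, labels = summarize(1, 2, 3, unit='points', source='quiz')
print(total)   # 6
print(labels)  # unit=points, source=quiz
```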

What is the difference between deep and shallow copy?

Shallow copy is used when a new instance type gets created and it keeps the values that are copied in the new instance.
Shallow copy is used to copy the reference pointers just like it copies the values.
Deep copy is used to store the values that are already copied. Deep copy doesn’t copy the reference pointers to the objects. It makes the reference to an object and the new object that is pointed by some other object gets stored.
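The distinction is easiest to see with a nested list and the standard copy module:

```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)    # copies the outer list, shares the inner lists
deep = copy.deepcopy(original)   # recursively copies everything

original[0][0] = 99
print(shallow[0][0])  # 99 -- inner list is shared with the original
print(deep[0][0])     # 1  -- fully independent copy
```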

Define encapsulation in Python?

Encapsulation means binding the code and the data together. A Python class is an
example of encapsulation.

Does python make use of access specifiers?

Python does not deprive access to an instance variable or function. Python lays down the concept of prefixing the name of the variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.

What are the generators in Python?

Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.
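A minimal generator sketch; yield suspends the function's state between calls instead of computing all values up front:

```python
def countdown(n):
    # 'yield' makes this a generator function: each call to next()
    # resumes execution here until the loop ends
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```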

Write a Python script to Python to find palindrome of a sequence

a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")

How will you remove the duplicate elements from the given list?

The set is another type available in Python. It doesn't allow duplicates and provides some
useful functions to perform set operations like union, difference, etc.
>>> list(set(a))

Does Python allow arguments Pass by Value or Pass by Reference?

Neither the arguments are Pass by Value nor does Python supports Pass by reference.
Instead, they are Pass by assignment. The parameter which you pass is originally a reference to the object not the reference to a fixed memory location. But the reference is
passed by value. Additionally, some data types like strings and tuples are immutable whereas others are mutable.

What is slicing in Python?

Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.

Why is the “pass” keyword used in Python?

The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.

What are decorators in Python?

Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion
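A minimal decorator sketch, using a made-up shout decorator that upper-cases a function's return value without touching the function itself:

```python
def shout(func):
    # The decorator wraps func and post-processes its return value
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

print(greet("git"))  # HELLO, GIT
```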

What is the key difference between lists and tuples in python?

The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.

What is self in Python?

Self is a keyword in Python used to define an instance or an object of a class. In Python, it is explicitly used as the first parameter, unlike in Java where it is optional. It helps in distinguishing between the methods and attributes of a class from its local variables.

What is PYTHONPATH in Python?

PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.

What is the difference between .py and .pyc files?

.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.

What is namespace in Python?

In Python, every name introduced has a place where it lives and can be hooked for. This is known as namespace. It is like a box where a variable name is mapped to the object placed. Whenever the variable is searched out, this box will be searched, to get the corresponding object.

What is pickling and unpickling?

Pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function, this process is called pickling. While the process of retrieving original Python objects from the stored string representation is called unpickling.
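A minimal round-trip sketch using pickle.dumps/loads (in-memory bytes rather than a file, to keep the example self-contained):

```python
import pickle

data = {'answer': 42, 'tags': ['a', 'b']}

# Pickling: serialize the object to a bytes representation
blob = pickle.dumps(data)

# Unpickling: reconstruct the original Python object from the bytes
restored = pickle.loads(blob)
print(restored == data)  # True
```

With a file, pickle.dump(data, f) and pickle.load(f) do the same thing.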

How is Python interpreted?

Python language is an interpreted language. The Python program runs directly from the source code. It converts the source code that is written by the programmer into an intermediate language, which is again translated into machine language that has to be executed.

Jupyter Notebook

What is the main use of a Jupyter notebook?

Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.

How do I increase the cell width of the Jupyter/ipython notebook in my browser?

>>> from IPython.core.display import display, HTML
>>> display(HTML("<style>.container { width:100% !important; }</style>"))

How do I convert an IPython Notebook into a Python file via command line?

jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb

How to measure execution time in a jupyter notebook?

%%time is an inbuilt magic command

How to run a jupyter notebook from the command line?

jupyter nbconvert --to notebook --execute nb.ipynb

How to make inline plots larger in jupyter notebooks?

Use figure size.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')

How to display multiple images in a jupyter notebook?

>>> from IPython.display import Image, display
>>> for ima in images:
...     display(Image(filename=ima))

Why is the Jupyter notebook interactive code and data exploration friendly?

The ipywidgets package provides many common user interface controls for exploring code and data interactively.

What is the default formatting option in jupyter notebook?

Default formatting option is markdown

What are kernel wrappers in jupyter?

Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.

What are the advantages of custom magic commands?

Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.

Is the jupyter architecture language dependent?

No. It is language independent

Which tools allow jupyter notebooks to easily convert to pdf and html?

Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.

What is a major disadvantage of a Jupyter notebook?

It is very hard to run long asynchronous tasks. Less Secure.

In which domain is the jupyter notebook widely used?

It is mainly used for data analysis and machine learning related tasks.

What are alternatives to jupyter notebook?

PyCharm interact, VS Code Python Interactive etc.

Where can you make configuration changes to the jupyter notebook?

In the config file located at ~/.ipython/profile_default/

Which magic command is used to run python code from jupyter notebook?

%run can execute python code from .py files

How to pass variables across the notebooks in Jupyter?

The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to different notebook'
>>> %store data
# Stored 'data' (str)
# In new notebook
>>> %store -r data
>>> print(data)

Export the contents of a cell/Show the contents of an external script

Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.

What inbuilt tool we use for debugging python code in a jupyter notebook?

Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.

How to make high resolution plots in a jupyter notebook?

>>> %config InlineBackend.figure_format = 'retina'

How can one use latex in a jupyter notebook?

When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.

What is a jupyter lab?

It is a next generation user interface for conventional jupyter notebooks. Users can drag and drop cells, arrange code workspace and live previews. It’s still in the early stage of development.

What is the biggest limitation for a Jupyter notebook?

Code versioning, management and debugging is not scalable in current jupyter notebook

Cloud Computing


Which are the different layers that define cloud architecture?

Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus

Explain Cloud Service Models?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (Daas)

What are Hybrid clouds?

Hybrid clouds are made up of both public clouds and private clouds. It is preferred over both the clouds because it applies the most robust approach to implement cloud architecture.
The hybrid cloud has the features and performance of both private and public cloud. It has an important feature whereby the cloud can be created by one organization and the control of it can be given to some other organization.

Explain Platform as a Service (Paas)?

It is also a layer in cloud architecture. Platform as a Service is responsible to provide complete virtualization of the infrastructure layer, make it look like a single server and invisible for the outside world.

What is the difference in cloud computing and Mobile Cloud computing?

Mobile cloud computing uses the same concept as cloud computing; cloud computing becomes mobile cloud computing when it is accessed from a mobile device, and most tasks can be performed from the mobile. The applications run on a remote server, and the user is given rights to access and manage storage from the device.

What are the security aspects provided with the cloud?

There are 3 types of Cloud Computing Security:
● Identity Management: It authorizes the application services.
● Access Control: The user needs permission so that they can control the access of another user who is entering the cloud environment.
● Authentication and Authorization: Allows only authorized and authenticated users to access the data and applications.

What are system integrators in cloud computing?

System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly.
A person or a company which specializes in system integration is called a system integrator.

What is the usage of utility computing?

Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges them for specific usage rather than a flat rate
Utility computing is a plug-in managed by an organization which decides what type of services has to be deployed from the cloud. It facilitates users to pay only for what they use.

What are some large cloud providers and databases?

Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL

Explain the difference between cloud and traditional data centers.

In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software issues. So, not only is the initial cost higher, but the maintenance cost is also a problem.
Cloud being scaled when there is an increase in demand. Mostly the expenditure is on the maintenance of the data centers, while these issues are not faced in cloud computing.

What is hypervisor in Cloud Computing?

A hypervisor is a virtual machine monitor that can logically manage resources for virtual machines. It allocates, partitions, and isolates resources for the guests it virtualizes.
A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.

Define what MultiCloud is?

Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.

What is a multi-cloud strategy?

The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.

What is meant by Edge Computing, and how is it related to the cloud?

Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.

What are the disadvantages of the SaaS cloud computing layer?

1) Security
Because data is stored in the cloud, security may be a concern for some users; a cloud deployment is not inherently more secure than an in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end user, interacting with the application may incur greater latency than a local deployment would. The SaaS model is therefore not suitable for applications that demand response times in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the new SaaS application.

What is IaaS in Cloud Computing?

IaaS (Infrastructure as a Service), also known as Hardware as a Service, is a model in which organizations are provided IT infrastructure such as servers, processing, storage, virtual machines, and other resources. Customers can access these resources easily over the internet using an on-demand, pay-as-you-go model.

Explain what is the use of “EUCALYPTUS” in cloud computing?

EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform; with its help, public, private, and hybrid clouds can be built. An organization can turn its own data centers into a private cloud and make that functionality available to other organizations.
When you add a software stack, such as an operating system and applications, to the service, the model shifts to the SaaS (Software as a Service) model; Microsoft's Windows Azure Platform, for example, is often presented as using a SaaS model.

Name the most refined and restrictive service model.

The most refined and restrictive service model is PaaS. When the service requires the consumer to use a complete hardware/software/application stack, it is using the most refined and restrictive service model.

Name the kinds of virtualization that are also characteristics of cloud computing.

Storage, Application, and CPU virtualization. To support these characteristics, resources must be highly configurable and flexible.

What Are Main Features Of Cloud Services?

Some important features of the cloud service are given as follows:
• Accessing and managing the commercial software.
• Centralizing the activities of management of software in the Web environment.
• Developing applications that are capable of managing several clients.
• Centralizing the updating feature of software, which eliminates the need to download upgrades.

What Are The Advantages Of Cloud Services?

Some of the advantages of cloud service are given as follows:
• Helps in the utilization of investment in the corporate sector and is therefore cost saving.
• Helps in developing scalable and robust applications. Previously, scaling took months, but now it takes far less time.
• Helps in saving time in terms of deployment and maintenance.

Mention The Basic Components Of A Server Computer In Cloud Computing?

The hardware components of a server computer in cloud computing match those used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, and power supply.

What are the advantages of auto-scaling?

Following are the advantages of autoscaling:
● Offers fault tolerance
● Better availability
● Better cost management
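The decision loop behind those advantages can be sketched in a few lines of Python. This is a hypothetical, simplified threshold model (the function name and default thresholds are illustrative, not any provider's actual API):

```python
def autoscale_decision(cpu_percent, instances, scale_out_at=70, scale_in_at=30,
                       min_instances=2, max_instances=10):
    """Return the desired instance count for a simple threshold-based autoscaler.

    Scaling out on high CPU keeps the service available under load; scaling
    in on low CPU improves cost management; keeping at least `min_instances`
    running provides fault tolerance.
    """
    if cpu_percent > scale_out_at and instances < max_instances:
        return instances + 1   # scale out under load
    if cpu_percent < scale_in_at and instances > min_instances:
        return instances - 1   # scale in to save cost
    return instances           # hold steady
```

Real autoscalers add cooldown periods and look at several metrics, but the core idea is this threshold comparison.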


Azure Cloud

Azure Administrator AZ104 Certification Exam Prep
#Azure #AZ104 #AzureAdmnistrator #AzureDevOps #AzureAdmin #AzureTraining #AzureSysAdmin #AzureCloud #LearnAzure

Which Services Are Provided By Window Azure Operating System?

Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management

Which service in Azure is used to manage resources in Azure?

Azure Resource Manager is used to manage infrastructure that involves a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON template.
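For illustration, a minimal ARM template skeleton looks like the following sketch; the storage-account resource and the parameter name are placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Deploying or deleting this template as a unit is what lets Resource Manager treat the listed resources as one deployment.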

Which web applications can be deployed with Azure?

Microsoft has also released SDKs for both Java and Ruby, allowing applications written in those languages to place calls to the Azure Services Platform API and the AppFabric Service.

What are Roles in Azure and why do we use them?

Roles are, in layman's terms, nothing but servers. These servers are managed, load-balanced, Platform as a Service virtual machines that work together to achieve a common goal.
There are 3 types of roles in Microsoft Azure:
● Web Role
● Worker Role
● VM Role
Let’s discuss each of these roles in detail:
Web Role – A web role is basically used to deploy a website, using languages supported by the IIS platform such as PHP and .NET. It is configured and customized to run web applications.
Worker Role – A worker role is more like a helper to the web role; it is used to execute background processes, unlike the web role, which is used to deploy the website.
VM Role – The VM role is used by a user to schedule tasks and other Windows services.
This role can be used to customize the machines on which the web and worker roles run.

What is Azure as PaaS?

PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.

What are Break-fix issues in Microsoft Azure?

In Microsoft Azure, all technical problems are called break-fix issues. The term is used when “work is involved” in supporting a technology that fails in the normal course of its function.

Explain Diagnostics in Windows Azure

Windows Azure Diagnostics offers the facility to store diagnostic data. In Azure, some diagnostic data is stored in tables, while some is stored in blobs. The diagnostic monitor runs in Windows Azure as well as in the compute emulator to collect data for a role instance.

State the difference between verbose and minimal monitoring.

Verbose monitoring collects metrics based on performance and allows close analysis of the data fed in while the application runs.
Minimal monitoring, on the other hand, is the default configuration; it makes use of performance counters gathered from the operating system of the host.

What is the main difference between the repository and the powerhouse server?

The main difference between them is that repository servers maintain the integrity, consistency, and uniformity of data, while the powerhouse server governs the integration of different aspects of the database repository.

Explain command task in Microsoft Azure

A command task is an operational window that sets off the flow of one or more commands while the system is running.

What is the difference between Azure Service Bus Queues and Storage Queues?

Two types of queue mechanisms are supported by Azure: Storage queues and Service Bus queues.
Storage queues: These are part of the Azure storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface. They provide persistent and reliable messaging within and between services.
Service Bus queues: These are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.

Explain Azure Service Fabric.

Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment and management of highly scalable and customizable applications.
The applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.

Define the Azure Redis Cache.

Azure Redis Cache is an in-memory cache based on open-source Redis that helps web applications fetch data from a backend data source into the cache and serve web pages from the cache, enhancing application performance. It provides a powerful and secure way to cache the application’s data in the Azure cloud.

How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?

TWO. And if we do so, the role would have external connectivity at least 99.95% of the time.
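The reasoning behind requiring two instances can be sketched with basic probability: assuming instances fail independently, the role is unreachable only when every instance is down at once. (The numbers below are illustrative; the 99.95% SLA figure itself is contractual, not derived from this formula.)

```python
def combined_availability(instance_availability, count):
    """Availability of a role served by `count` independent instances:
    the role is down only if every instance is down simultaneously."""
    downtime = (1 - instance_availability) ** count
    return 1 - downtime

# Illustrative only: if a single instance were up 99% of the time,
# two independent instances would be jointly up 99.99% of the time.
two_instances = combined_availability(0.99, 2)
```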

What are the options to manage session state in Windows Azure?

● Windows Azure Caching
● SQL Azure
● Azure Table

What is cspack?

It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.

What is csrun?

It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.

How to design applications to handle connection failure in Windows Azure?

The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
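The three retry-delay strategies described above can be sketched as follows; this is a simplified stand-in for the Transient Fault Handling Application Block, not its actual API:

```python
import random

def retry_delays(strategy, base=1.0, increment=1.0, max_retries=5):
    """Generate retry delay intervals (seconds) for the three standard
    strategies: fixed interval, incremental interval, exponential back-off."""
    delays = []
    for attempt in range(max_retries):
        if strategy == "fixed":
            delays.append(base)
        elif strategy == "incremental":
            # interval grows by a standard amount each attempt
            delays.append(base + attempt * increment)
        elif strategy == "exponential":
            # interval doubles each attempt, with some random variation
            delays.append(base * (2 ** attempt) * (1 + random.uniform(-0.2, 0.2)))
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return delays
```

The jitter on the exponential strategy mirrors the "some random variation" the block applies so that many clients retrying at once do not all hit the service at the same instant.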

What is Windows Azure Diagnostics?

Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.

What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?

Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports queuing (including dead-lettering) as well as publish/subscribe, Web service remoting, and integration patterns.

What is the use of Azure Active Directory?

Azure Active Directory is an identity and access management system, much like an on-premises Active Directory. It allows you to grant your employees access to specific products and services within the network.

Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?

This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.

What are virtual machine scale sets in Azure?

Virtual machine scale sets are Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required. So it’s easier to build large-scale services that target big compute, big data, and containerized workloads.

Are data disks supported within scale sets?

Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include:
● Azure files (SMB shared drives)
● OS drive
● Temp drive (local, not backed by Azure Storage)
● Azure data service (for example, Azure tables, Azure blobs)
● External data service (for example, remote database)

What is the difference between the Windows Azure Platform and Windows Azure?

The former is Microsoft’s PaaS offering including Windows Azure, SQL Azure, and AppFabric; while the latter is part of the offering and Microsoft’s cloud OS.

What are the three main components of the Windows Azure Platform?

Compute, Storage and AppFabric.

Can you move a resource from one group to another?

Yes, you can. A resource can be moved among resource groups.

How many resource groups a subscription can have?

A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.

Explain the fault domain.

This is one of the common Azure interview questions. A fault domain is a logical group in which the underlying hardware shares a common power source and network switch. When VMs are created, Azure distributes them across fault domains, which limits the potential impact of a hardware failure, power interruption, or network outage.
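The distribution idea can be illustrated with a tiny round-robin placement sketch (a hypothetical function, not Azure's actual placement algorithm):

```python
def place_vms(vm_names, fault_domains=3):
    """Spread VMs across fault domains so a single hardware failure
    affects at most one domain's worth of VMs."""
    placement = {fd: [] for fd in range(fault_domains)}
    for i, vm in enumerate(vm_names):
        placement[i % fault_domains].append(vm)  # round-robin assignment
    return placement
```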

Differentiate between the repository and the powerhouse server?

Repository servers maintain the integrity, consistency, and uniformity of data, whereas the powerhouse server governs the integration of different aspects of the database repository.

Azure Fundamentals AZ900 Certification Exam Prep
#Azure #AzureFundamentals #AZ900 #AzureTraining #LeranAzure #Djamgatech

AWS Cloud

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Explain what S3 is?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is “pay as you go.”

What is AMI?

AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.

Mention what the relationship between an instance and AMI is?

From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once launched, an instance looks like a traditional host, and we can interact with it as we would with any computer.

How many buckets can you create in AWS by default?

By default, you can create up to 100 buckets in each of your AWS accounts.

Explain can you vertically scale an Amazon instance? How?

Yes, you can vertically scale an Amazon instance. To do so:
● Spin up a new, larger instance than the one you are currently running
● Pause that instance and detach the root EBS volume from the server and discard it
● Then stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● And start it again

Explain what T2 instances are.

T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.

In VPC with private and public subnets, database servers should ideally be launched into which subnet?

With private and public subnets in VPC, database servers should ideally launch into private subnets.

Mention what the security best practices for Amazon EC2 are?

Follow these Amazon EC2 security best practices:
● Use AWS Identity and Access Management (IAM) to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up the permissions that you require
● Disable password-based logins for instances launched from your AMI

Is the property of broadcast or multicast supported by Amazon VPC?

No, currently Amazon VPC does not provide support for broadcast or multicast.

How many Elastic IPs does AWS allow you to create?

Five VPC Elastic IP addresses are allowed per AWS account, per region.

Explain default storage class in S3

The default storage class is S3 Standard, intended for frequently accessed data.

What are the Roles in AWS?

Roles are used to provide permissions to entities that you trust within your AWS account.
Roles are very similar to users; however, with roles you do not need to create a username and password to work with the resources.

What are the edge locations?

An edge location is a site where content is cached. When a user tries to access content, it is automatically served from the nearest edge location.

Explain snowball?

Snowball is a data transport option. It uses secure physical appliances to move large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another, which helps you reduce networking costs.

What is a redshift?

Redshift is a big data warehouse product: a fast, powerful, fully managed data warehouse service in the cloud.

What is meant by subnet?

A subnet is one of the smaller chunks that a large range of IP addresses is divided into.
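Python's standard ipaddress module can illustrate subnetting, for example carving a /16 network into four /18 subnets:

```python
import ipaddress

# Divide a /16 address range into four /18 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=18))

for s in subnets:
    print(s, "-", s.num_addresses, "addresses")
```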

Can you establish a Peering connection to a VPC in a different region?

Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.

What is SQS?

SQS stands for Simple Queue Service. It is a distributed queuing service that acts as a mediator between two endpoints, decoupling producers from consumers.
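The mediator role a queue plays can be sketched in-process with Python's queue module: the producer and consumer never talk to each other directly, only to the queue. SQS provides the same pattern across distributed services rather than threads:

```python
import queue
import threading

q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put(f"message-{i}")
    q.put(None)  # sentinel: no more messages

def consumer():
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```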

How many subnets can you have per VPC?

You can have 200 subnets per VPC.

What is Amazon EMR?

Amazon EMR is a managed cluster platform that simplifies running big data frameworks such as Apache Hadoop and Apache Spark on AWS to process and analyze large amounts of data. You can prepare data for analytics and marketing-intelligence workloads using Apache Hive and other relevant open-source tools.

What is boot time taken for the instance stored backed AMI?

The boot time for an Amazon instance store-backed AMI is less than 5 minutes.

Do you need an internet gateway to use peering connections?

No, an internet gateway is not needed for VPC (virtual private cloud) peering connections; peering traffic stays within the AWS network.

How to connect an EBS volume to multiple instances?

You cannot attach an EBS volume to multiple instances; however, you can attach multiple EBS volumes to a single instance.

What are the different types of Load Balancer in AWS services?

Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer

In which situation you will select provisioned IOPS over standard RDS storage?

You should select provisioned IOPS storage over standard RDS storage for I/O-intensive workloads, such as high-throughput batch processing, that need fast and consistent I/O performance.

What are the important features of Amazon cloud search?

Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Full text search
● Autocomplete suggestions

What is AWS CDK?

AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
• Create and provision AWS infrastructure deployments predictably and repeatedly.
• Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling.
• Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
• Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.

What are best practices for controlling access to AWS CodeCommit?

– Create your own policy
– Provide temporary access credentials to access your repo
* Typically done via a separate AWS account for IAM and separate accounts for dev/staging/prod
* Federated access
* Multi-factor authentication

What is AWS CodeBuild?

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.

How does AWS CodeBuild work?

1- Provide AWS CodeBuild with a build project. A build project file contains information about where to get the source code, the build environment, and how to build the code. The most important component is the BuildSpec file.
2- AWS CodeBuild creates the build environment. A build environment is a combination of OS, programming language runtime, and other tools needed to build.
3- AWS CodeBuild downloads the source code into the build environment and uses the BuildSpec file to run a build. This code can be from any source provider; for example, GitHub repository, Amazon S3 input bucket, Bitbucket repository, or AWS CodeCommit repository.
4- Build artifacts produced are uploaded into an Amazon S3 bucket.
5- The build environment sends a notification about the build status.
6- While the build is running, the build environment sends information to Amazon CloudWatch Logs.
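The BuildSpec file mentioned in step 1 is a YAML file, typically named buildspec.yml at the root of the repository. A minimal sketch might look like this (the runtime, commands, and artifact pattern are placeholders for your own project):

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.11
  build:
    commands:
      - pip install -r requirements.txt
      - pytest tests/
artifacts:
  files:
    - '**/*'
```

CodeBuild runs the phases in order and uploads whatever matches the artifacts section to the S3 bucket named in the build project.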

What is AWS CodeDeploy?

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.

With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.

You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.

You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.

AWS CodeDeploy deploys updates in revisions, so if there is an issue during deployment, you can easily roll back and deploy a previous revision.

What is AWS CodeCommit?

AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.

What is AWS Opswork?

AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.

Chef and Puppet enable you to use code to automate your configurations.

AWS OpsWorks for Puppet Enterprise AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet, for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up your server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure and gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.

AWS OpsWorks for Chef Automate Offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes.

AWS OpsWorks Stacks: With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages and languages or frameworks, and configuring software

AWS Developer Associate DVA-C01 Exam Prep

Google Cloud Platform

GCP Associate Cloud Engineer Exam Prep

What are the main advantages of using Google Cloud Platform?

Google Cloud Platform gives its users access to best-in-class cloud services and features. It is gaining popularity among cloud professionals as well as users for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others:
● GCP offers much better pricing deals compared to other cloud service providers
● Google Cloud servers allow you to work from anywhere and have access to your information and data
● For hosting cloud services, GCP delivers improved overall performance
● Google Cloud is very fast in providing updates about servers and security in a better and more efficient manner
● The security level of Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with various security measures
If you are going for a Google Cloud interview, you should prepare yourself with thorough knowledge of the Google Cloud Platform.

Why should you opt to Google Cloud Hosting?

The reason for opting for Google Cloud Hosting is the advantages it offers:
● Availability of better pricing plans
● Benefits of live migration of the machines
● Enhanced performance and execution
● Commitment to Constant development and expansion
● The private network provides efficiency and maximum uptime
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability

What are the libraries and tools for cloud storage on GCP?

At the core level, the XML API and JSON API are available for cloud storage on Google Cloud Platform. Along with these, Google provides the following options for interacting with cloud storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage Client Libraries, which provide programming support for various languages including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface to cloud storage

There are also many third-party libraries and tools, such as the Boto library.

What do you know about Google Compute Engine?

Google Compute Engine is a basic component of the Google Cloud Platform.
It is an IaaS product that offers self-managed and flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines that run on KVM, along with local and durable storage options.
It also includes a REST-based API for control and configuration. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery to extend its computational ability and enable more sophisticated and complex applications.

How are the Google Compute Engine and Google App Engine related?

Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is Google's IaaS product, whereas Google App Engine is its PaaS product.
Google App Engine is generally used to run web applications, mobile backends, and line-of-business applications. If you want to keep the underlying infrastructure more under your control, Compute Engine is the better choice: for instance, you can use Compute Engine to implement customized business logic or to run your own storage system.

How does the pricing model work in GCP cloud?

On Google Cloud Platform, Google Compute Engine charges the user on the basis of compute instances, network use, and storage. Virtual machines are billed per second, with a one-minute minimum. Storage is charged based on the amount of data you store.
Network cost is calculated from the amount of data transferred between virtual machine instances communicating with each other over the network.
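The per-second billing with a one-minute minimum can be expressed as a small helper (the per-second rate below is a placeholder, not an actual Google price):

```python
def billed_vm_seconds(runtime_seconds):
    """Per-second billing with a one-minute minimum."""
    return max(60, runtime_seconds)

def vm_cost(runtime_seconds, price_per_second):
    # price_per_second is a hypothetical rate for illustration only
    return billed_vm_seconds(runtime_seconds) * price_per_second
```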

What are the different methods for the authentication of Google Compute Engine API?

This is one of the popular Google Cloud architect interview questions which can be answered as follows. There are different methods for the authentication of Google Compute Engine API:
– Using OAuth 2.0
– Through client library
– Directly with an access token

List some Database services by GCP.

There are many Google Cloud database services that help enterprises manage their data:
● Bare Metal Solution is a relational database offering that allows you to migrate (lift and shift) specialized workloads to Google Cloud.
● Cloud SQL is a fully managed, reliable, and integrated relational database service for MySQL, SQL Server, and PostgreSQL (Postgres). It reduces maintenance cost and ensures business continuity.
● Cloud Spanner
● Cloud Bigtable
● Firestore
● Firebase Realtime Database
● Memorystore
● Google Cloud Partner Services
● For more database products, refer to Google Cloud Databases
● For more database solutions, refer to Google Cloud Database solutions

What are the different Network services by GCP?

Google Cloud provides many networking services and technologies that make it easy to scale and manage your network:
● Hybrid connectivity helps to connect your infrastructure to Google Cloud
● Virtual Private Cloud (VPC) manage networking for your resources
● Cloud DNS is a highly available global domain naming system (DNS) network.
● Service Directory provides a service-centric network solution.
● Cloud Load Balancing
● Cloud CDN
● Cloud Armor
● Cloud NAT
● Network Telemetry
● VPC Service Controls
● Network Intelligence Center
● Network Service Tiers
● For more about networking products, refer to Google Cloud Networking

List some Data Analytics service by GCP.

Google Cloud offers various data analytics services:
● BigQuery is a multi-cloud data warehouse for business agility that is highly scalable, serverless, and cost-effective.
● Looker
● Dataproc is a service for running Apache Spark and Apache Hadoop clusters. It makes open-source data and analytics processing easy, fast, and more secure in the cloud.
● Dataflow
● Pub/Sub
● Cloud Data Fusion
● Data Catalog
● Cloud Composer
● Google Data Studio
● Dataprep
● Cloud Life Sciences enables the life sciences community to manage, process, and transform biomedical data at scale.
● Google Marketing Platform combines your advertising and analytics to help you achieve better marketing results, deeper insights, and quality customer connections. It is not an official Google Cloud product and comes under separate terms of service.
● For Google Cloud analytics services, visit Data Analytics

Explain Google BigQuery in Google Cloud Platform

A traditional data warehouse requires buying and maintaining hardware; Google BigQuery serves as a serverless replacement for that setup. In addition, BigQuery organizes table data into units called datasets.

Explain Auto-scaling in Google cloud computing

Auto-scaling lets you automatically provision and start new instances in Google Cloud without human intervention. Auto-scaling is triggered depending on various metrics and load.

Describe Hypervisor in Google Cloud Platform

A hypervisor, also called a virtual machine monitor (VMM), is computer software or hardware used to create and run virtual machines (a virtual machine is also called a guest machine). The hypervisor is what runs on the host machine.

Define VPC in the Google cloud platform

VPC in Google Cloud Platform provides connectivity from your premises to any region without using the public internet. VPC connectivity covers Compute Engine VM instances, Kubernetes Engine clusters, App Engine flexible environment instances, and a few other resources, depending on the project. Multiple VPCs can also be used across numerous projects.

GCP Associate Cloud Engineer Exam Prep


Steve Nouri


Don’t do a connection setup per RPC.

Cache things wherever possible.

Write asynchronous code wherever possible.

Exploit eventual consistency wherever possible. In other words: coordination is expensive, so don't do it unless you have to.

Route your requests sensibly.

Locate processing wherever it will result in the best latency. That might mean you need more resources.

Use LIFO queues; they have better tail statistics than FIFO. Queue before load balancing, not after; that way a small fraction of slow requests is much less likely to stall all the processors. Source: Andrew McGregor
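The LIFO-queue tip can be demonstrated with Python's standard `queue` module: under a backlog, a LIFO queue serves the newest (still-fresh) request first, so it is the stale requests that wait.

```python
import queue

fifo, lifo = queue.Queue(), queue.LifoQueue()
for request_id in ["req-1", "req-2", "req-3"]:  # arrival order
    fifo.put(request_id)
    lifo.put(request_id)

first_from_fifo = fifo.get()  # oldest request is served first
first_from_lifo = lifo.get()  # newest request is served first
print(first_from_fifo, first_from_lifo)
# req-1 req-3
```

Under light load the two behave the same; the difference shows up in tail latency when a backlog builds.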

What operating system do most servers use in 2022?

Of the 1,500 *NIX servers under my control (at a very large Fortune 500 company), 90% are Linux. We have a small number of HP-UX and AIX machines left over running legacy applications, but they are being phased out. Most of the applications we used to run on HP-UX and AIX (SAP, Oracle, you name it) now run on Linux. And it's not just my company; it's everywhere.

In 2022, the most widely used server operating system is Linux. Source: Bill Thompson

How do you load multiple files in parallel from an Amazon S3 bucket?

By specifying a file prefix of the file names in the COPY command or specifying the list of files to load in a manifest file.
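As a hedged sketch of both options in a Redshift COPY load (table, bucket, and IAM role names below are placeholders), the first function composes a prefix-based COPY statement, and the `manifest` dict shows the manifest-file JSON format that lists explicit objects:

```python
import json

def copy_with_prefix(table: str, bucket: str, prefix: str, role_arn: str) -> str:
    """Compose a COPY statement that loads every object sharing a key prefix."""
    return f"COPY {table} FROM 's3://{bucket}/{prefix}' IAM_ROLE '{role_arn}';"

prefix_sql = copy_with_prefix("sales", "my-bucket", "data/part-",
                              "arn:aws:iam::123456789012:role/LoadRole")

# Alternatively, a manifest file names the exact objects to load in parallel.
manifest = {"entries": [
    {"url": "s3://my-bucket/data/part-0000.gz", "mandatory": True},
    {"url": "s3://my-bucket/data/part-0001.gz", "mandatory": True},
]}

print(prefix_sql)
print(json.dumps(manifest, indent=2))
```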

How can you manage the amount of provisioned throughput that is used when copying from an Amazon DynamoDB table?

Set the READRATIO parameter in the COPY command to a percentage of unused throughput.
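As a sketch (table and role names are placeholders), READRATIO is set directly in the COPY command; a value of 50 means the load may use up to half of the table's unused provisioned read throughput:

```python
def copy_from_dynamodb(table: str, dynamo_table: str, role_arn: str,
                       read_ratio: int = 50) -> str:
    """Compose a COPY statement that caps DynamoDB read-throughput usage."""
    return (f"COPY {table} FROM 'dynamodb://{dynamo_table}' "
            f"IAM_ROLE '{role_arn}' READRATIO {read_ratio};")

dynamo_sql = copy_from_dynamodb("users_copy", "Users",
                                "arn:aws:iam::123456789012:role/LoadRole")
print(dynamo_sql)
```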

What must you do to use client-side encryption with your own encryption keys when using COPY to load data files that were uploaded to Amazon S3?

You must add the master key value to the credentials string with the ENCRYPTED parameter in the COPY command.
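A hedged sketch of that credentials string (all key values below are placeholders): the master symmetric key is appended to the credentials and the ENCRYPTED keyword is added to the COPY command.

```python
def copy_encrypted(table: str, bucket: str, prefix: str,
                   access_key: str, secret_key: str, master_key: str) -> str:
    """Compose a COPY statement for client-side-encrypted files in S3."""
    creds = (f"aws_access_key_id={access_key};"
             f"aws_secret_access_key={secret_key};"
             f"master_symmetric_key={master_key}")
    return (f"COPY {table} FROM 's3://{bucket}/{prefix}' "
            f"CREDENTIALS '{creds}' ENCRYPTED;")

encrypted_sql = copy_encrypted("sales", "my-bucket", "enc/",
                               "AKIAEXAMPLE", "secretEXAMPLE", "base64keyEXAMPLE")
print(encrypted_sql)
```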


Top 20 AWS Certified Associate SysOps Administrator Practice Quiz – Questions and Answers Dumps

What is the AWS Certified SysOps Administrator – Associate?

The AWS Certified SysOps Administrator – Associate (SOA-C01) examination is intended for individuals who have technical expertise in deployment, management, and operations on AWS.

The AWS Certified SysOps Administrator – Associate exam covers the following domains:

Domain 1: Monitoring and Reporting 22%

Domain 2: High Availability 8%

Domain 3: Deployment and Provisioning 14%

Domain 4: Storage and Data Management 12%


Domain 5: Security and Compliance 18%

Domain 6: Networking 14%

Domain 7: Automation and Optimization 12%


AWS Certified SysOps Administrator

Top 20 AWS Certified SysOps Administrator Associate Practice Quiz Questions, Answers, and References – SOA-C01:

Download Full PDF here

AWS Certified SysOps Administrator – Associate Study guide and Practice Exam


Question 1: Under which security model does AWS provide secure infrastructure and services, while the customer is responsible for secure operating systems, platforms, and data?



Get mobile friendly version of the quiz @ the App Store

NOTES/HINT1: The Shared Responsibility Model is the security model under which AWS provides secure infrastructure and services, while the customer is responsible for secure operating systems, platforms, and data.

Question 2: Which type of testing method is used to compare a control system to a test system, with the goal of assessing whether changes applied to the test system improve a particular metric compared to the control system?


NOTES/HINT2: The side-by-side testing method is used to compare a control system to a test system, with the goal of assessing whether changes applied to the test system improve a particular metric compared to the control system.

Reference2: AWS Side by side testing 

Question 3: When BGP is used with a hardware VPN, the IPSec and the BGP connections must both be which of the following on the same customer gateway device?




NOTES/HINT3: The IPSec and the BGP connections must both be terminated on the same customer gateway device.

Reference3: IpSec and BGP in AWS

Question 4: Which pillar of the AWS Well-Architected Framework includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies?




NOTES/HINT4: Security is the pillar of the AWS Well-Architected Framework that includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reference4: AWS Well-Architected Framework: Security

Question 5: Within the realm of Amazon S3 backups, snapshots are which of the following?




NOTES/HINT: Within the realm of Amazon S3 backups, snapshots are block-based.

Reference5: Snapshots are block based

Question 6: Amazon VPC provides the option of creating a hardware VPN connection between remote customer networks and their Amazon VPC over the Internet using which encryption technology?




NOTES/HINT6: Amazon VPC provides the option of creating a hardware VPN connection between remote customer networks and their Amazon VPC over the Internet using IPsec encryption technology.

Reference6: Amazon VPC IPSec Encryption

Question 7: To make a clean backup of a database, that database should be put into what mode before making a snapshot of it?




NOTES/HINT7: To make a clean backup of a database, that database should be put into hot backup mode before making a snapshot of it.

Reference: AWS Prescriptive Backup Recovery Guide

Question 8: Which pillar of the AWS Well-Architected Framework includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve?




NOTES/HINT8: Performance efficiency is the pillar of the AWS Well-Architected Framework that includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Reference8: Performance Efficiency Pillar – AWS Well-Architected Framework

Question 9: AWS Storage Gateway supports which three configurations?




NOTES/HINT9: AWS Storage Gateway supports Gateway-stored volumes, Gateway-cached volumes, and Gateway-virtual tape library.

Reference9: AWS Storage Gateway configurations

Question 10: With which of the following can you establish private connectivity between AWS and a data center, office, or co-location environment?




NOTES/HINT10: With AWS Direct Connect you can establish private connectivity between AWS and a data center, office, or co-location environment.

Reference: AWS Direct Connect

Question 11: A company is migrating a legacy web application from a single server to multiple Amazon EC2 instances behind an Application Load Balancer (ALB). After the migration, users report that they are frequently losing their sessions and are being prompted to log in again. Which action should be taken to resolve the issue reported by users?





NOTES/HINT11: Legacy applications designed to run on a single server frequently store session data locally. When these applications are deployed on multiple instances behind a load balancer, user requests are routed to instances using the round robin routing algorithm. Session data stored on one instance would not be present on the others. By enabling sticky sessions, cookies are used to track user requests and keep subsequent requests going to the same instance.

Reference 11: Sticky Sessions
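Sticky sessions are enabled through target-group attributes on the ALB. A sketch of the attribute list one would pass (for example via `modify-target-group-attributes`); the duration value here is illustrative:

```python
# Target-group attributes that enable ALB load-balancer-cookie stickiness.
sticky_attributes = [
    {"Key": "stickiness.enabled", "Value": "true"},
    {"Key": "stickiness.type", "Value": "lb_cookie"},
    {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},  # 1 hour
]

attribute_map = {attr["Key"]: attr["Value"] for attr in sticky_attributes}
print(attribute_map)
```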

Question 12: An ecommerce company wants to lower costs on its nightly jobs that aggregate the current day’s sales and store the results in Amazon S3. The jobs run on multiple On-Demand Instances, and the jobs take just under 2 hours to complete. The jobs can run at any time during the night. If the job fails for any reason, it needs to be started from the beginning. Which solution is the MOST cost-effective based on these requirements?

A) Purchase Reserved Instances.

B) Submit a request for a Spot block.

C) Submit a request for all Spot Instances.

D) Use a mixture of On-Demand and Spot Instances.




NOTES/HINT12: The solution will take advantage of Spot pricing, but by using a Spot block instead of Spot Instances, the company can be assured the job will not be interrupted.

Reference12: Spot Block
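The cost logic behind the Spot block answer can be sketched with back-of-envelope arithmetic. All prices and the interruption probability below are hypothetical placeholders, not real AWS rates, and the restart model is deliberately simplified:

```python
# Why a Spot *block* beats plain Spot for this job: if a plain Spot
# instance is interrupted, the job must restart from the beginning,
# so hours already billed are wasted. Hypothetical numbers throughout.
ON_DEMAND = 0.10   # $/hour, assumed On-Demand price
SPOT      = 0.03   # $/hour, assumed Spot price
HOURS     = 2      # the job runs just under 2 hours

cost_on_demand  = ON_DEMAND * HOURS   # safe but most expensive
cost_spot_block = SPOT * HOURS        # cheap AND guaranteed uninterrupted

# Plain Spot with an assumed 1-in-4 chance of interruption per attempt:
# expected attempts follow a geometric distribution, so expected billed
# hours grow accordingly (simplified: each failed attempt bills in full).
p_interrupt = 0.25
expected_attempts = 1 / (1 - p_interrupt)       # ~1.33 attempts
cost_plain_spot = SPOT * HOURS * expected_attempts
```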

Question 13: A sysops team checks their AWS Personal Health Dashboard every week for upcoming AWS hardware maintenance events. Recently, a team member was on vacation and the team missed an event, which resulted in an outage. The team wants a simple method to ensure that everyone is aware of upcoming events without depending on an individual team member checking the dashboard. What should be done to address this?

A) Build a web scraper to monitor the Personal Health Dashboard. When new health events are detected, send a notification to an Amazon SNS topic monitored by the entire team.

B) Create an Amazon CloudWatch Events event based off the AWS Health service and send a notification to an Amazon SNS topic monitored by the entire team.

C) Create an Amazon CloudWatch Events event that sends a notification to an Amazon SNS topic monitored by the entire team to remind the team to view the maintenance events on the Personal Health Dashboard.

D) Create an AWS Lambda function that continuously pings all EC2 instances to confirm their health. Alert the team if this check fails.




NOTES/HINT13: The AWS Health service publishes Amazon CloudWatch Events. CloudWatch Events can trigger Amazon SNS notifications. This method requires neither additional coding nor infrastructure. It automatically notifies the team of upcoming events, and does not depend upon brittle solutions like web scraping.

Reference 13: Amazon CloudWatch Events
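As a sketch, the rule from answer B can be expressed as boto3 request parameters. The rule name, SNS topic ARN, and account ID are placeholders; the essential part is the `aws.health` event source:

```python
import json

# CloudWatch Events (now EventBridge) rule matching AWS Health events,
# with an SNS topic as the target. Names and ARNs are placeholders.
event_pattern = {
    "source": ["aws.health"],            # events published by the AWS Health service
}
rule_kwargs = {
    "Name": "aws-health-to-sns",         # hypothetical rule name
    "EventPattern": json.dumps(event_pattern),
}
target_kwargs = {
    "Rule": "aws-health-to-sns",
    "Targets": [{
        "Id": "team-sns",
        "Arn": "arn:aws:sns:us-east-1:123456789012:ops-team",  # placeholder
    }],
}
# With credentials configured, the rule would be created like this:
# events = boto3.client("events")
# events.put_rule(**rule_kwargs)
# events.put_targets(**target_kwargs)
```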

Question14: An application running in a VPC needs to access instances owned by a different account and running in a VPC in a different AWS Region. For compliance purposes, the traffic must not traverse the public internet.
How should a sysops administrator configure network routing to meet these requirements?

A) Within each account, create a custom routing table containing routes that point to the other account’s virtual private gateway.

B) Within each account, set up a NAT gateway in a public subnet in its respective VPC. Then, using the public IP address from the NAT gateway, enable routing between the two VPCs.

C) From one account, configure a Site-to-Site VPN connection between the VPCs. Within each account, add routes in the VPC route tables that point to the CIDR block of the remote VPC.

D) From one account, create a VPC peering request. After an administrator from the other account accepts the request, add routes in the route tables for each VPC that point to the CIDR block of the peered VPC.




NOTES/HINT14: A VPC peering connection enables routing using each VPC’s private IP addresses as if they were in the same network. Traffic using inter-Region VPC peering always stays on the global AWS backbone and never traverses the public internet.

Reference14: VPC Peering

Question15: An application running on Amazon EC2 instances needs to access data stored in an Amazon DynamoDB table.

Which solution will grant the application access to the table in the MOST secure manner?

A) Create an IAM group for the application and attach a permissions policy with the necessary privileges. Add the EC2 instances to the IAM group.

B) Create an IAM resource policy for the DynamoDB table that grants the necessary permissions to Amazon EC2.

C) Create an IAM role with the necessary privileges to access the DynamoDB table. Associate the role with the EC2 instances.

D) Create an IAM user for the application and attach a permissions policy with the necessary privileges. Generate an access key and embed the key in the application code.




NOTES/HINT15: An IAM role can be used to provide permissions for applications that are running on Amazon EC2 instances to make AWS API requests using temporary credentials.

Reference15: IAM Role
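A sketch of the two policy documents behind answer C (the account ID, region, table name, and action list are illustrative placeholders):

```python
# Sketch of the IAM role from answer C, as its two policy documents.
trust_policy = {                  # who may assume the role: the EC2 service
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
permissions_policy = {            # what the role may do: access one table
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
    }],
}
# The role is attached to the instances via an instance profile; the AWS SDK
# on each instance then picks up temporary credentials automatically, so no
# long-lived keys ever appear in application code.
```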

Question16: A third-party service uploads objects to Amazon S3 every night. Occasionally, the service uploads an incorrectly formatted version of an object. In these cases, the sysops administrator needs to recover an older version of the object.
What is the MOST efficient way to recover the object without having to retrieve it from the remote service?

A) Configure an Amazon CloudWatch Events scheduled event that triggers an AWS Lambda function that backs up the S3 bucket prior to the nightly job. When bad objects are discovered, restore the backed up version.

B) Create an S3 event on object creation that copies the object to an Amazon Elasticsearch Service (Amazon ES) cluster. When bad objects are discovered, retrieve the previous version from Amazon ES.

C) Create an AWS Lambda function that copies the object to an S3 bucket owned by a different account. Trigger the function when new objects are created in Amazon S3. When bad objects are discovered, retrieve the previous version from the other account.

D) Enable versioning on the S3 bucket. When bad objects are discovered, access previous versions with the AWS CLI or AWS Management Console.




NOTES/HINT16: Enabling versioning is a simple solution; (A) involves writing custom code, (C) has no versioning, so the replication will overwrite the old version with the bad version if the error is not discovered quickly, and (B) will involve expensive storage that is not well suited for objects.

Reference16: Versioning
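A toy model, not AWS code, of why versioning (answer D) solves this: each overwrite stacks a new version on the same key, and older versions remain retrievable.

```python
# Toy in-memory model of S3 versioning: key -> list of versions, newest last.
from collections import defaultdict

bucket = defaultdict(list)

def put_object(key, body):
    bucket[key].append(body)          # an overwrite adds a version, never destroys

def get_object(key, version_index=-1):
    return bucket[key][version_index] # default: latest version

put_object("report.csv", "good data")   # nightly upload, correct
put_object("report.csv", "garbled!!")   # bad upload overwrites the key

assert get_object("report.csv") == "garbled!!"      # latest is broken
assert get_object("report.csv", -2) == "good data"  # previous version recoverable
```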

Question17: According to the AWS shared responsibility model, for which of the following Amazon EC2 activities is AWS responsible? (Select TWO.)
A) Configuring network ACLs
B) Maintaining network infrastructure
C) Monitoring memory utilization
D) Patching the guest operating system
E) Patching the hypervisor


B and E


NOTES/HINT17: AWS provides security of the cloud, including maintenance of the hardware and hypervisor software supporting Amazon EC2. Customers are responsible for any maintenance or monitoring within an EC2 instance, and for configuring their VPC infrastructure.

Reference17: Security of the cloud

Question18: A security and compliance team requires that all Amazon EC2 workloads use approved Amazon Machine Images (AMIs). A sysops administrator must implement a process to find EC2 instances launched from unapproved AMIs.

Which solution will meet these requirements?
A) Create a custom report using AWS Systems Manager inventory to identify unapproved AMIs.
B) Run Amazon Inspector on each EC2 instance and flag the instance if it is using unapproved AMIs.
C) Use an AWS Config rule to identify unapproved AMIs.
D) Use AWS Trusted Advisor to identify the EC2 workloads using unapproved AMIs.




NOTES/HINT18: AWS Config has a managed rule that handles this scenario.

Reference18: Managed Rule

Question19: A sysops administrator observes a large number of rogue HTTP requests on an Application Load Balancer. The requests originate from various IP addresses. These requests cause increased server load and costs.

What should the administrator do to block this traffic?
A) Install Amazon Inspector on Amazon EC2 instances to block the traffic.
B) Use Amazon GuardDuty to protect the web servers from bots and scrapers.
C) Use AWS Lambda to analyze the web server logs, detect bot traffic, and block the IP addresses in the security groups.
D) Use an AWS WAF rate-based rule to block the traffic when it exceeds a threshold.




NOTES/HINT19: AWS WAF has rules that can protect web applications from HTTP flood attacks.

Reference19: HTTP Flood
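A hedged sketch of what such a rate-based rule looks like inside a WAFv2 web ACL definition; the rule name and request limit are example values:

```python
# Sketch of an AWS WAF (WAFv2) rate-based rule as it appears in a web ACL.
# Requests from any single IP beyond the limit (per 5-minute window) are blocked.
rate_rule = {
    "Name": "throttle-floods",        # hypothetical rule name
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,            # example threshold: requests per 5 minutes
            "AggregateKeyType": "IP", # count requests per originating IP
        }
    },
    "Action": {"Block": {}},          # block IPs that exceed the threshold
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "throttle-floods",
    },
}
```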

Question20: A sysops administrator is implementing security group policies for a web application running on AWS.

An Elastic Load Balancer connects to a fleet of Amazon EC2 instances that connect to an Amazon RDS database over port 1521. The security groups are named elbSG, ec2SG, and rdsSG, respectively.
How should these security groups be implemented?
A) elbSG: allow port 80 and 443 from 0.0.0.0/0;
ec2SG: allow port 443 from elbSG;
rdsSG: allow port 1521 from ec2SG.

B) elbSG: allow port 80 and 443 from 0.0.0.0/0;
ec2SG: allow port 80 and 443 from elbSG and rdsSG;
rdsSG: allow port 1521 from ec2SG.

C) elbSG: allow port 80 and 443 from ec2SG;
ec2SG: allow port 80 and 443 from elbSG and rdsSG;
rdsSG: allow port 1521 from ec2SG.

D) elbSG: allow port 80 and 443 from ec2SG;
ec2SG: allow port 443 from elbSG;
rdsSG: allow port 1521 from elbSG.


A

NOTES/HINT20: elbSG must allow all web traffic (HTTP and HTTPS) from the internet. ec2SG must allow traffic from the load balancer only, in this case identified as traffic from elbSG. The database must allow traffic from the EC2 instances only, in this case identified as traffic from ec2SG.

Reference20: Allow all traffic
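The rule set from answer A can be modeled as a small table of (port, allowed source) pairs, with a hypothetical helper to check flows (not AWS code):

```python
# Model of the three security groups: each lists (port, allowed-source)
# inbound rules; sources are "internet" or another security group's name.
rules = {
    "elbSG": [(80, "internet"), (443, "internet")],
    "ec2SG": [(443, "elbSG")],
    "rdsSG": [(1521, "ec2SG")],
}

def allowed(sg, port, source):
    """Is inbound traffic to `sg` on `port` from `source` permitted?"""
    return (port, source) in rules[sg]

assert allowed("elbSG", 443, "internet")       # users reach the load balancer
assert allowed("ec2SG", 443, "elbSG")          # only the ELB reaches the instances
assert allowed("rdsSG", 1521, "ec2SG")         # only the instances reach the DB
assert not allowed("rdsSG", 1521, "internet")  # the DB is closed to the internet
```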

Question21: You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied tor the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from
the specified IP address block.

A) Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block
B) Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
C) Add a rule to all of the VPC's Security Groups to deny access from the IP address block
D) Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block



B

NOTES21: Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block. Security groups support only allow rules, so they cannot explicitly deny traffic; network ACLs support deny rules and can be changed quickly for a temporary block.

Reference21: Network ACLs

Question 22: When preparing for a compliance assessment of your system built inside of AWS, what are three best practices to prepare for an audit? Choose 3 answers

A) Gather evidence of your IT operational controls
B) Request and obtain applicable third-party audited AWS compliance reports and certifications
C) Request and obtain a compliance and security tour of an AWS data center for a pre-assessment security review
D) Request and obtain approval from AWS to perform relevant network scans and in-depth penetration tests of your system’s Instances and endpoint
E) Schedule meetings with AWS’s third-party auditors to provide evidence of AWS compliance that maps to your control objectives


A, B, D

NOTES22: Gather evidence of your own IT operational controls, obtain AWS's third-party audit reports and certifications (for example through AWS Artifact), and obtain the required approval before running network scans or penetration tests. AWS does not offer data center tours or meetings with its third-party auditors.

Reference22: AWS Audit Manager

Question23: You have started a new job and are reviewing your company's infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer (ELB) in front of web instances in an Auto Scaling group. When you check the metrics for the ELB in CloudWatch, you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances.
What do you need to fix to balance the instances across AZs?

A) Set the ELB to only be attached to another AZ
B) Make sure Auto Scaling is configured to launch in both AZs
C) Make sure your AMI is available in both AZs
D) Make sure the maximum size of the Auto Scaling Group is greater than 4




B

NOTES23: The Auto Scaling group must be configured to launch instances in both Availability Zones; otherwise all capacity is placed in a single AZ.

Reference23: AZs

Question24: You have been asked to leverage Amazon VPC, EC2 and SQS to implement an application that submits and receives millions of messages per second to a message queue. You want to ensure your application has sufficient bandwidth between your EC2 instances and SQS.
Which option will provide the most scalable solution for communicating between the application and SQS?

A) Ensure the application instances are properly configured with an Elastic Load Balancer
B) Ensure the application instances are launched in private subnets with the EBS-optimized option enabled
C) Ensure the application instances are launched in public subnets with the associate-publicIP-address=true option enabled
D) Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SQS queue size




D

NOTES24: Auto Scaling in private subnets, triggered by the SQS queue depth, scales processing capacity with the workload; SQS itself scales automatically, so no special bandwidth configuration is required.

Reference24: SQS

Question25: You have identified network throughput as a bottleneck on your m1.small EC2 instance when uploading data into Amazon S3 in the same region. How do you remedy this situation?

A) Add an additional ENI
B) Change to a larger Instance
C) Use DirectConnect between EC2 and S3
D) Use EBS PIOPS on the local volume



B

NOTES25: Network throughput scales with EC2 instance size, so moving to a larger instance type is the remedy; Direct Connect links on-premises networks to AWS, and EBS PIOPS affects disk I/O rather than network throughput to S3.

Reference25: EC2 Best Practices

Question 26: When attached to an Amazon VPC which two components provide connectivity with external networks? Choose 2 answers

A) Elastic IPs (EIP)
B) NAT Gateway (NAT)
C) Internet Gateway (IGW)
D) Virtual Private Gateway (VGW)


C. D.

Reference26: IGW – VGW

Question 27: Your application currently leverages AWS Auto Scaling to grow and shrink as load increases/decreases, and it has been performing well. Your marketing team expects a steady ramp up in traffic to follow an upcoming campaign that will result in a 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet peak demand is 175. What should you do to avoid potential service disruptions during the ramp up in traffic?

A) Ensure that you have pre-allocated 175 Elastic IP addresses so that each server will be able to obtain one as it launches
B) Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits
C) Change your Auto Scaling configuration to set a desired capacity of 175 prior to the launch of the marketing campaign
D) Pre-warm your Elastic Load Balancer to match the requests per second anticipated during peak demand prior to the marketing campaign


NOTES: Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits. Default EC2 instance limits are far below 175, so a limit increase must be requested in advance; a steady four-week ramp gives Elastic Load Balancing ample time to scale on its own, so pre-warming is not required.
Reference: AWS Auto Scaling

Question 28: You have an Auto Scaling group associated with an Elastic Load Balancer (ELB). You have noticed that instances launched via the Auto Scaling group are being marked unhealthy due to an ELB health check, but these unhealthy instances are not being terminated. What do you need to do to ensure that instances marked unhealthy by the ELB will be terminated and replaced?

A) Change the thresholds set on the Auto Scaling group health check
B) Add an Elastic Load Balancing health check to your Auto Scaling group
C) Increase the value for the Health check interval set on the Elastic Load Balancer
D) Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks


NOTES: Add an Elastic Load Balancing health check to your Auto Scaling group. By default, an Auto Scaling group periodically reviews the results of EC2 instance status checks to determine the health state of each instance. However, if you have associated your Auto Scaling group with an Elastic Load Balancing load balancer, you can choose to use the Elastic Load Balancing health check. In this case, Auto Scaling determines the health status of your instances by checking the results of both the EC2 instance status check and the Elastic Load Balancing instance health check.
Reference:  AWS ELB
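As a sketch, the fix can be expressed as the request parameters for boto3's `update_auto_scaling_group` call; the group name and grace period are placeholders:

```python
# Switch the Auto Scaling group's health check from EC2 status checks
# to the ELB health check. Group name and grace period are placeholders.
asg_kwargs = {
    "AutoScalingGroupName": "web-asg",  # hypothetical group name
    "HealthCheckType": "ELB",           # use the ELB health check, not only EC2
    "HealthCheckGracePeriod": 300,      # seconds before checking new instances
}
# With credentials configured, the change would be applied like this:
# autoscaling = boto3.client("autoscaling")
# autoscaling.update_auto_scaling_group(**asg_kwargs)
```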

Question 29: Which two AWS services provide out-of-the-box user configurable automatic backup-as-a-service and backup rotation options? Choose 2 answers

A) Amazon S3
B) Amazon RDS
C) Amazon EBS
D) Amazon Redshift

B. D.

NOTES: Amazon RDS and Amazon Redshift both provide user-configurable automated backups with retention (rotation) periods out of the box; EBS snapshots and S3 backups must be scheduled by the user.
Reference: Amazon RDS and Amazon Redshift backups

Question 30: An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and an Elastic Load Balancer (ELB) configured to use the public subnets. The application's web tier leverages the ELB, Auto Scaling, and a multi-AZ RDS database instance. The organization would like to eliminate any potential single points of failure in this design. What step should you take to achieve this organization's objective?

A) Nothing, there are no single points of failure in this architecture.
B) Create and attach a second IGW to provide redundant internet connectivity.
C) Create and configure a second Elastic Load Balancer to provide a redundant load balancer.
D) Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database.



NOTES: Nothing, there are no single points of failure in this architecture. Elastic Load Balancing is itself a managed, fault-tolerant service spanning the configured subnets, Auto Scaling replaces failed web instances, and the multi-AZ RDS instance maintains a synchronous standby in another Availability Zone.

Reference: ELB

Question 31: Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers

A) Each subnet maps to a single Availability Zone
B) A CIDR block mask of /25 is the smallest range supported
C) Instances in a private subnet can communicate with the internet only if they have an Elastic IP.
D) By default, all subnets can route between each other, whether they are private or public
E) Each subnet spans at least 2 Availability Zones to provide a high-availability environment

A. D.

NOTES: Each subnet resides entirely within a single Availability Zone, and the default local route in every VPC route table lets all subnets route to each other. Instances in a private subnet can also reach the internet via a NAT gateway (no Elastic IP needed), and subnets never span Availability Zones.

Reference: VPC

Question 32: You are creating an Auto Scaling group whose instances need to insert a custom metric into CloudWatch. Which method would be the best way to authenticate your CloudWatch PUT request?

A) Create an IAM role with the PutMetricData permission and modify the Auto Scaling launch configuration to launch instances with that role
B) Create an IAM user with the PutMetricData permission and modify the Auto Scaling launch configuration to inject the user's credentials into the instance User Data
C) Modify the appropriate CloudWatch metric policies to allow the PutMetricData permission to instances from the Auto Scaling group
D) Create an IAM user with the PutMetricData permission and put the credentials in a private repository and have applications on the server pull the credentials as needed



NOTES: Create an IAM role with the PutMetricData permission and modify the Auto Scaling launch configuration to launch instances with that role. Roles deliver temporary, automatically rotated credentials to the instance; injecting an IAM user's long-lived credentials via User Data exposes them to anyone who can read the instance metadata.

Reference: IAM

Question 33: When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?

A) Data is automatically saved as an EBS volume.
B) Data is automatically saved as an EBS snapshot.
C) Data is automatically deleted.
D) Data is unavailable until the instance is restarted.


NOTES: Data is automatically deleted. An S3-backed AMI launches instance store-backed instances, whose root volume is ephemeral; such instances cannot be stopped, and terminating one discards the root volume data.
Reference: AWS EC2 S3-based AMI

Question 34: You have a web application leveraging an Elastic Load Balancer (ELB) in front of web servers deployed using an Auto Scaling group. Your database is running on Relational Database Service (RDS). The application serves out technical articles and responses to them; in general there are more views of an article than there are responses to it. On occasion, an article on the site becomes extremely popular, resulting in significant traffic increases that cause the site to go down. What could you do to help alleviate the pressure on the infrastructure while maintaining availability during these events? Choose 3 answers

A) Leverage CloudFront for the delivery of the articles.
B) Add RDS read-replicas for the read traffic going to your relational database
C) Leverage ElastiCache for caching the most frequently used data.
D) Use SQS to queue up the requests for the technical posts and deliver them out of the queue.
E) Use Route53 health checks to fail over to an S3 bucket for an error page.


A. B. C.

NOTES: CloudFront offloads delivery of the popular articles, RDS read replicas absorb the read-heavy database traffic (there are more views than responses), and ElastiCache serves the most frequently used data from memory. Queueing reads through SQS does not fit a request/response web workload, and failing over to a static S3 error page sacrifices availability rather than maintaining it.
Reference: CloudFront, RDS Read Replicas, ElastiCache

Question 35: The majority of your infrastructure is on premises and you have a small footprint on AWS. Your company has decided to roll out a new application that is heavily dependent on low-latency connectivity to LDAP for authentication. Your security policy requires minimal changes to the company's existing application user management processes. What option would you implement to successfully launch this application?

A) Create a second, independent LDAP server in AWS for your application to use for authentication
B) Establish a VPN connection so your applications can authenticate against your existing on-premises LDAP servers
C) Establish a VPN connection between your data center and AWS create a LDAP replica on AWS and configure your application to use the LDAP replica for authentication
D) Create a second LDAP domain on AWS establish a VPN connection to establish a trust relationship between your new and existing domains and use the new domain for authentication



NOTES: Establish a VPN connection between your data center and AWS, create an LDAP replica on AWS, and configure your application to use the replica for authentication. The replica provides low-latency lookups while all user management continues against the existing on-premises LDAP servers, so existing processes are unchanged.
Reference: LDAP Replication


I was initially nervous about this exam compared to SAA-C02, due to the practical labs. However, they turned out to be really easy with lots of time to fumble about, delete & recreate resources.

My labs:

Create S3 buckets, set access logs, set default encryption with KMS and create a bunch of lifecycle policies

Create a VPC with public/private subnets, create SGs, create & send flow logs to an S3 bucket.

Connect Lambda to a VPC, use RDS proxy to connect to an RDS Database. Select correct execution role for the Lambda.

Exam lab experience

I did not have any negative experiences with the lab environment (I heard a lot of horror stories), however I did take the exam at a testing center.

When you register for your SOA-C02, you gain access (via Pearson VUE E-mail) to a free sample exam lab at Login – OneLearn Training Management System – Skillable – this is the exact same testing environment you will have during the actual exam. I highly recommend you do this, especially if you’re doing the exam from home – any issues you have with the testing environment like laggy interface, copy/paste issues, etc you’ll probably also have during the exam.

Study resources

My study resources were:

Adrian Cantrill’s course

Jon Bonso’s (TutorialDojo) Practice Exams

u/acantrill’s courses are the best, most high-quality courses I’ve ever taken for any subject.

Since I’d done the SAA-C02 course before doing the SOA-C02 course, I was able to easily skip the shared lessons & demos (there is heavy overlap between these two exams) and focus on the SOA-C02-specific topics.

u/Tutorials_Dojo’s practice exams are 10/10 as preparation material. They were a bit more tricky (in a ‘gotcha’ kind of way) compared to the exam questions, but they were very close to the real thing.

Study methodology

My study plan was as follows:

Study Time: 7:00-9:00 (morning) Mon-Fri, which included:

Going through Adrian’s course

Detailed notes in markdown

Doing potential exam labs in AWS console

Reading AWS official documentation (in case something is not clear)

Review Notes regularly (once course material finished)

Practice Exams

Doing exams in review mode

Delving deeper into topics I was lacking in

This was the plan, but I turned out to be somewhat inconsistent, taking the exam 3 months later than planned due to being a new father and not focusing on just one thing (I also did some Python learning during the same period). But still a pass!

Source: r/AWSCertification

Top 20 AWS Certified Associate SysOps Administrator Practice Quiz - Questions and Answers Dumps

Latest DevOps and SysAdmin Feed


DevOps is a set of practices and tools that organizations use to accelerate software development and improve the quality of their software products. It aims to bring development and operations teams together, so they can work more collaboratively and efficiently to deliver software faster and with fewer errors.

The goal of DevOps is to automate as much of the software delivery process as possible, using tools such as continuous integration, continuous delivery, and infrastructure as code. This allows teams to move faster and release new features and bug fixes more frequently, while also reducing the risk of errors and downtime.

DevOps also emphasizes the importance of monitoring, logging, and testing to ensure that software is performing well in production. By continuously monitoring and analyzing performance data, teams can quickly identify and resolve any issues that arise.

In summary, DevOps is a combination of people, processes, and technology that organizations use to improve their software delivery capabilities, increase efficiency, and reduce risk.

What is DevOps in Simple English?

What is a System Administrator?


DevOps: In the IT world, DevOps means Development Operations. DevOps is the bridge between developers, servers, and infrastructure, and its main role is to automate the process of delivering code to operations.
DevOps on Wikipedia: a software development process that emphasizes communication and collaboration between product management, software development, and operations professionals. DevOps also automates the process of software integration, testing, deployment and infrastructure changes.[1][2] It aims to establish a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.

DevOps Latest Feeds

DevOps Resources

  1. What is DevOps? Tackling some frequently asked questions
  2. Find Remote DevOps Jobs here.