

What are the top 10 most insane myths about computer programmers?
Programmers are often seen as an eccentric breed. Many myths about computer programmers circulate both within and outside of the tech industry. Some of these myths are harmless misconceptions, while others can be damaging to both individual programmers and the industry as a whole.
Here are 10 of the most insane myths about computer programmers:
1. Programmers are all socially awkward nerds who live in their parents’ basements.
2. Programmers only care about computers and have no other interests.
3. Programmers are all genius-level intellects with photographic memories.
4. Programmers can code anything they set their minds to, no matter how complex or impossible it may seem.
5. Programmers only work on solitary projects and never collaborate with others.
6. Programmers write code that is completely error-free on the first try.
7. All programmers use the same coding languages and tools.
8. Programmers can easily find jobs anywhere in the world thanks to the worldwide demand for their skills.
9. Programmers always work in dark, cluttered rooms with dozens of monitors surrounding them.
10. Programmers can’t have successful personal lives because they spend all their time working on code.
Here are more myths about computer programmers, in detail:
Myth #1: Programmers are lazy.
This couldn’t be further from the truth! Programmers are some of the hardest working people in the tech industry. They are constantly working to improve their skills and keep up with the latest advancements in technology.
Myth #2: Programmers don’t need social skills.
While it is true that programmers don’t need to be extroverts, they do need to have strong social skills. Programmers need to be able to communicate effectively with other members of their team, as well as with clients and customers.
Myth #3: All programmers are nerds.
There is a common misconception that all programmers are nerdy introverts who live in their parents’ basements. This could not be further from the truth! While there are certainly some nerds in the programming community, there are also a lot of outgoing, social people. In fact, programming is a great field for people who want to use their social skills to build relationships and solve problems.
Myth #4: Programmers are just code monkeys.
Programmers are often seen as nothing more than people who write code all day long. However, this could not be further from the truth! Programmers are critical thinkers who use their analytical skills to solve complex problems. They are also creative people who use their coding skills to build new and innovative software applications.
Myth #5: Anyone can learn to code.
This myth cuts both ways: it understates the work involved. While it is true that anyone can start learning to code, coding is a difficult skill, and it takes years of practice to become a proficient programmer. It is not an easy task.
Myth #6: Programmers don’t need math skills.
This myth is simply not true! Programmers use math every day, whether they’re calculating algorithms or working with big data sets. In fact, many programmers have degrees in mathematics or computer science because they know that math skills are essential for success in the field.
Myth #7: Programming is a dead-end job.
This myth likely comes from the fact that many people view programming as nothing more than code monkey work. However, this could not be further from the truth! Programmers have a wide range of career options available to them, including software engineering, web development, and data science.
Myth #8: Programmers only work on single projects.
Again, this myth likely comes from the outside world’s view of programming as nothing more than coding work. In reality, programmers often work on multiple projects at once. They may be responsible for coding new features for an existing application, developing a new application from scratch, or working on multiple projects simultaneously as part of a team.
Myth #9: Programming is easy once you know how to do it.
This myth is particularly insidious, as it leads people to believe that they can simply learn how to code overnight and become successful programmers immediately thereafter. The reality is that learning how to code takes time, practice, and patience. Even experienced programmers still make mistakes sometimes!
Myth #10: Programmers don’t need formal education
This myth likely stems from the fact that many successful programmers are self-taught. However, this does not mean that formal education is unnecessary. Many employers prefer candidates with degrees in computer science or related fields, and formal education can give you an important foundation in programming concepts and theory.
Myth #11: They put in immense amounts of time at the job
I worked for 38 years programming computers. During that time, there were only two periods when I needed to put in significant extra time. The first was my first two years, spent getting acclimated to the job; I left that position at age 22 with a blood pressure of 153/105 – not a good situation. The second was at the end of my career, when I was the only person who could complete a particular project in the required timeframe (due to special knowledge of the area); I spent about five months putting in long hours.
Myth #12: They need to know advanced math
Some programmers may need to know advanced math, but in the areas where I (and others) worked, being able to estimate expected values and visualization skills were more important. One needs to recognize when a displayed number is not correct. Visualization is the ability to see the “big picture” and envision the tasks necessary to build that big picture correctly. You need to be able to decompose each of those tasks to limit complexity and make the code easier to debug. In general, the less complex code is, the fewer errors/bugs it has, and the easier they are to identify and fix.
Myth #13: Programmers remember thousands of lines of code.
No, we don’t. We know the approximate part of the program where a problem is likely to be, and we can localize it using a debugger or logs – that’s all.
Myth #14: Everyone could be a programmer.
No. One must have not only the desire to be a programmer but also a certain addiction to it. Programming is not a closed or elite art; it’s just another human occupation. But just as not everyone can be a doctor or a businessman, not everyone can be a programmer.
Myth #15: A simple business request can be easily implemented
No. The ease of implementation is determined by the model used inside the software. A change that looks simple to business owners can be almost impossible to implement without significantly changing the model – which can take weeks – and vice versa: a seemingly hard business problem can sometimes be implemented in 15 minutes.
Myth #16: Please fix <put any electronic device here> or set up my printer – you are a programmer!
Yes, I’m a programmer – not an electronic engineer or a system administrator. I write programs; I don’t fix devices or set up software and hardware!
As you can see, there are many myths about computer programmers circulating within and outside of the tech industry. These myths can be damaging to both individual programmers and the industry as a whole. It’s important to dispel these myths so that we can continue attracting top talent into the field of programming!

DevOps Interview Questions, Answers, and Scripts


Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google and Microsoft).
Credit: Steve Nouri – Follow Steve Nouri for more AI and Data science posts:
Deployment
What is a Canary Deployment?
A canary deployment, or canary release, allows you to roll out your features to only a subset of users as an initial test to make sure nothing in your system breaks.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage – say, 10% – of your user base to the new version while maintaining 90% of users on the old version. If that 10% reports no errors, you can roll it out to gradually more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will have never even seen the problem.
Canary deployment benefits include zero downtime, easy rollout and quick rollback – plus the added safety from the gradual rollout process. It also has some drawbacks – the expense of maintaining multiple server instances, the difficult clone-or-don’t-clone database decision.
Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.
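The percentage-based shift described above can be sketched as a tiny routing function. The version names and the 10% threshold are illustrative assumptions; in practice, the load balancer performs this assignment for you:

```shell
#!/bin/sh
# Route a request either to the stable version or to the canary.
# CANARY_PERCENT and the version names are illustrative.
CANARY_PERCENT=10

route_request() {
    # $1 is a request id, used here as a stand-in for the random or
    # sticky assignment a real load balancer would make.
    if [ $(( $1 % 100 )) -lt "$CANARY_PERCENT" ]; then
        echo "v2-canary"     # ~10% of traffic sees the new version
    else
        echo "v1-stable"     # the other ~90% stays on the old version
    fi
}

route_request 7     # prints: v2-canary
route_request 42    # prints: v1-stable
```

With this shape, the gradual rollout is just raising CANARY_PERCENT, and rollback is setting it to 0.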


What is a Blue Green Deployment?
Reference: Blue Green Deployment
Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your software, deployment and the final stage of testing take place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router, so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
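The router switch can be sketched with a symlink standing in for the router; the directory names and file contents are illustrative assumptions:

```shell
#!/bin/sh
# Blue-green switch sketch: the "live" symlink plays the router.
set -e
demo=$(mktemp -d)
mkdir -p "$demo/blue" "$demo/green"
echo "old version" > "$demo/blue/app.txt"

ln -sfn blue "$demo/live"                    # Blue is live, Green is idle

echo "new version" > "$demo/green/app.txt"   # deploy & test on Green
ln -sfn green "$demo/live"                   # switch the router to Green
cat "$demo/live/app.txt"                     # prints: new version

ln -sfn blue "$demo/live"                    # instant rollback to Blue
cat "$demo/live/app.txt"                     # prints: old version
```

The switch and the rollback are both a single atomic re-pointing of the router, which is what makes the technique low-risk.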
How do you perform a software release?
There are some steps to follow.
• Create a checklist
• Create a release branch
• Bump the version
• Merge the release branch to master & tag it
• Use a pull request to merge the release branch
• Deploy master to the Prod environment
• Merge back into develop & delete the release branch
• Generate the change log
• Communicate with stakeholders
• Groom the issue tracker
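The branch-and-tag steps above map onto git commands roughly as follows. This is a self-contained sketch in a throwaway repository; the branch names and the 1.2.0 version number are assumptions:

```shell
#!/bin/sh
set -e
# Throwaway repo so the flow can be run end to end.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo"
echo 0.0.0 > VERSION; git add VERSION; git commit -qm "initial"
git branch -m master                     # normalize the branch name
git branch develop

git checkout -q -b release/1.2.0 develop # create a release branch
echo 1.2.0 > VERSION                     # bump the version
git commit -qam "Bump version to 1.2.0"

git checkout -q master                   # merge release into master...
git merge -q --no-ff -m "Release 1.2.0" release/1.2.0
git tag -a v1.2.0 -m "Release 1.2.0"     # ...and tag it

git checkout -q develop                  # merge back into develop
git merge -q --no-ff -m "Merge release/1.2.0" release/1.2.0
git branch -d release/1.2.0              # delete the release branch
git tag -l                               # prints: v1.2.0
```

In a team setting, the two merges would go through pull requests rather than direct `git merge` commands, and the deploy step would follow the tag.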
How to automate the whole build and release process?
• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD manufacturing.
The above process can be automated with Jenkins by creating jobs for each step.
Have you ever participated in Prod Deployments? If yes, what is the procedure?
• Preparation & Planning : What kind of system/technology was supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes were going to talk to each other in a foolproof manner
• Production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, & installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system
DevOps Tools and Concepts
What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?
By the name DevOps, it’s very clear that it’s a collaboration of Development as well as Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools which helps automate the whole infrastructure.
DevOps is basically an implementation of Agile methodology on the Development side as well as the Operations side.
We need DevOps to deliver more, faster, and better applications to meet the ever-growing demands of users. DevOps helps deployments happen really fast compared to any traditional approach.
The key aspects or principles behind DevOps are:
- Infrastructure as a Code
- Continuous Integration
- Continuous Deployment
- Automation
- Continuous Monitoring
- Security
Popular tools for DevOps are:
- Git
- AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
- Jenkins
- Ansible
- Puppet
- Nagios
- Docker
- ELK (Elasticsearch, Logstash, Kibana)
Can we consider DevOps as Agile methodology?
Of course we can! The only difference between agile methodology and DevOps is that agile methodology is implemented only for the development section, while DevOps implements agility on both the development and operations sections.
What are some of the most popular DevOps tools?
Selenium
Puppet
Chef
Git
Jenkins
Ansible
Docker
What is the role of HTTP REST APIs in DevOps?
DevOps centers on automating your infrastructure and moving changes through pipeline stages: a typical CI/CD pipeline has stages like build, test, sanity test, UAT, and deployment to the prod environment. Each stage uses different tools and a different technology stack, so there must be a way to integrate the tools into a complete toolchain. That is where HTTP REST APIs come in: each tool communicates with the others through its API, and users can also use an SDK (such as boto3 for Python, which calls the AWS APIs) for event-driven automation. These days pipelines are mostly event-driven rather than batch processing.
What is Scrum?
Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration is typically two weeks long. Scrum consists of three roles: Product Owner, Scrum Master, and Team.
What are microservices, and how do they enable proficient DevOps practices?
In conventional architecture, each application is a monolith: it is developed by a group of developers, deployed as a single application on many machines, and exposed to the external world using load balancers. Microservices means splitting your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. With this split, developers can also be organized into small teams, and each piece of the application may follow different guidelines for an efficient development process, in line with agile development. Each service uses REST APIs (or message queues) to communicate with the others.
So the build and release of one non-robust version does not affect the whole architecture; instead, only some functionality is lost. This enables efficient and faster CI/CD pipelines and DevOps practices.
What is Continuous Delivery?
Continuous Delivery is an extension of continuous integration which primarily serves to get the features that developers are continuously developing out to end users as soon as possible.
During this process, the build passes through several stages of QA, staging, etc. before delivery to the PRODUCTION system.
Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.

Why Automate?
Developers and administrators usually have to provision their infrastructure manually. Rather than relying on manual steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.
What is Puppet?
Puppet is a configuration management tool; it is used to automate administration tasks.
What is Configuration Management?
Configuration Management is a systems engineering process. Applied over the life cycle of a system, configuration management provides visibility and control of the system’s performance, functional, and physical attributes, recording their status in support of change management.
Software Configuration Management Features are:
• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes
What is Vagrant and what are its uses?
Vagrant uses VirtualBox as the hypervisor for virtual environments, and it currently also supports KVM (Kernel-based Virtual Machine).
Vagrant is a tool for creating and managing environments for testing and developing software.
What’s a PTR in DNS?
A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.
What testing is necessary to ensure a new service is ready for production?
Continuous testing
What is Continuous Testing?
It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.
What are the key elements of continuous testing?
Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.
How does HTTP work?
The HTTP protocol works in a client-server model like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.
What is IaC? How you will achieve this?
Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning the DevOps team uses for source code. This is achieved using tools such as Chef, Puppet, Ansible, CloudFormation, etc.
Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.
What are patterns and anti-patterns of software delivery and deployment?

What are Microservices?
Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps.
- Small
- Decoupled
- Owned by self-contained teams
Version Control
What is a version control system?
Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously.
• Does not allow overwriting of each other’s changes.
• Maintains the history of every version.
There are two types of Version Control Systems:
1. Centralized Version Control Systems, Ex: SVN, CVS
2. Distributed/Decentralized Version Control Systems, Ex: Git, Mercurial (Bitbucket and GitHub are hosting services for Git repositories)
What is Source Control?
An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.
Source control basics: Whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code changes, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can:
• Collaborate on code with your team.
• Isolate your work until it is ready.
• Quickly troubleshoot issues by identifying who made changes and what the changes were.
Source code management systems help streamline the development process and provide a centralized source for all your code.
What is Git and explain the difference between Git and SVN?
Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used to store our repositories on remote servers such as GitHub.
Git | SVN
Git is a decentralized version control tool | SVN is a centralized version control tool
Git keeps the local repo and the full history of the whole project on every developer’s hard drive, so if there is a server outage you can easily recover from a teammate’s local Git repo | SVN relies only on the central server to store all versions of the project files
Push and pull operations are fast | Push and pull operations are slower compared to Git
It belongs to the 3rd generation of version control tools | It belongs to the 2nd generation of version control tools
Client nodes can share the entire repository on their local systems | Version history is stored in the server-side repository
Commits can be done offline too | Commits can be done only online
Commits stay local until you push them to share | Every commit is immediately shared via the central server
Describe branching strategies?
Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.
Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.
Release branching
Once the develop branch has acquired enough features for a release, we can clone that branch to form a release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point; only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release branch gets merged into master and tagged with a version number. In addition, it should be merged back into develop, which may have progressed since the release was initiated.
What are Pull requests?
Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements.
Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.
Linux
What are the default file permissions for a file, and how can I modify them?
The default file permissions are: rw-r--r-- (644)
New files start from mode 666, and the umask is subtracted from it, so the common umask of 022 yields 644. To change the default, set a different umask, e.g.: umask 077
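A quick way to see the umask at work (the file names are arbitrary, and `stat -c` is the GNU/Linux form of the command):

```shell
#!/bin/sh
# Show how the umask shapes the permissions of newly created files.
set -e
cd "$(mktemp -d)"
umask 022                 # common default: new files get 666 - 022
touch open.txt
umask 077                 # stricter: owner-only access
touch private.txt
stat -c %a open.txt       # prints: 644
stat -c %a private.txt    # prints: 600
```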
What is a kernel?
A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.
What is difference between grep -i and grep -v?
grep -i ignores case when matching; grep -v inverts the match, printing only the lines that do not match.
Example: ls | grep -i docker
Dockerfile
docker.tar.gz
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
You won’t see anything with the name docker.tar.gz, because grep -v filters out every line matching “docker” (Dockerfile survives, since -v is still case-sensitive).
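The same behaviour can be reproduced on a fixed list of names, so it does not depend on your directory contents:

```shell
#!/bin/sh
# grep -i matches case-insensitively; grep -v keeps non-matching lines.
names="Desktop
Dockerfile
Documents
Downloads
docker.tar.gz"

printf '%s\n' "$names" | grep -i docker
# prints: Dockerfile
#         docker.tar.gz

printf '%s\n' "$names" | grep -v docker
# prints every name except docker.tar.gz; "Dockerfile" survives
# because plain grep (and -v) is case-sensitive
```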
How can you allocate a particular amount of space to a file?
This is generally used to create swap space on a server. Let’s say I have to create a swap space of 1 GB on the machine below; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
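dd only creates the zero-filled file; making it usable as swap also requires mkswap and swapon, which need root. The sketch below scales the size down to 1 MB and skips the root-only steps so it can run anywhere:

```shell
#!/bin/sh
# Create a zero-filled file with dd and verify its size.
set -e
cd "$(mktemp -d)"
dd if=/dev/zero of=swapfile1 bs=1M count=1 2>/dev/null
stat -c %s swapfile1      # prints: 1048576

# For a real 1 GB swap file (as root):
#   dd if=/dev/zero of=/swapfile1 bs=1G count=1
#   chmod 600 /swapfile1
#   mkswap /swapfile1
#   swapon /swapfile1
```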
What is the concept of sudo in Linux?
Sudo (superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.
What are the checks to be done when a Linux build server suddenly becomes slow?
Perform a check on the following items:
1. System-Level Troubleshooting: Check disk space, RAM, and I/O read-write issues.
2. Application-Level Troubleshooting: Check the application server log file, WebLogic logs, web server log, application log file, and HTTP responses to find any issues in server receive or response times. Check for any memory leaks in applications.
3. Dependent-Services Troubleshooting: Check whether there are any issues with the network, antivirus, firewall, or SMTP server response time.
Jenkins
What is Jenkins?
Jenkins is an open-source continuous integration tool written in Java. It keeps track of a version control system and initiates and monitors a build if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.
What is the difference between Maven, Ant and Jenkins?
Maven and Ant are build technologies, whereas Jenkins is a continuous integration (CI/CD) tool.
What is continuous integration?
When multiple developers or teams are working on different segments of the same web application, we need to perform integration testing by integrating all the modules. To do that, an automated process for each piece of code is performed on a daily basis so that all your code gets tested. This whole process is termed continuous integration.

Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.
What are the advantages of Jenkins?
• Bug tracking is easy at an early stage in the development environment.
• Provides support for a very large number of plugins.
• Iterative improvement to the code; code is basically divided into small sprints.
• Build failures are caught at the integration stage.
• For each code commit, an automatic build report notification gets generated.
• To notify developers about build report success or failure, it can be integrated with an LDAP mail server.
• Achieves continuous integration, agile development, and a test-driven development environment.
• With simple steps, a Maven release project can also be automated.
Which SCM tools does Jenkins supports?
Source code management tools supported by Jenkins are below:
• AccuRev
• CVS
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase
• RTC
I have 50 jobs in the Jenkins dashboard; I want to build all the jobs at once
In Jenkins there is a plugin called “Build after other projects are built”. We can provide job names there, and if one parent job runs, it will automatically run all the other jobs. Or we can use Pipeline jobs.
How can I integrate all the tools with Jenkins?
Navigate to Manage Jenkins and then Global Tool Configuration; there you provide all the details such as the Git URL, Java version, Maven version, path, etc.
How to install Jenkins via Docker?
The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image & run it as a container in Docker using the
following docker run command:
• docker run -u root --rm -d -p 8080:8080 -p 50000:50000 -v jenkinsdata:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
• Proceed to the Post-installation setup wizard
• Accessing the Jenkins/Blue Ocean Docker container:
docker exec -it jenkins-blueocean bash
• Accessing the Jenkins console log through Docker logs:
docker logs <docker-container-name>
• Accessing the Jenkins home directory:
docker exec -it <docker-container-name> bash
Bash – Shell scripting
Write a shell script to add two numbers
echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=`expr $a + $b`
echo "$a + $b = $c"
How to get a file that consists of last 10 lines of the some other file?
tail -n 10 file1 > file2
(Note: redirect to a different file; redirecting a file onto itself truncates it.)
How to check the exit status of the commands?
echo $?
How to get the information from file which consists of the word “GangBoard”?
grep “GangBoard” filename
How to search the files with the name of “GangBoard”?
find / -type f -name “*GangBoard*”
Write a shell script to print only prime numbers?
for n in $(seq 2 100)
do
  is_prime=1
  for ((i=2; i*i<=n; i++))
  do
    if [ $((n % i)) -eq 0 ]; then
      is_prime=0
      break
    fi
  done
  [ $is_prime -eq 1 ] && echo $n
done
How to pass the parameters to the script and how can I get those parameters?
Scriptname.sh parameter1 parameter2
Use $1, $2, … to get individual parameters and $* (or $@) to get all of them.
Monitoring – Refactoring
My application is not coming up for some reason. How can you bring it up?
We need to check the following:
• The network connection
• Whether the web server is receiving users' requests
• The logs
• The process IDs, to verify whether the services are running or not
• Whether the application server is receiving users' requests (check the application server logs and processes)
• Whether a network-level 'connection reset' is happening somewhere
What is multifactor authentication? What is the use of it?
Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.
• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform
I want to copy the artifacts from one location to another location in cloud. How?
Create two S3 buckets, one to use as the source and the other as the destination, set up the required policies, and then copy the artifacts (for example, with the aws s3 sync command or cross-region replication).
How to delete 10 days older log files?
find . -type f -mtime +10 -name "*.log" -exec rm -f {} \; 2>/dev/null
Ansible
What are the Advantages of Ansible?
• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural
What’s the use of Ansible?
Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let’s say we want to deploy one application in 100’s of nodes by just executing one command, then Ansible is the one actually coming into the picture but should have some knowledge on Ansible script to understand or execute the same.
What are the Pros and Cons of Ansible?
Pros:
1. Open source
2. Agentless
3. Improved efficiency, reduced cost
4. Less maintenance
5. Easy-to-understand YAML files
Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management
What is the difference among Chef, Puppet and Ansible?
| | Chef | Puppet | Ansible |
| Interoperability | Works only on Linux/Unix | Works only on Linux/Unix | Supports Windows, but the server should be Linux/Unix |
| Configuration Language | Ruby | Puppet DSL | YAML (Python) |
| Availability | Primary server and backup server | Multi-master architecture | Single active node |
How to access variable names in Ansible?
Using the hostvars method we can access the variables like below:
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
Docker
What is Docker?
Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.
What is Docker image?
Docker image is the source of Docker container. Or in other words, Docker images are used to create containers.
What is a Docker Container?
Docker Container is the running instance of Docker Image
How to stop and restart the Docker container?
To stop the container: docker stop <container-id>
To restart the container: docker restart <container-id>
What platforms does Docker run on?
Docker Engine runs natively on Linux distributions and on cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+
Cloud:
• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace
Note that Docker Engine runs natively only on Linux. On Windows and macOS, Docker Desktop runs containers inside a lightweight Linux VM; this is commonly used for development and testing rather than production.
What are the tools used for docker networking?
Docker provides native network drivers such as bridge, host, overlay, and macvlan; for multi-host networking and orchestration we generally use Docker Swarm or Kubernetes.
What is docker compose?
Let's say you want to run multiple Docker containers. You create a docker-compose file and type the command docker-compose up; it will run all the containers mentioned in the compose file.
How to deploy docker container to aws?
Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services, we can launch our applications.
What is the fundamental disadvantage of Docker containers?
The lifetime of any data inside a container is tied to the container: once a running container is destroyed, you cannot recover any data inside it; the data is lost forever. Persistent storage for data inside containers can be achieved using volumes mounted from an external source like the host machine or an NFS driver.
What are Docker Engine and Docker Compose?
Docker Engine contacts the Docker daemon on the machine and creates the runtime environment and process for any container; Docker Compose links several containers to form a stack, used for creating application stacks like LAMP, WAMP, and XAMPP.
In which different modes can a container be run?
A Docker container can be run in two modes:
Attached: the container runs in the foreground of the system you are running it on, and provides a terminal inside the container when the -t option is used; every log is redirected to the stdout screen.
Detached: this mode is typically used in production, where the container runs as a background process and all output inside the container is redirected to log files
under /var/lib/docker/containers/<container-id>/<container-id>-json.log, which can be viewed with the docker logs command.
What is the output of the docker inspect command?
docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of host- or container-specific information such as the underlying storage driver and log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID…]
Options:
• --format, -f: format the output using the given Go template
• --size, -s: display total file sizes if the type is container
• --type: return JSON for a specified type
What is docker swarm?
A group of machines running Docker Engine can be clustered and maintained as a single system, with resources shared by the containers; the Docker Swarm manager schedules a Docker container on any of the machines in the cluster according to resource availability.
docker swarm init can be used to initialize a swarm cluster, and docker swarm join run with the manager IP from a client joins that node into the swarm cluster.
What are Docker volumes, and what kind of volume should be used to achieve persistent storage?
Docker volumes are filesystem mount points created by the user for a container, and a volume can be shared by multiple containers. Different kinds of volume mounts are available: empty dir, host (bind) mounts, AWS EBS-backed volumes, Azure volumes, Google Cloud, or even NFS and CIFS filesystems. A volume should be mounted on an external drive to achieve persistent storage, because files inside a container live only as long as the container exists; if the container is deleted, the data is lost.
How to version control Docker images?
Docker images can be version controlled using tags; you can assign a tag to any image using the docker tag <image-id> command. If you push to a Docker Hub repository without tagging, the default tag latest is assigned; even if an image tagged latest is already present, the tag is reassigned to the most recently pushed image.
What is difference between docker image and docker container?
Docker image is a read-only template that contains the instructions for a container to start.
Docker container is a runnable instance of a docker image.
What is Application Containerization?
It is a process of OS Level virtualization technique used to deploy the application without launching the entire VM for each application where multiple isolated applications or services can access the same Host and run on the same OS.
What is the syntax for building docker image?
docker build -f <Dockerfile> -t <imagename>:<version> <path-to-context>
What is the command to run a docker image?
docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v
<hostvolume>:<containervolume> imagename:version
How to log into a container?
docker exec -it <container-id> /bin/bash
Git
What does the commit object contain?
A commit object contains the following components:
• A set of files, representing the state of the project at a given point in time
• References to parent commit objects
• A SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the hash)
Explain the difference between git pull and git fetch?
Git pull command basically pulls any new changes or commits from a branch from your central repository and updates your target branch in your local repository.
Git fetch is also used for the same purpose, but it's slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in the target branch, git fetch must be followed by a git merge. The target branch will only be updated after merging it with the fetched branch. To make it easy to remember, use the equation below:
Git pull = git fetch + git merge
How do we know in Git if a branch has already been merged into master?
git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.
What is ‘Staging Area’ or ‘Index’ in GIT?
Before committing, a file must be formatted and reviewed in an intermediate area known as the 'Staging Area' or 'Index'. Files are placed there with: git add <file>
What is Git Stash?
Let’s say you’ve been working on part of your project, things are in a messy state and you want to switch branches for some time to work on something else. The problem is, you don’t want to do a commit of your half-done work just, so you can get back to this point later. The answer to this issue is Git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.
What is Git stash drop?
Git ‘stash drop’ command is basically used to remove the stashed item. It will basically remove the last added stash item by default, and it can also remove a specific item if you include it as an argument.
I have provided an example below:
If you want to remove any particular stash item from the list of stashed items you can use the below commands:
git stash list: It will display the list of stashed items as follows:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log
What is the function of ‘git config’?
Git uses our username to associate commits with an identity. The git config command can be used to change our Git configuration, including your username.
Suppose you want to give a username and email id to associate commit with an identity so that you can know who has made a commit. For that I will use:
git config --global user.name "Your Name": This command will add your username.
git config --global user.email "Your E-mail Address": This command will add your email id.
How can you create a repository in Git?
To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.
What language is used in Git?
Git is written in the C language; since it's written in C, it's very fast and reduces the overhead of runtimes.
What is SubGit?
SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository and uses both Subversion and Git if you like.
How can you clone a Git repository via Jenkins?
First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.
What are the advantages of using Git?
1. Data redundancy and replication
2. High availability
3. Only one .git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used for any sort of project.
What is git add?
It adds the file changes to the staging area
What is git commit?
Records the staged changes (the index) as a new commit in the local repository
What is git push?
Sends the changes to the remote repository
What is git checkout?
Switch branch or restore working files
What is git branch?
Lists, creates, or deletes branches
What is git fetch?
Fetch the latest history from the remote server and updates the local repo
What is git merge?
Joins two or more branches together
What is git pull?
Fetches from and integrates with another repository or a local branch (git fetch + git merge)
What is git rebase?
Process of moving or combining a sequence of commits to a new base commit
What is git revert?
Creates a new commit that undoes the changes of a commit that has already been published and made public
What is git clone?
Clones the git repository and creates a working copy in the local machine
How can I modify the commit message in git?
Use the following command and enter the required message:
git commit --amend
How do you handle merge conflicts in Git?
Follow the steps
1. Create Pull request
2. Modify according to the requirement by sitting with developers
3. Commit the correct file to the branch
4. Merge the current branch with master branch.
What is the Git command to send the modifications to the master branch of your remote repository?
Use the command “git push origin master”
NOSQL
What are the benefits of NoSQL database on RDBMS?
Benefits:
1. Less need for ETL
2. Support for semi-structured and unstructured text
3. Ability to handle change over time
4. Key-value store functionality
5. The ability to scale horizontally
6. Many data structures are provided
7. A choice of vendors
Maven
What is Maven?
Maven is a DevOps tool used for building Java applications which helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.
Numpy
What is Numpy?
There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing, providing a powerful n-dimensional array object and tools to integrate with C, C++, and so on. NumPy is a package library for Python, adding support for large multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions covering linear algebra, statistics, polynomials, financial functions, sorting and searching, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.
Why is python numpy better than lists?
Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.
Describe the map function in Python?
The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.
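For example, a minimal sketch of map in action:

```python
nums = [1, 2, 3, 4]
# map() applies the function to every element; list() materialises the result
squares = list(map(lambda x: x * x, nums))
print(squares)  # [1, 4, 9, 16]
```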
How to generate an array of 100 random numbers sampled from a standard normal distribution using Numpy?
>>> import numpy as np
>>> np.random.randn(100)
This will create 100 random numbers generated from a standard normal distribution with mean 0 and standard deviation 1.
How to count the occurrence of each value in a numpy array?
Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
Output: [4 1 1 1 2 3 0 0 0 2]
The argument to bincount() must consist of booleans or non-negative integers; negative integers are invalid.
Does Numpy Support Nan?
nan, short for "not a number", is a special floating-point value defined by the IEEE 754 specification. Python NumPy supports nan, but the definition of nan is more system dependent, and some systems, like older Cray and VAX computers, don't have all-round support for it.
What does ravel() function in numpy do?
It flattens a multi-dimensional numpy array into a one-dimensional array (returning a view where possible).
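A small sketch of ravel() flattening a 2-D array:

```python
import numpy as np

arr = np.array([[1, 2], [3, 4]])
flat = np.ravel(arr)  # flattens to 1-D, returning a view where possible
print(flat)  # [1 2 3 4]
```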
How to remove from one array those items that exist in another?
>>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From 'a' remove all of 'b'
>>> np.setdiff1d(a,b)
# Output (sorted unique values):
>>> array([2, 3, 5])
How to reverse a numpy array in the most efficient way?
>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]
How to calculate percentiles when using numpy?
>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)
Output: 37.400000000000006
What Is The Difference Between Numpy And Scipy?
NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.
What Is The Preferred Way To Check For An Empty (zero Element) Array?
For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size
What Is The Difference Between Matrices And Arrays?
Matrices can only be two-dimensional, whereas arrays can have any number of
dimensions
How can you find the indices of an array where a condition is true?
Given an array arr, the condition arr > 3 returns a boolean array; since True is interpreted as 1 and False as 0, passing it to np.nonzero() (or np.where()) gives the indices where the condition is true.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
array([[ True, True, True], [ True, True, True], [False, False, False]])
>>> np.nonzero(arr > 3)
(array([0, 0, 0, 1, 1, 1]), array([0, 1, 2, 0, 1, 2]))
How to find the maximum and minimum value of a given flattened array?
>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)
Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis.
>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.ptp(arr, 1)
Find median of a numpy flattened array
>>> import numpy as np
>>> arr = np.arange(20).reshape((4, 5))
>>> res = np.median(arr)
Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis
>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)
Calculate covariance matrix between two numpy arrays
>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)
Compute product-moment correlation coefficients of two given numpy arrays
>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)
Develop a numpy program to compute the histogram of nums against the bins
>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)
Get the powers of an array values element-wise
>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)
Write a NumPy program to get true division of the element-wise array inputs
>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)
Pandas
What is a series in pandas?
A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.
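A short sketch of building a Series from a list, a tuple, and a dictionary:

```python
import pandas as pd

# a Series can be built from a list, a tuple, or a dict
from_list = pd.Series([1, 2, 3])
from_tuple = pd.Series((4, 5, 6))
from_dict = pd.Series({'a': 1, 'b': 2})  # dict keys become the index
```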
What features make Pandas such a reliable option to store tabular data?
Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.
What is re-indexing in pandas?
Reindexing is used to conform DataFrame to a new index with optional filling logic. It places NA/NaN in that location where the values are not present in the previous index. It returns a new object unless the new index is produced as equivalent to the current one, and the value of copy becomes False. It is used to change the index of the rows and columns of the DataFrame.
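A minimal example of reindex filling a missing label with NaN (the values here are illustrative):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
# 'd' was not in the original index, so reindex fills it with NaN
r = s.reindex(['c', 'a', 'd'])
```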
How will you create a series from dict in Pandas?
A Series is defined as a one-dimensional array that is capable of storing various data
types.
import pandas as pd
info = {'x' : 0., 'y' : 1., 'z' : 2.}
a = pd.Series(info)
How can we create a copy of the series in Pandas?
Use the pandas.Series.copy method on an instance:
import pandas as pd
s = pd.Series([1, 2, 3])
s_copy = s.copy(deep=True)
What is groupby in Pandas?
GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters and makes the task of splitting the data quick and
easy.
What is vectorization in Pandas?
Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.
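A small sketch comparing a vectorized operation with the equivalent Python-level loop:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(1000))

# vectorized: one call operates on the whole Series at once
total = (s * 2).sum()

# equivalent Python-level loop (much slower on large data)
loop_total = sum(x * 2 for x in s)

assert total == loop_total
```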
Different types of Data Structures in Pandas
Pandas provide two data structures, which are supported by the pandas library, Series,
and DataFrames. Both of these data structures are built on top of the NumPy.
What Is Time Series In pandas
A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.
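For instance, a minimal time-series sketch using a DatetimeIndex and a monthly resample (the dates are illustrative):

```python
import pandas as pd

# 60 consecutive days starting 2024-01-01, downsampled to monthly means
idx = pd.date_range('2024-01-01', periods=60, freq='D')
ts = pd.Series(range(60), index=idx)
monthly = ts.resample('MS').mean()  # 'MS' = month-start frequency
```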
How to convert pandas dataframe to numpy array?
The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array and the copy ensures the
returned value is not a view on another array.
Write a Pandas program to get the first 5 rows of a given DataFrame
>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data , index=labels)
>>> df.iloc[:5]
Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data.
>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])
Write a Python program to convert a Pandas Series to a Python list and print its type.
>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())
Develop a Pandas program to add, subtract, multiply and divide two Pandas Series.
>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2
Develop a Pandas program to compare the elements of the two Pandas Series.
>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2
Develop a Pandas program to change the data type of a given column or a Series.
>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2
Write a Pandas program to convert a Series of lists to one Series.
>>> import pandas as pd
>>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)
Write a Pandas program to create a subset of a given series based on value and condition.
>>> import pandas as pd
>>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s
Develop a Pandas code to alter the order of index in a given series
>>> import pandas as pd
>>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index = ['B', 'A', 'C', 'D', 'E'])
Write a Pandas code to get the items of a given series not present in another given series.
>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result
What is the difference between the two data access methods df['Name'] and df.loc[:, 'Name']?
First one is a view of the original dataframe and second one is a copy of the original dataframe.
Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.
>>> import pandas as pd
>>> import numpy as np
>>> np.random.RandomState(100)
>>> num_series = pd.Series(np.random.randint(1, 5, [15]))
>>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'
Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.
>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5==0)
How will you add a column to a pandas DataFrame?
# importing the pandas library
>>> import pandas as pd
>>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20,40,60], index=['a','b','c'])
How to iterate over a Pandas DataFrame?
You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.
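A minimal sketch of iterrows (the column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'name': ['Ann', 'Bob'], 'score': [85, 92]})

# iterrows() yields (index, row) pairs, where each row is a Series
rows = []
for idx, row in df.iterrows():
    rows.append((idx, row['name'], row['score']))
print(rows)
```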
Python
What type of language is python? Programming or scripting?
Python is capable of scripting, but in general sense, it is considered as a general-purpose
programming language.
Is python case sensitive?
Yes, python is a case sensitive language.
What is a lambda function in python?
An anonymous function is known as a lambda function. This function can have any
number of parameters but can have just one statement.
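For example:

```python
# a lambda: any number of parameters, exactly one expression
add = lambda x, y: x + y
print(add(2, 3))  # 5
```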
What is the difference between range and xrange in python?
xrange and range are exactly the same in terms of functionality. The only difference is that range returns a Python list object and xrange returns an xrange object. (This applies to Python 2; in Python 3, range behaves like xrange.)
What are docstrings in python?
Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.
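A short sketch showing that the docstring is attached to the function object:

```python
def greet(name):
    """Return a greeting for the given name."""
    return f"Hello, {name}"

# the docstring is accessible via the __doc__ attribute
print(greet.__doc__)
```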
Whenever Python exits, why isn’t all the memory deallocated?
When Python exits, modules and objects that have circular references to other objects, or objects referenced from the global namespace, are not always de-allocated or freed. It is also impossible to de-allocate those portions of memory that are reserved by the C library. On exit, Python tries to de-allocate/destroy every other object, because it has its own efficient clean-up mechanism.
What does this mean: *args, **kwargs? And why would we use it?
We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
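A minimal sketch (the function name is illustrative):

```python
def show(*args, **kwargs):
    # args collects positional arguments as a tuple,
    # kwargs collects keyword arguments as a dict
    return args, kwargs

a, k = show(1, 2, mode='fast')
print(a)  # (1, 2)
print(k)  # {'mode': 'fast'}
```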
What is the difference between deep and shallow copy?
Shallow copy is used when a new instance type gets created and it keeps the values that are copied in the new instance.
Shallow copy is used to copy the reference pointers just like it copies the values.
Deep copy is used to store the values that are already copied. Deep copy doesn’t copy the reference pointers to the objects. It makes the reference to an object and the new object that is pointed by some other object gets stored.
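The difference can be seen with the copy module:

```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)    # inner lists are shared with the original
deep = copy.deepcopy(original)   # inner lists are duplicated

original[0].append(99)
print(shallow[0])  # [1, 2, 99] - shares the mutated inner list
print(deep[0])     # [1, 2]    - unaffected
```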
Define encapsulation in Python?
Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.
Does python make use of access specifiers?
Python does not deprive access to an instance variable or function. Python lays down the concept of prefixing the name of the variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.
What are the generators in Python?
Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.
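For example, a minimal generator function:

```python
def countdown(n):
    # the yield expression makes this function a generator
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```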
Write a Python script to Python to find palindrome of a sequence
a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")
How will you remove the duplicate elements from the given list?
The set is another type available in Python. It doesn't allow duplicates and provides some useful functions to perform set operations like union, difference, etc.
>>> list(set(a))
Does Python allow arguments Pass by Value or Pass by Reference?
Neither the arguments are Pass by Value nor does Python supports Pass by reference.
Instead, they are Pass by assignment. The parameter which you pass is originally a reference to the object not the reference to a fixed memory location. But the reference is
passed by value. Additionally, some data types like strings and tuples are immutable whereas others are mutable.
What is slicing in Python?
Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.
Why is the “pass” keyword used in Python?
The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.
What are decorators in Python?
Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion
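A small sketch of a decorator (the names are illustrative):

```python
def shout(func):
    # wraps func without modifying its definition
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

print(greet("sam"))  # HELLO, SAM
```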
What is the key difference between lists and tuples in python?
The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.
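The difference in one sketch: item assignment works on a list but raises TypeError on a tuple:

```python
lst = [1, 2, 3]
lst[0] = 99            # fine: lists are mutable
print(lst)             # [99, 2, 3]

tup = (1, 2, 3)
try:
    tup[0] = 99        # tuples are immutable
except TypeError:
    print("tuples do not support item assignment")
```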
What is self in Python?
self is not a reserved keyword in Python but a strong convention: it is the name given to the first parameter of instance methods and refers to the instance (object) of the class. Unlike in Java, where the instance reference is implicit, in Python it must be listed explicitly as the first parameter. It helps distinguish the methods and attributes of a class from its local variables.
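An illustrative sketch (hypothetical Counter class) of how self separates an instance attribute from a plain local variable:

```python
class Counter:
    def __init__(self):
        self.count = 0   # instance attribute, lives on the object

    def increment(self):
        count = 100      # plain local variable; does not touch self.count
        self.count += 1

c = Counter()
c.increment()
print(c.count)  # 1
```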
What is PYTHONPATH in Python?
PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.
What is the difference between .py and .pyc files?
.py files contain the source code of a program, whereas a .pyc file contains the bytecode of your program. Bytecode is produced by compiling the .py file (source code). .pyc files are not created for all the files you run; they are only created for the files you import.
What is namespace in Python?
In Python, every name introduced has a place where it lives and can be looked up. This is known as a namespace. It is like a box in which a variable name is mapped to an object. Whenever a variable is looked up, this box is searched to find the corresponding object.
What is pickling and unpickling?
The pickle module accepts any Python object and converts it into a byte stream, which can be dumped into a file using the dump function; this process is called pickling. The process of retrieving the original Python objects from the stored byte stream is called unpickling.
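A minimal sketch with the standard pickle module: dumps()/loads() work on byte strings in memory, while dump()/load() work on file objects.

```python
import pickle

data = {"name": "alice", "scores": [90, 85]}

# pickling: Python object -> byte stream
blob = pickle.dumps(data)

# unpickling: byte stream -> an equivalent Python object
restored = pickle.loads(blob)
print(restored == data)  # True
```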
How is Python interpreted?
Python is an interpreted language. A Python program runs directly from the source code: the source code written by the programmer is converted into an intermediate bytecode, which is then translated into the machine language that is executed.
Jupyter Notebook
What is the main use of a Jupyter notebook?
Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.
How do I increase the cell width of the Jupyter/ipython notebook in my browser?
>>> from IPython.core.display import display, HTML
>>> display(HTML("<style>.container { width:100% !important; }</style>"))
How do I convert an IPython Notebook into a Python file via command line?
>> jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb
How to measure execution time in a jupyter notebook?
>> %%time is an inbuilt magic command
How to run a jupyter notebook from the command line?
>> jupyter nbconvert --to notebook --execute nb.ipynb
How to make inline plots larger in jupyter notebooks?
Use the figsize argument.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')
How to display multiple images in a jupyter notebook?
>>> for ima in images:
...     plt.figure()
...     plt.imshow(ima)
Why is the Jupyter notebook interactive code and data exploration friendly?
The ipywidgets package provides many common user interface controls for exploring code and data interactively.
What is the default formatting option in jupyter notebook?
The default formatting option is Markdown.
What are kernel wrappers in jupyter?
Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.
What are the advantages of custom magic commands?
Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.
Is the jupyter architecture language dependent?
No. It is language independent
Which tools allow jupyter notebooks to easily convert to pdf and html?
Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.
What is a major disadvantage of a Jupyter notebook?
It is very hard to run long asynchronous tasks, and it is less secure.
In which domain is the jupyter notebook widely used?
It is mainly used for data analysis and machine learning related tasks.
What are alternatives to jupyter notebook?
PyCharm interact, VS Code Python Interactive etc.
Where can you make configuration changes to the jupyter notebook?
In the config file located at ~/.ipython/profile_default/ipython_config.py (for IPython) or ~/.jupyter/jupyter_notebook_config.py (for the Jupyter notebook server)
Which magic command is used to run python code from jupyter notebook?
%run can execute python code from .py files
How to pass variables across the notebooks in Jupyter?
The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to different notebook'
>>> %store data
# Stored 'data' (str)
# In new notebook
>>> %store -r data
>>> print(data)
Export the contents of a cell/Show the contents of an external script
Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.
What inbuilt tool we use for debugging python code in a jupyter notebook?
Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.
How to make high resolution plots in a jupyter notebook?
>> %config InlineBackend.figure_format = 'retina'
How can one use latex in a jupyter notebook?
When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.
What is a jupyter lab?
It is a next generation user interface for conventional jupyter notebooks. Users can drag and drop cells, arrange code workspace and live previews. It’s still in the early stage of development.
What is the biggest limitation for a Jupyter notebook?
Code versioning, management, and debugging are not scalable in the current Jupyter notebook.
Cloud Computing
Which are the different layers that define cloud architecture?
Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus
Explain Cloud Service Models?
Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (Daas)
What are Hybrid clouds?
Hybrid clouds are made up of both public clouds and private clouds. A hybrid cloud is often preferred over either alone because it applies the most robust approach to implementing cloud architecture.
The hybrid cloud has the features and performance of both private and public clouds. It has an important feature whereby the cloud can be created by one organization and control of it can be given to some other organization.
Explain Platform as a Service (Paas)?
It is also a layer in cloud architecture. Platform as a Service provides complete virtualization of the infrastructure layer, making it look like a single server and keeping it invisible to the outside world.
What is the difference in cloud computing and Mobile Cloud computing?
Mobile cloud computing uses the same concept as cloud computing: applications run on a remote cloud server rather than on the mobile device itself, and the device is used to access and manage the cloud storage and services. Most tasks can therefore be performed from the mobile device.
What are the security aspects provided with the cloud?
There are 3 types of Cloud Computing Security:
● Identity Management: It authorizes the application services.
● Access Control: The user needs permission so that they can control the access of another user who is entering the cloud environment.
● Authentication and Authorization: Allows only authorized and authenticated users to access the data and applications.
What are system integrators in cloud computing?
System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly.
A person or a company which specializes in system integration is called a system integrator.
What is the usage of utility computing?
Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges them for specific usage rather than a flat rate
Utility computing is a plug-in managed by an organization which decides what type of services has to be deployed from the cloud. It facilitates users to pay only for what they use.
What are some large cloud providers and databases?
Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL
Explain the difference between cloud and traditional data centers.
In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software issues. So, not only is the initial cost higher, but the maintenance cost is also a problem.
The cloud, on the other hand, can be scaled when there is an increase in demand, and most of the expenditure goes toward maintenance of the data centers; these issues are not faced in cloud computing.
What is hypervisor in Cloud Computing?
It is a virtual machine monitor that can logically manage resources for virtual machines. It allocates, partitions, isolates, or changes resources under the virtualization hypervisor.
A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.
Define what MultiCloud is?
Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.
What is a multi-cloud strategy?
The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.
What is meant by Edge Computing, and how is it related to the cloud?
Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.
What are the disadvantages of the SaaS cloud computing layer?
1) Security
Because data is stored in the cloud, security may be an issue for some users; cloud computing is not necessarily more secure than an in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility that there may be greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications whose demand response time is in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the other SaaS application.
What is IaaS in Cloud Computing?
IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this model, providers offer IT infrastructure such as servers, processing, storage, virtual machines, and other resources. Customers can access these resources easily over the internet using an on-demand, pay-as-you-go model.
Explain what is the use of “EUCALYPTUS” in cloud computing?
EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform. With the help of EUCALYPTUS, public, private, and hybrid clouds can be built. It can turn your own data center into a private cloud and allows you to extend its functionality to many other organizations.
When you add a software stack, such as an operating system and applications, to the service, the model shifts to the SaaS (Software as a Service) model. This is because Microsoft's Windows Azure Platform is best represented as presently using a SaaS model.
Name the most refined and restrictive service model?
The most refined and restrictive service model is PaaS. When the service requires the consumer to use an entire hardware/software/application stack, it is using the most refined and restrictive service model.
Name all the kinds of virtualization that are also characteristics of cloud computing?
Storage, Application, CPU. To enable these characteristics, resources must be highly configurable and flexible.
What Are Main Features Of Cloud Services?
Some important features of the cloud service are given as follows:
• Accessing and managing the commercial software.
• Centralizing the activities of management of software in the Web environment.
• Developing applications that are capable of managing several clients.
• Centralizing the updating of software, which eliminates the need to download upgrades.
What Are The Advantages Of Cloud Services?
Some of the advantages of cloud service are given as follows:
• Helps in the utilization of investment in the corporate sector; and therefore, is cost saving.
• Helps in developing scalable and robust applications. Previously, scaling took months; now it takes much less time.
• Helps in saving time in terms of deployment and maintenance.
Mention The Basic Components Of A Server Computer In Cloud Computing?
The hardware components of a server computer in cloud computing match those used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, and power supply.
What are the advantages of auto-scaling?
Following are the advantages of autoscaling
● Offers fault tolerance
● Better availability
● Better cost management
Azure Cloud

Which Services Are Provided By the Windows Azure Operating System?
Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management
Which service in Azure is used to manage resources in Azure?
Azure Resource Manager is used to “manage” infrastructures which involve a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON script.
Which web applications can be deployed with Azure?
Microsoft also has released SDKs for both Java and Ruby to allow applications written in those languages to place calls to the Azure Service Platform API to the AppFabric Service.
What are Roles in Azure and why do we use them?
Roles are nothing but servers in layman's terms. These servers are managed, load-balanced, Platform-as-a-Service virtual machines that work together to achieve a common goal.
There are 3 types of roles in Microsoft Azure:
● Web Role
● Worker Role
● VM Role
Let’s discuss each of these roles in detail:
● Web Role – A web role is basically used to deploy a website using languages supported by the IIS platform, like PHP and .NET. It is configured and customized to run web applications.
● Worker Role – A worker role is more of a helper to the web role. It is used to execute background processes, unlike the web role, which is used to deploy the website.
● VM Role – The VM role is used by a user to schedule tasks and other Windows services. This role can be used to customize the machines on which the web and worker roles are running.
What is Azure as PaaS?
PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.
What are Break-fix issues in Microsoft Azure?
In Microsoft Azure, technical problems are called break-fix issues. This term is used when “work is involved” in supporting a technology that fails in the normal course of its function.
Explain Diagnostics in Windows Azure
Windows Azure Diagnostic offers the facility to store diagnostic data. In Azure, some diagnostics data is stored in the table, while some are stored in a blob. The diagnostic monitor runs in
Windows Azure as well as in the computer’s emulator for collecting data for a role instance.
State the difference between verbose and minimal monitoring.
Verbose monitoring collects metrics based on performance. It allows a close analysis of the data fed during application processing.
Minimal monitoring, on the other hand, is the default configuration method. It makes use of performance counters gathered from the operating system of the host.
What is the main difference between the repository and the powerhouse server?
The main difference between them is that repository servers maintain the integrity, consistency, and uniformity of data, while the powerhouse server governs the integration of different aspects of the database repository.
Explain command task in Microsoft Azure
A command task is an operational window that sets off the flow of one or more commands while the system is running.
What is the difference between Azure Service Bus Queues and Storage Queues?
Two types of queue mechanisms are supported by Azure: Storage queues and Service Bus queues.
Storage queues: These are part of the Azure storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface. They provide persistent and reliable messaging within and between services.
Service Bus queues: These are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.
Explain Azure Service Fabric.
Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment and management of highly scalable and customizable applications.
The applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.
Define the Azure Redis Cache.
Azure Redis Cache is an open-source and in-memory Redis cache that helps web applications to fetch data from a backend data source into cache and server web pages from the cache to enhance the application performance. It provides a powerful and secure way to cache the application’s data in the Azure cloud.
How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?
TWO. And if we do so, the role would have external connectivity at least 99.95% of the time.
What are the options to manage session state in Windows Azure?
● Windows Azure Caching
● SQL Azure
● Azure Table
What is cspack?
It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.
What is csrun?
It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.
How to design applications to handle connection failure in Windows Azure?
The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
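The three delay strategies described above can be sketched in Python (hypothetical helper names; the actual Transient Fault Handling Application Block is a .NET library, so this is only an illustration of the math):

```python
import random

def fixed_interval(attempt, base=1.0):
    # same delay on every retry
    return base

def incremental_interval(attempt, base=1.0, increment=2.0):
    # delay grows by a constant amount per attempt
    return base + attempt * increment

def exponential_backoff(attempt, base=1.0, jitter=0.1):
    # delay doubles each attempt, plus some random variation
    delay = base * (2 ** attempt)
    return delay + random.uniform(0, jitter * delay)

for attempt in range(4):
    print(attempt,
          fixed_interval(attempt),
          incremental_interval(attempt),
          round(exponential_backoff(attempt), 2))
```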
What is Windows Azure Diagnostics?
Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.
What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?
Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports dead-lettering and queuing as well as publish/subscribe, Web service remoting, and integration patterns.
What is the use of Azure Active Directory?
Azure Active Directory is an identity and access management system, much like on-premises Active Directory. It allows you to grant your employees access to specific products and services within the network.
Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?
This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.
What are virtual machine scale sets in Azure?
Virtual machine scale sets are Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required. So it’s easier to build large-scale services that target big compute, big data, and containerized workloads.
Are data disks supported within scale sets?
Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include:
● Azure files (SMB shared drives)
● OS drive
● Temp drive (local, not backed by Azure Storage)
● Azure data service (for example, Azure tables, Azure blobs)
● External data service (for example, remote database)
What is the difference between the Windows Azure Platform and Windows Azure?
The former is Microsoft’s PaaS offering including Windows Azure, SQL Azure, and AppFabric; while the latter is part of the offering and Microsoft’s cloud OS.
What are the three main components of the Windows Azure Platform?
Compute, Storage and AppFabric.
Can you move a resource from one group to another?
Yes, you can. A resource can be moved among resource groups.
How many resource groups a subscription can have?
A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.
Explain the fault domain.
This is one of the common Azure interview questions. A fault domain is a logical working domain in which the underlying hardware shares a common power source and network switch. This means that when VMs are created, Azure distributes them across fault domains, limiting the potential impact of hardware failure, power interruption, or network outages.

AWS Cloud

Explain what S3 is?
S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time, from anywhere on the web. For S3, the payment model is “pay as you go.”
What is AMI?
AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.
Mention what the relationship between an instance and AMI is?
From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and you can interact with it as you would with any computer.
How many buckets can you create in AWS by default?
By default, you can create up to 100 buckets in each of your AWS accounts.
Explain can you vertically scale an Amazon instance? How?
Yes, you can vertically scale an Amazon instance. To do so:
● Spin up a new, larger instance than the one you are currently running
● Pause that instance and detach the root EBS volume from the server and discard it
● Then stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● And start it again
Explain what T2 instances are?
T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.
In VPC with private and public subnets, database servers should ideally be launched into which subnet?
With private and public subnets in VPC, database servers should ideally launch into private subnets.
Mention what the security best practices for Amazon EC2 are?
For Amazon EC2 security best practices, follow these steps:
● Use AWS identity and access management to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up permissions that you require
● Disable password-based logins, for example, for instances launched from your AMI
Is the property of broadcast or multicast supported by Amazon VPC?
No, currently Amazon VPC does not provide support for broadcast or multicast.
How many Elastic IPs does AWS allow you to create?
Five VPC Elastic IP addresses are allowed per AWS account.
Explain default storage class in S3
The default storage class is S3 Standard, for frequently accessed data.
What are the Roles in AWS?
Roles are used to provide permissions to entities which you can trust within your AWS account.
Roles are very similar to users. However, with roles, you do not need to create a username and password to work with the resources.
What are the edge locations?
An edge location is the area where content is cached. When a user tries to access content, it is automatically looked up in the nearest edge location.
Explain snowball?
Snowball is a data transport option. It uses secure appliances to transfer large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another. It helps you reduce networking costs.
What is a redshift?
Redshift is a big data warehouse product. It is a fast, powerful, fully managed data warehouse service in the cloud.
What is meant by subnet?
A large section of IP Address divided into chunks is known as subnets.
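For example, Python's standard ipaddress module can split an address range into subnet chunks (illustrative sketch with a hypothetical address range):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/16")

# split the /16 range into /24 chunks (2**(24-16) = 256 subnets)
subnets = list(net.subnets(new_prefix=24))
print(len(subnets))   # 256
print(subnets[0])     # 10.0.0.0/24
```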
Can you establish a Peering connection to a VPC in a different region?
Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.
What is SQS?
Simple Queue Service, also known as SQS, is a distributed queuing service which acts as a mediator between two controllers.
How many subnets can you have per VPC?
You can have 200 subnets per VPC.
What is Amazon EMR?
EMR is a managed cluster platform that helps you process and analyze vast amounts of data. Running Apache Hadoop and Apache Spark on Amazon Web Services lets you investigate large amounts of data, and you can prepare data for analytics and business intelligence workloads using Apache Hive and other relevant open-source tools.
What is boot time taken for the instance stored backed AMI?
The boot time for an Amazon instance store-backed AMI is less than 5 minutes.
Do you need an internet gateway to use peering connections?
No, an internet gateway is not needed to use VPC peering connections; peering traffic stays on the AWS network.
How to connect an EBS volume to multiple instances?
You cannot attach an EBS volume to multiple instances. However, you can attach multiple EBS volumes to a single instance.
What are the different types of Load Balancer in AWS services?
Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer
In which situation you will select provisioned IOPS over standard RDS storage?
You should select provisioned IOPS storage over standard RDS storage if you want to perform batch-related workloads.
What are the important features of Amazon cloud search?
Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Full-text search
● Autocomplete suggestions
What is AWS CDK?
AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
• Create and provision AWS infrastructure deployments predictably and repeatedly.
• Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling.
• Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
• Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.
What are best practices for controlling access to AWS CodeCommit?
– Create your own policy
– Provide temporary access credentials to access your repo
* Typically done via a separate AWS account for IAM and separate accounts for dev/staging/prod
* Federated access
* Multi-factor authentication
What is AWS CodeBuild?
AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.
1- Provide AWS CodeBuild with a build project. A build project file contains information about where to get the source code, the build environment, and how to build the code. The most important component is the BuildSpec file.
2- AWS CodeBuild creates the build environment. A build environment is a combination of OS, programming language runtime, and other tools needed to build.
3- AWS CodeBuild downloads the source code into the build environment and uses the BuildSpec file to run a build. This code can be from any source provider; for example, GitHub repository, Amazon S3 input bucket, Bitbucket repository, or AWS CodeCommit repository.
4- Build artifacts produced are uploaded into an Amazon S3 bucket.
5- The build environment sends a notification about the build status.
6- While the build is running, the build environment sends information to Amazon CloudWatch Logs.
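The buildspec file mentioned in step 1 drives the build. A minimal buildspec.yml sketch; the runtime version and commands are illustrative assumptions, not taken from the source:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.11                        # language runtime for the build environment
  build:
    commands:
      - pip install -r requirements.txt   # install dependencies
      - pytest                            # run the test suite
artifacts:
  files:
    - '**/*'                              # artifacts uploaded to the S3 bucket (step 4)
```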
What is AWS CodeDeploy?
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.
With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.
You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.
You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.
AWS CodeDeploy deploys updates in revisions, so if there is an issue during deployment, you can easily roll back and deploy a previous revision.
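As an illustration of the AppSpec file described above, here is a minimal appspec.yml sketch for an EC2/on-premises deployment; the destination path and script names are hypothetical:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp           # where the revision is copied
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh    # run before the new revision is installed
      timeout: 60
  AfterInstall:
    - location: scripts/start_server.sh   # run after the files are in place
      timeout: 60
```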
What is AWS CodeCommit?
AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.
What is AWS OpsWorks?
AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.
Chef and Puppet enable you to use code to automate your configurations.
AWS OpsWorks for Puppet Enterprise: a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet, for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up the server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining their infrastructure, and it gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.
AWS OpsWorks for Chef Automate: offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands.
AWS OpsWorks Stacks: with OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages, languages, or frameworks, and configuring software.

Google Cloud Platform

What are the main advantages of using Google Cloud Platform?
Google Cloud Platform is a platform that provides its users access to first-class cloud services and features. It is gaining popularity among cloud professionals and users for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others –
● GCP offers better pricing deals than other cloud service providers.
● Google Cloud servers allow you to work from anywhere, with access to your information and data.
● For hosting cloud services, GCP delivers improved overall performance and service.
● Google Cloud provides server and security updates quickly and efficiently.
● The security of Google Cloud Platform is exemplary; the platform and its networks are secured and encrypted with multiple security measures.
If you are going for a Google Cloud interview, you should prepare yourself with solid knowledge of the Google Cloud Platform.
Why should you opt for Google Cloud Hosting?
The reason for opting for Google Cloud Hosting is the advantages it offers. Here are the advantages of choosing Google Cloud Hosting:
● Availability of better pricing plans
● Benefits of live migration of the machines
● Enhanced performance and execution
● Commitment to Constant development and expansion
● The private network provides efficiency and maximum uptime
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability
What are the libraries and tools for cloud storage on GCP?
At the core level, the XML API and JSON API are available for cloud storage on Google Cloud Platform. Along with these, Google provides the following options to interact with cloud storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage client libraries, which provide programming support for various languages, including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface for Cloud Storage
There are also many third-party libraries and tools, such as the Boto library.
What do you know about Google Compute Engine?
Google Compute Engine is a fundamental component of the Google Cloud Platform.
Google Compute Engine is an IaaS product that offers self-managed, flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines running on KVM, with local and durable storage options.
It also includes a REST-based API for control and configuration purposes. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery to extend its computational ability, enabling more sophisticated and complex applications.
How are the Google Compute Engine and Google App Engine related?
Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is the IaaS product whereas Google App Engine is a PaaS product of Google.
Google App Engine is generally used to run web applications, mobile backends, and line-of-business applications. If you want more control over the underlying infrastructure, Compute Engine is a perfect choice. For instance, you can use Compute Engine to implement customized business logic or when you need to run your own storage system.
How does the pricing model work in GCP cloud?
While working on Google Cloud Platform, the user is charged by Google Compute Engine on the basis of compute instances, network use, and storage. Google Cloud bills virtual machines per second, with a minimum charge of one minute. Storage cost is then charged according to the amount of data you store.
The cost of the network is calculated as per the amount of data that has been transferred between the virtual machine instances communicating with each other over the network.
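The billing rule above (per-second billing with a one-minute minimum) can be sketched in Python; the hourly rate used here is a made-up figure, not a real Google price:

```python
def compute_cost(seconds_used: int, price_per_hour: float) -> float:
    """Estimate a VM's compute cost under per-second billing
    with a minimum charge of one minute."""
    billable_seconds = max(seconds_used, 60)          # one-minute minimum
    return billable_seconds * (price_per_hour / 3600)  # hourly rate -> per second

# A 45-second run is billed the same as a full minute:
print(compute_cost(45, 0.036) == compute_cost(60, 0.036))  # → True
```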
What are the different methods for the authentication of Google Compute Engine API?
This is one of the popular Google Cloud architect interview questions which can be answered as follows. There are different methods for the authentication of Google Compute Engine API:
– Using OAuth 2.0
– Through client library
– Directly with an access token
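The third method, calling the API directly with an access token, amounts to sending a bearer token in the Authorization header. A minimal standard-library Python sketch; the project, zone, and token are placeholders, and a real token would come from OAuth 2.0 or a client library:

```python
import urllib.request

def authorized_request(url: str, access_token: str) -> urllib.request.Request:
    """Build a Compute Engine API request authenticated with a bearer token."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {access_token}")
    return req

req = authorized_request(
    "https://compute.googleapis.com/compute/v1/"
    "projects/my-project/zones/us-central1-a/instances",
    "ya29.example-token",   # placeholder, not a real credential
)
print(req.get_header("Authorization"))  # → Bearer ya29.example-token
```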
List some Database services by GCP.
There are many Google Cloud database services that help enterprises manage their data.
● Bare Metal Solution enables you to migrate (lift and shift) specialized database workloads to Google Cloud.
● Cloud SQL is a fully managed, reliable, and integrated relational database service for MySQL, SQL Server, and PostgreSQL. It reduces maintenance cost and helps ensure business continuity.
● Cloud Spanner
● Cloud Bigtable
● Firestore
● Firebase Realtime Database
● Memorystore
● Google Cloud Partner Services
● For more database products, refer to Google Cloud Databases
● For more database solutions, refer to Google Cloud database solutions
What are the different Network services by GCP?
Google Cloud provides many networking services and technologies that make it easy to scale and manage your network.
● Hybrid Connectivity helps connect your infrastructure to Google Cloud
● Virtual Private Cloud (VPC) manages networking for your resources
● Cloud DNS is a highly available, global domain name system (DNS) network.
● Service Directory provides a service-centric network solution.
● Cloud Load Balancing
● Cloud CDN
● Cloud Armor
● Cloud NAT
● Network Telemetry
● VPC Service Controls
● Network Intelligence Center
● Network Service Tiers
● For more networking products, refer to Google Cloud Networking
List some Data Analytics services by GCP.
Google Cloud offers various data analytics services.
● BigQuery is a serverless, highly scalable, and cost-effective multi-cloud data warehouse designed for business agility.
● Looker
● Dataproc is a service for running Apache Spark and Apache Hadoop clusters. It makes open-source data and analytics processing easy, fast, and more secure in the cloud.
● Dataflow
● Pub/Sub
● Cloud Data Fusion
● Data Catalog
● Cloud Composer
● Google Data Studio
● Dataprep
● Cloud Life Sciences enables the life sciences community to manage, process, and transform biomedical data at scale.
● Google Marketing Platform combines your advertising and analytics to help you achieve better marketing results, deeper insights, and quality customer connections. It is not an official Google Cloud product and comes under separate terms of service.
● For more Google Cloud analytics services, visit Data Analytics
Explain Google BigQuery in Google Cloud Platform
A traditional data warehouse requires hardware setup and replacement; Google BigQuery serves as the replacement, eliminating that overhead. In addition, BigQuery organizes table data into units called datasets.
Explain Auto-scaling in Google cloud computing
Without human intervention, you can automatically provision and launch new instances in Google Cloud. Auto-scaling is triggered depending on various metrics and load.
Describe Hypervisor in Google Cloud Platform
A hypervisor, also known as a virtual machine monitor (VMM), is computer software or hardware used to create and run virtual machines (a virtual machine is also called a guest machine). The hypervisor runs on a host machine.
Define VPC in the Google cloud platform
VPC in the Google Cloud Platform provides connectivity from your premises to any region without using the public internet. VPC connectivity covers Compute Engine virtual machine instances, App Engine flexible environment instances, Kubernetes Engine clusters, and a few other resources, depending on the project. Multiple VPCs can also be used across numerous projects.

Benefits of Microservices:

Technological freedom, which can lead to faster innovation
Microservices architectures don’t follow a “one size fits all” approach. Teams have the freedom to choose the best tool to solve their specific problems, so teams building microservices can pick the right tool for each job.
Reusable code and short time to add new features
Dividing software into small, well-defined modules enables teams to use functions for multiple purposes. A service written for a certain function can be used as a building block for another feature. This allows an application to bootstrap off itself, as developers can create new capabilities without writing code from scratch.
Resilience
Service independence increases an application’s resistance to failure. In a monolithic architecture, if a single component fails, it can cause the entire application to fail. With microservices, applications handle total service failure by degrading functionality and not crashing the entire application.
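The resilience point above can be sketched with a simple degradation pattern: when a dependent service fails, return a fallback instead of crashing the whole application. The function names and fallback value are illustrative, not from the source:

```python
def get_recommendations(user_id, recommender=None):
    """Call the recommendation service; on failure, degrade to a
    static fallback instead of failing the whole request."""
    try:
        if recommender is None:
            # Stand-in for an unreachable downstream service.
            raise ConnectionError("recommendation service unavailable")
        return recommender(user_id)
    except ConnectionError:
        return ["bestsellers"]  # degraded but functional

print(get_recommendations(7))                         # → ['bestsellers']
print(get_recommendations(7, lambda uid: ["scifi"]))  # → ['scifi']
```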

How do you achieve low latency in microservices?
Don’t do a connection setup per RPC.
Cache things wherever possible.
Write asynchronous code wherever possible.
Exploit eventual consistency wherever possible. In other words: coordination is expensive, so don’t do it unless you have to.
Route your requests sensibly.
Locate processing wherever will result in the best latency. That might mean you need more resources.
Use LIFO queues; they have better tail statistics than FIFO. Queue before load balancing, not after, so that a small fraction of slow requests is much less likely to stall all the processors. Source: Andrew McGregor
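The "cache things wherever possible" advice can be sketched with Python's built-in memoization decorator; the lookup function is a hypothetical stand-in for an expensive RPC:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def lookup_profile(user_id: int) -> tuple:
    # Stand-in for an expensive remote call; real code would do an RPC here.
    return (user_id, f"user-{user_id}")

lookup_profile(42)   # cache miss: the body runs
lookup_profile(42)   # cache hit: served from memory, no "RPC"
print(lookup_profile.cache_info().hits)  # → 1
```

A repeated lookup never leaves the process, which removes a full network round trip from the latency path.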
What operating system do most servers use in 2022?
Of the 1500 *NIX servers under my control (a very large fortune 500 company), 90% of them are Linux. We have a small amount of HP-UX and AIX left over running legacy applications, but they are being phased out. Most of the applications we used to run on HP-UX and AIX (SAP, Oracle, you-name-it) now run on Linux. And it’s not just my company, it’s everywhere.
In 2022, the most widely used server operating system is Linux. Source: Bill Thompson
How do you load multiple files in parallel from an Amazon S3 bucket?
How can you manage the amount of provisioned throughput that is used when copying from an Amazon DynamoDB table?
What must you do to use client-side encryption with your own encryption keys when using COPY to load data files that were uploaded to Amazon S3?
DevOps and SysOps Breaking News – Top Stories – Jobs
- Drive destructionby /u/Darkhexical (Sysadmin) on April 23, 2025 at 12:19 am
Hey so I was watching this YouTube short that highlighted some issues certain IT departments in the recycling industry are facing. One of the problems they mentioned, and you probably guessed it, is data destruction. Apparently, it's somewhat common for them to just take a drill and pierce laptops without removing the drive first or even knowing its location. Someone even managed to drill into a battery! While I know it's standard to send devices for professional data destruction with recyclers, and some even offer it as a free service, I also know some departments still have their employees physically drill into the drives before sending them off. This seems like it could easily lead to the kinds of errors mentioned in the short, likely because of the sheer number of devices they're dealing with, leading to guesswork on drive placement. So, I'm curious to get your take on this practice. See video below: https://youtube.com/shorts/GeMhwaF3Sno submitted by /u/Darkhexical [link] [comments]
- We built a tool to deploy from Cursor or Claude with one promptby /u/Live-Pea-5362 (Everything DevOps) on April 22, 2025 at 11:11 pm
👋 Hey DevOps folks We built an MCP server that lets you deploy your app to the cloud just by typing deploy inside your IDE chat (like Cursor or Claude). Right now, it deploys to our Playground and we’re working on AWS, GCP, and DigitalOcean support next. Here’s a quick demo video showing how it works: 🎥 https://www.linkedin.com/feed/update/urn:li:activity:7320490826004852737/ Docs if you want to explore or test it. Any feedback would be appreciated! 💙 submitted by /u/Live-Pea-5362 [link] [comments]
- MS Purview and Sharepoint are disgraces. Microsoft Graph is a disgrace.by /u/sarge21 (Sysadmin) on April 22, 2025 at 10:25 pm
Imagine you are trying to search for a purview retention event based on the description (or really any other) property. It seems Microsoft has made this impossible. You could load up the retention event list in the Web UI. If the list of events ever loads (it may take several minutes or time out if you have like a thousand events created ever), you must click through one by one and manually visually compare the property. You might think Powershell could do this. Get-MgBetaSecurityTriggerRetentionEvent -RetentionEventId "GUID" will return a retention event with all the properties filled out. However, this only works if you know the event ID. If you list retention events (Get-MgBetaSecurityTriggerRetentionEvent -All) the properties are null. You might think you could get around this. Add "-property Description"? Query option 'Select' is not allowed. Add "-filter" based on a query? Query option 'Filter' is not allowed. The only option that seems to work is $events = Get-MgBetaSecurityTriggerRetentionEvent -All Wait like 20 minutes for it to return depending on how many events you have iterate through each event, doing an individual Get-MgBetaSecurityTriggerRetentionEvent for each ID, which takes about 10 seconds to return If you have 1000 retention events, I estimate you'd be waiting around 4 hours for this process to complete. submitted by /u/sarge21 [link] [comments]
- a hug from me (freelance it tech) to anyone who has had to deal with IT support from India of any kind.by /u/canadiansmartdude13 (Sysadmin) on April 22, 2025 at 10:25 pm
The title. I’m a freelance IT tech pretty much doing anything IT related. (which apparently includes janitorial duties) Basically a fieldnation person but without the crazy fees. If you have ever had to deal with remote techs in India I am sorry and owe you the biggest hug, handshake, drink, and your snacks of choice. Because wtf. I’m usually the considerate guy, but I hate with a burning passion more than stepping on legos companies that outsource their IT. Some people there are okay, but that is the exception not the norm. I literally had to deal with incorrect documentation being sent, them not responding from anywhere from a few minutes to hours, and my personal favorite——being verbally abused for over seven hours on a Teams call (from 1am to 12:30pm eastern) for above reasons on guess what, my 19th birthday. I’ve worked in in house teams that are housed physically within the company in the same country. You have problems there too and dicks there too. But at least you’re not being held hostage on the site, and have a formal chain of command to report difficult people period. For any org descisionmakers reading this, please don’t offshore stuff like IT. Those cost savings are not going to help in the long run and will cost you more down the line. Because now you have to spend money to get a freelance tech as myself, to fix an issue that YOUR INTERNAL IT TEAM could fix in probably less the time. For my fellow IT soldiers, I love you. Just took my SSRI after not being home for 36 hours, in bed, took my sleep meds, and will now try to cleanse my brain of the trauma. Pouring MULTIPLE out for you, and please send hugs my way. submitted by /u/canadiansmartdude13 [link] [comments]
- NPS: What am i missing?by /u/gsatmobile (Sysadmin) on April 22, 2025 at 10:01 pm
Hi All Fellow sysadmin banging head against the wall. I am setting up NPS Radius server to work with our Cisco Firepower and authenticate with Azure MFA for 2nd Factor authentication. It has been a learning experience so far. We have used OKTA radius authentication for the last decade and currently exploring other options. I don’t think the request is even getting to Azure for authentication, it’s getting blocked on NPS side. Here are the event viewer errors: NPS Error - Authentication Details: Connection Request Policy Name: Cisco Firepower Requests Network Policy Name: Cisco Firepower VPN Users Authentication Provider: Windows Authentication Server: seanps01.contoso.com Authentication Type: Extension EAP Type: Account Session Identifier: Logging Results: Accounting information was written to the local log file. Reason Code: 21 Reason: An NPS extension dynamic link library (DLL) that is installed on the NPS server rejected the connection request. Azure MFA Error - NPS Extension for Azure MFA: NPS Extension for Azure MFA only performs Secondary Auth for Radius requests in AccessAccept State. Request received for User sholmes with response state AccessReject, ignoring request. Error Code is 21. Windows Server 2019 (Datacenter license) NPS installed IIS installed DigiCert SSL basic OV cert for server authentication and EKU installed Created corp group nps-mfa group. Users within group have Entra P1 licenses Azure MFA extension is installed (3x times) TLS 1.2 is enabled. 
AD Forest and Domain Level is 2008 Domain Controllers are on Windows Server 2019 NPS Configuration details NPS configuration is selected as RADIUS server or VPN, using default Port 1812 Server has been registered in AD Radius Client setup as: Enable this Radius Client - checked IP address for Cisco Firepower Shared Secret same as in Cisco Firepower Advanced - Vendor Name – RADIUS Client Additional Options – not checked Policies Connection Request Policy Name: Cisco Firepower Requests Policy State – Policy Enabled Type of Network Access Server – Unspecified Conditions – Client IPV4 Address – same as Firepower IP Settings: Authentication Methods – Overwrite Network Policy Settings – unchecked Forward Connection Request – Authentication – Authenticate on this server (checked) Accounting – no selections Specify Realm Name – Attribute – User Name Find .*\(.*)$ Replace with $2@contoso.com Find [@\]+)$ Replace with $1@contoso.com Radius Attribute – Standard – no selections Radius Attribute – Vendor Specific – no selections Network Policy Name: Cisco Firepower VPN Users Policy State – Policy Enabled Access Permission – Grant Access Ignore User’s Dial-in properties – checked Network Connection Method – unspecified Conditions – Windows Groups – corp\nps-mfa Constrains: Authentication Methods: Microsoft Secure Password (EAP-MSCHAP v2) Microsoft Protected EAP (PEAP) – Properties – DigiCert Basic OV Cert Enable fast reconnect checked Disconnect Clients without crypto binding is unchecked EAP Types is EAP-MSCHAP v2 Less Secure Authentication Methods – none are checked Idle Time out – default not checked Session Timeout – default not checked Called Station ID – default not checked Day and Time Restriction – default not checked NAS Port Type: Common Dial Up and VPN tunnel types – Virtual VPN Common Connection Tunnel Type – unchecked Others - Virtual VPN Accounting is configured for local file logs. submitted by /u/gsatmobile [link] [comments]
- Prtg open source alternative optionsby /u/Low_Metal_7679 (Sysadmin) on April 22, 2025 at 8:53 pm
Hello, We are currently using PRTG, but due to the recent price increase, we are considering open-source alternatives. I've identified three potential solutions and would like your thoughts on them: Prometheus with Grafana This combination has a solid concept, but I'm curious about the management aspect. Is it purely configuration-based? Checkmk (Raw) Checkmk appears straightforward and seems to meet our needs effectively. Zabbix Similar to Checkmk, but offers more customization options. Current Monitoring Requirements: Servers: Windows, Linux, VMware, Citrix, Netscalers Network Devices: Switches, Routers, Firewalls, Wi-Fi APs, PDUs, Access Controllers, Sun Solar Systems, IP Cameras Remote Cloud Servers Remote Sites: Connected via WAN Printers API Endpoints: SAP, NetBox, Ansible The chosen solution should support a high-availability (HA) setup. Looking forward to your feedback! submitted by /u/Low_Metal_7679 [link] [comments]
- Updated: End-to-end DevOps hands-on projectby /u/aabouzaid (Everything DevOps) on April 22, 2025 at 8:51 pm
TL;DR As the Continues Improvement and Feedback Loopsis are ones of the DevOps principles ... so based on the users feedback I've updated the end-to-end DevOps hands-on project part of the FREE pragmatic Dynamic DevOps Roadmap. https://devopsroadmap.io/projects/hivebox/ Background Now starting the project is easier than ever even for people with basic DevOps knowledge. Who see the project for the first time ... this free/open-source roadmap focuses on the principles instead of just tools and it uses an iterative approach the same as in the real-work. Enjoy ♾️ submitted by /u/aabouzaid [link] [comments]
- Linux servers authentication for a Windows shopby /u/Zoddo98 (Sysadmin) on April 22, 2025 at 8:39 pm
Hello, I'm interested in some feedback about how primarily-Windows shops handle admin authentication when they start to have a handful of Linux servers. For the context, we have about 15-20 Linux servers. They were all installed manually by different people over the last 6 years, with differents ways to ssh in (some servers have a single admin user with a shared ssh key + sudo, some servers are joined to our windows domain (using winbind), and we login using our domain user/pass, and some of them are just configured to login directly with a password as root). Most of these servers are running a now-EOL Debian release, and as the "linux guy" of the team I finally got allocated time to tackle this mess. Basically, over the next few months, I'll have the opportunity to properly rebuild all these servers from scratch. I'm currently writing playbooks to model the baseline config of these new servers, and I came across the question of how we should manage (remote) admin access. Ideally, we want every admin to login using their own account for logging/accountability purposes. I can see a few solutions : Provision local accounts for every admin + their SSH keys on each server (I'll be using Ansible, so this can be part of a playbook). This is the easy configuration, but we lose the concept of "our Active Directory is the central identity/authorization directory where we manage all access". Use SSH certificates. Frankly, I just discovered this existed. In theory, this could be used to issue ephemeral certificates after validating authorization with our AD. However, there doesn't seem to have easy and mature implementations, outside of commercial, larger products (HashCorp, Teleport, Smallstep...) that I wouldn't be able to justify their cost just for that. And finally, unless I missed something, that still requires to provision user accounts on every servers. Use Kerberos. 
OpenSSH supports it out of the box, and we are a Windows-shop, so this is something that is already tightly integrated in our environment. This would allow us to reuse our already existing admin credentials, which are already properly secured/audited. We don't have to provision users, as nss can pull the user list from our AD. However, this previous point is also an issue, as this requires servers to be able to reach domain controllers, which is something I'd like to avoid for the subset of servers hosting internet-facing services. So this means we will need to mix this solution with one of the other solutions, which questions the actual benefit of this option, considering we will have to manage 2 separate authentication methods in parallel. So, as you see, this isn't a simple point. So I'd like to hear what's your thoughts? How do companies in a similar setup handle that? submitted by /u/Zoddo98 [link] [comments]
- Do you cut all your cabling when moving office buildings?by /u/lambusdean77 (Sysadmin) on April 22, 2025 at 7:13 pm
So this may be a dumb question but I have never done this before so I figured I'd ask folks with experience. Our company is going mostly remote, downsizing from two floors of a large office building to maybe 8 rooms in a shared space. We currently have a server rack here that has the punch down blocks wired for the entire 4th floor and a significant portion of the 3rd floor. I'm told that the rack, including the punch-down block, belongs to us. If we were to take the whole rack fixture with us, that means we would have to cut all the punch-down cables, killing all the ethernet jacks in the walls on two floors. Is this standard practice? If it is, that's cool. I guess I just feel like a jerk making the incoming tenant pay to have all that stuff rewired lol submitted by /u/lambusdean77 [link] [comments]
- Switching to Devopsby /u/Ashamed_You353 (Everything DevOps) on April 22, 2025 at 6:55 pm
Hello everyone, I hope you all had a great Easter and managed to get some good rest. I would really appreciate some mindset advice. I have been working for 5.5 years as a Cisco TAC engineer, mainly focused on Software Defined Access (SDA). Recently, Cisco shut down the entire TAC in Belgium, and now I am at a turning point. I am trying to decide whether I should continue deepening my knowledge in networking or shift towards DevOps. My aim is to stay useful in the job market and focus on a technology that is not vendor locked and is likely to stay relevant in the long term. For those of you who have transitioned into DevOps recently — how has it been? Do you enjoy it? Would you make the same choice again? Thank you for any insights you can share! submitted by /u/Ashamed_You353 [link] [comments]
- Dell vs. Lenovoby /u/GriffonTheCat (Sysadmin) on April 22, 2025 at 6:44 pm
For as long as I've worked at my org, we've been a Dell shop. However, I'm thinking of switching us to Lenovo. I haven't been thrilled with Dell's hardware quality, price, or customer support. I spoke with a Lenovo rep last week and liked the demonstration that he gave. However, my boss is more skeptical. Apparently, we used to be a Lenovo shop and had many hardware issues (broken ports, keyboards, system boards, etc.) So here are my questions for those with experience: Are my boss' concerns valid? Are these hardware issues still common? Our replacement cycle is every 4 years. I don't want to be sending 20% or more of our fleet back for repairs in 2 years. For those who made the switch from Dell to Lenovo or vice versa, are you happy with that decision? What have been the pros/cons? How has your Lenovo tech support experience been? We can accept slightly more service requests if we're getting streamlined support. submitted by /u/GriffonTheCat [link] [comments]
- Who’s gets administrator rights on their pc at your org?by /u/BuiltOnXP (Sysadmin) on April 22, 2025 at 6:09 pm
I am curious what type of employees are granted admin rights on their PCs at your place of work. I see a lot of PLC users being added to Administrators on their PCs. What cases are common for you and how often do you use temporary admin access instead? submitted by /u/BuiltOnXP [link] [comments]
- Continous java profiling to improve open source observabilityby /u/opencodeWrangler (Everything DevOps) on April 22, 2025 at 5:51 pm
It's been a common request to add java profiling within the Coroot community - an observability project I'm a part of that looks at turning telemetry into root cause insights (with open source, so easy network monitoring isn't only accessible to companies with budgets for giant vendors.) The feature has been updated now and hopefully it can help some members of this sub too. Nikolay Sivko's written a blog that walks through how you can use it without any code changes to detect high CPU usage and GC pauses in a Java service. You can check out our Github if you'd like to give it a try, and we'd love any feedback to help improve OSS resources for everyone! submitted by /u/opencodeWrangler [link] [comments]
- Redundant power supply unit for a single-power-supply device: NOT to guard against power loss, but to guard against PSU loss. by /u/WhatsUpB1tches (Sysadmin) on April 22, 2025 at 4:31 pm
Hello all. I am looking to see if a hardware technology exists that would let me add another power supply to a server that only has a slot for one. I did a bunch of searching and didn't really come up with anything. I found an old post that is somewhat related, but it talks about ATSs for circuit redundancy; if the actual PSU burns out, you are still out of luck. I am thinking of some sort of rack-mountable device that has two PSUs in it, plus an adaptor that slides into the slot in the server where the original PSU goes, in effect "externalizing" the PSUs. I could then attach each PSU in the device to a different circuit, thereby getting both circuit AND PSU redundancy. Any and all advice or recommendations are appreciated. Edit: Amazing how people just say the same thing over and over. "Upgrade your hardware." Yes, no shit. "An ATS is what you need." No, it isn't; read the post and comments. "Buy a machine designed for it." "This isn't homelab, don't try to DIY something..." I'm aware of all this. Like I said to u/patmorgan235, yes, I am aware it is older. Maybe we could replace all the older hardware, but the current administration in Washington has cut grants and funding for massive amounts of money across the scientific research community, so we are trying to do more with less and sweating the gear longer than we normally would. I came here for actual suggestions from actual professionals, not to get shit on by people telling me to do what I clearly said I couldn't in the post. submitted by /u/WhatsUpB1tches
- Very wild Monday; finally got done with the police and management. by /u/boomgoesthecat (Sysadmin) on April 22, 2025 at 4:24 pm
I work for a small MSP. Our main clients are small doctors' offices, realtors, and restaurants. Don't even get me started on the restaurants, I hate them to the core! But my Monday is not about them; it's about a realtor's office. Monday morning I was tasked with backing up a user's data and programs and restoring them to a new laptop they had ordered from us. Easy enough, I thought; I've likely done 100+ of these so far in my career. I'm working with a new helpdesk person; this Monday was the start of his third week. Fresh out of college, he's as green as green can be for a tech. Our lab area was full, so we were working in an empty cube and had the laptop hooked up to a 26-inch monitor for better visibility. I went over the steps with our new guy and let him know the first thing to do was get a backup. Thankfully he's done a few, so he didn't need my guidance during this part, and I walked away for about 20 minutes. When I came back I found that the backup was only about 20% complete, when I was expecting it to be finishing up or finished at that point. I asked if he had just started and was told no, the laptop just has tons of data and the drive was 97% full. Ugh. OK. "Let's poke around and see if he's caching like 80GB of Exchange email or something." We poked around, and to our dismay a folder on the desktop was the culprit: a 172GB folder named "Business and Work files". Looking back, everything inside my brain should have been screaming at me not to open that folder, but I had the tech open it anyway. Of course, right as we opened it the owner of the company was walking past, and yeah... child pr0n, gay pr0n, I mean you name it. All with not just a file list but the view set to extra large icons. All three of us got an eye-searing look into the deepest, darkest shit the internet had to offer before I could slam the laptop shut. Before I could even speak, the owner said to us, "Both of you don't move.
No one touch that laptop. I'm going to call the police." The rest of the day was basically a blur of police interviews: first the regular cops, then a detective, and later a forensic detective near the end of the day. This morning was a long management meeting about the incident; the client in question is no longer a client, and any communication from them is to be forwarded directly to our manager or the owner. The owner gave me and the new guy the rest of the day off, plus Wednesday paid, to reflect. He basically told us to take the time, have some fun, and try to forget the incident. If anyone has any questions, I'll try to answer what I can. I haven't been told not to say anything, other than not to name names or the companies involved. submitted by /u/boomgoesthecat
- DevOps, why are you guys so annoying and full of yourselves? by /u/tkyang99 (Everything DevOps) on April 22, 2025 at 4:11 pm
Let's have fun bashing those annoying DevOps and infra guys we have to deal with at work! No, but seriously, why do most of you act like gatekeepers who can't be bothered to do anything unless we beg, and like arrogant jerks who think the place will fall apart without your presence? submitted by /u/tkyang99
- Mickeysoft support - who is hiring these guys? by /u/Hassxm (Sysadmin) on April 22, 2025 at 4:00 pm
Raised an issue. The tech rep was reading out the documentation over the phone, understanding it himself for the first time... I sent a detailed ticket in. Could they not skim-read the relevant info before calling, instead of going "ummmm, ahhhh" over the telephone? It feels bizarre that I'm having to explain how certain products work to the product support team themselves. If I'm being harsh, hit me with your criticism. submitted by /u/Hassxm
- Tech USB-key installed Windows 11 on a handful of machines not on the compatible list. Why is that even allowed? Immediate concerns? by /u/jdlnewborn (Sysadmin) on April 22, 2025 at 3:23 pm
I recently discovered a few machines that had been staged and set up for users despite supposedly being incompatible with Windows 11. I noticed this while reviewing the hardware specs of some remaining systems still running Windows 10. Strangely, I found identical brand/model units already operating on Windows 11. After looking into it, I realized one of the techs must have accidentally grabbed machines from the wrong batch (or mixed them up somehow) and went ahead with staging, using a USB key, new SSD, etc. I assumed some sort of workaround or "magic" had been used to get Windows 11 installed. But out of curiosity, we pulled another machine from the same batch (its serial number was just two off from one of the others), and surprisingly, there was nothing preventing a clean Windows 11 install. It updated fully and ran without issue. Is it just me, or is that unexpected? I do plan on phasing these systems out, but given this, I'll likely prioritize replacing the remaining Windows 10 machines first. I know there's always the possibility that Microsoft could release an update that won't install on unsupported hardware, but beyond that, are there any other risks I should be aware of? Edit: to add, the machines are 7th-gen i5 Lenovos. submitted by /u/jdlnewborn
- Best Android device management solution for MSPs? by /u/QFrozenAceQ (Sysadmin) on April 22, 2025 at 3:22 pm
Hey everyone, we're an MSP that mainly supports Android devices across various client setups. We're on the hunt for a better remote device management solution that simplifies how we handle everything from updates and app deployments to device security and access. One of our biggest challenges is restricting certain settings on client devices (like locking down network access or blocking app installs) while still being able to remotely monitor and secure everything from a single place. Jumping between different tools for every client is just not scalable. Would love to hear what's working for other MSPs managing Android fleets. Anything that helped you centralize control and improve security? Appreciate the insights in advance. submitted by /u/QFrozenAceQ
- Our open source project got featured on DevOps Toolkit! by /u/ARandomShephard (Everything DevOps) on April 22, 2025 at 3:21 pm
DevOps Toolkit just did a video covering our open source project, mirrord. mirrord lets apps connect into a live K8s environment during development and "mirrors" traffic from a pod to a local process, so you can debug and iterate as if your service were live in the cluster! Here's the link if you're curious: https://www.youtube.com/watch?v=NLa0K5mybzo submitted by /u/ARandomShephard
Top 30 AWS Certified Developer Associate Exam Tips


AWS Certified Developer Associate Exam Prep Urls
Get the free app at: android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen
iOs: https://apps.apple.com/ca/app/aws-certified-developer-assoc/id1511211095
PRO version with mock exam android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen
PRO version with mock exam ios: https://apps.apple.com/ca/app/aws-certified-dev-ass-dva-c01/id1506519319t
0
Understand basic code samples such as JSON config files and IAM policy documents, and know the core services: Amazon EC2, Elastic Load Balancing, Amazon EC2 Auto Scaling, Amazon Simple Notification Service (SNS), AWS KMS, AWS CloudTrail, AWS Organizations, Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud (VPC), DynamoDB, EBS, Multi-AZ RDS, Aurora, EFS, NLB, and ALB. Know which service fits which requirement: DynamoDB (low latency), Aurora (performance), Multi-AZ RDS (high availability), Throughput Optimized EBS (highly sequential workloads). Read the quizlet note cards here
AWS topics for DVA-C01
1
What to study: LAMBDA [10-15% of Exam] Invocation types, Using notifications and event source mappings, Concurrency and throttling, X-Ray and Amazon SQS DLQs, Versions and aliases, Blue/green deployment, Packaging and deployment, VPC connections (with Internet/NAT GW), Lambda as ELB target, Dependencies, Environment variables (including encrypting them)
AWS topics for DVA-C01
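To make the Lambda bullets concrete, here is a minimal handler sketch in Python showing invocation, the event/context pair, and environment variables with a default. This is illustrative only; the `STAGE` variable name is made up, and no AWS deployment is involved.

```python
import json
import os

# Minimal Lambda-style handler: reads configuration from an environment
# variable (an exam topic) and echoes the event back in the response body.
def lambda_handler(event, context):
    # STAGE is a hypothetical variable name used for illustration.
    stage = os.environ.get("STAGE", "dev")
    return {
        "statusCode": 200,
        "body": json.dumps({"stage": stage, "received": event}),
    }

# Locally you can invoke the handler directly with a sample event:
result = lambda_handler({"action": "ping"}, None)
print(result["statusCode"])  # 200
```

The same function, zipped with its dependencies, is what the packaging-and-deployment bullet refers to.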
2
What to study: DYNAMODB [10-12% of Exam] Scans vs queries (and the APIs and parameters you can use), Local and Global Secondary Indexes, Calculating Read Capacity Units (RCUs) and Write Capacity Units (WCUs), Performance/optimization best practices, Use cases (e.g. session state, key/value data store, scalability), DynamoDB Streams, Use in a serverless app with Lambda and API Gateway, DynamoDB Accelerator (DAX) use cases
AWS topics for DVA-C01: DynamoDB
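The RCU/WCU arithmetic is worth practicing before the exam. A quick sketch of the standard formulas (1 RCU = one strongly consistent read per second of an item up to 4 KB, half that for eventually consistent reads; 1 WCU = one write per second of an item up to 1 KB):

```python
import math

# 1 RCU = 1 strongly consistent read/sec of an item up to 4 KB
#         (eventually consistent reads need half as many RCUs)
# 1 WCU = 1 write/sec of an item up to 1 KB
def read_capacity_units(item_size_kb, reads_per_sec, strongly_consistent=True):
    units_per_read = math.ceil(item_size_kb / 4)   # round item size UP to 4 KB blocks
    rcus = units_per_read * reads_per_sec
    return rcus if strongly_consistent else math.ceil(rcus / 2)

def write_capacity_units(item_size_kb, writes_per_sec):
    return math.ceil(item_size_kb / 1) * writes_per_sec  # round UP to 1 KB blocks

# Example: 6 KB items read 10 times/sec, strongly consistent:
# each read needs ceil(6/4) = 2 units -> 20 RCUs.
print(read_capacity_units(6, 10))         # 20
print(read_capacity_units(6, 10, False))  # 10
print(write_capacity_units(1.5, 12))      # 24
```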
3
What to study: API GATEWAY [8-10% of Exam] Lambda/IAM/Cognito authorizers, Invalidation of cache, Integration types: proxy vs custom / AWS vs HTTP, Caching, Import/export of OpenAPI (Swagger) specifications, Stage variables, Performance metrics
AWS topics for DVA-C01: API Gateway
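For the proxy-vs-custom integration bullet, it helps to remember the response shape a Lambda function must return to a Lambda proxy integration: `statusCode`, optional `headers`, and `body` as a string. A small sketch (plain Python, no API Gateway involved):

```python
import json

# Build the response object a Lambda function returns to an API Gateway
# Lambda *proxy* integration. Note the body must be a serialized string.
def proxy_response(payload, status=200):
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }

resp = proxy_response({"ok": True})
print(resp["statusCode"])  # 200
```

With a custom (non-proxy) integration, mapping templates transform the payload instead, so the function is not required to return this exact shape.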
4
What to study: COGNITO [7-8% of Exam] User pools vs Identity pools, Unauthenticated identities, AWS Cognito Sync, Using MFA with Cognito, Web identity federation,
AWS topics for DVA-C01: COGNITO
5
Set yourself up for promotion or get a better job by Acing the AWS Certified Data Engineer Associate Exam (DEA-C01) with the eBook or App below (Data and AI)

Download the Ace AWS DEA-C01 Exam App:
iOS - Android
AI Dashboard is available on the Web, Apple, Google, and Microsoft, PRO version
What to study: S3 [7-8% of Exam] Encryption (make sure you understand S3 encryption very well for the exam), S3 Transfer Acceleration, Versioning, Copying data, Lifecycle rules
AWS topics for DVA-C01
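Since S3 encryption is called out as the key topic, here are the PutObject request parameters that select each server-side encryption option, shown as plain dicts so nothing is actually sent to S3 (the KMS key alias is hypothetical):

```python
# Request parameters that select each S3 server-side encryption option
# when calling PutObject (plain dicts; no S3 call is made here).
sse_s3 = {"ServerSideEncryption": "AES256"}   # SSE-S3: S3-managed keys
sse_kms = {
    "ServerSideEncryption": "aws:kms",        # SSE-KMS: keys managed in AWS KMS
    "SSEKMSKeyId": "alias/my-app-key",        # hypothetical key alias
}
# SSE-C: you supply (and manage) the key yourself on every request
sse_c = {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": "<32-byte key>"}

for name, params in [("SSE-S3", sse_s3), ("SSE-KMS", sse_kms), ("SSE-C", sse_c)]:
    print(name, sorted(params))
```

Client-side encryption, the fourth option, happens before the data ever reaches S3, so no request parameter is involved.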
6
Invest in your future today by enrolling in this Azure Fundamentals - Pass the Azure Fundamentals Exam with Ease: Master the AZ-900 Certification with the Comprehensive Exam Preparation Guide!
- AWS Certified AI Practitioner (AIF-C01): Conquer the AWS Certified AI Practitioner exam with our AI and Machine Learning For Dummies test prep. Master fundamental AI concepts, AWS AI services, and ethical considerations.
- Azure AI Fundamentals: Ace the Azure AI Fundamentals exam with our comprehensive test prep. Learn the basics of AI, Azure AI services, and their applications.
- Google Cloud Professional Machine Learning Engineer: Nail the Google Professional Machine Learning Engineer exam with our expert-designed test prep. Deepen your understanding of ML algorithms, models, and deployment strategies.
- AWS Certified Machine Learning Specialty: Dominate the AWS Certified Machine Learning Specialty exam with our targeted test prep. Master advanced ML techniques, AWS ML services, and practical applications.
- AWS Certified Data Engineer Associate (DEA-C01): Set yourself up for promotion, get a better job or Increase your salary by Acing the AWS DEA-C01 Certification.
What to study: IAM: IAM policies and roles, Cross-account access, Multi-factor authentication (MFA), API calls, IAM roles with EC2 (instance profiles), Access keys vs roles, IAM best practices, Federation
AWS topics for DVA-C01: IAM
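Reading IAM policy documents is a core exam skill. Below is a minimal identity-based policy built as a Python dict and serialized to the JSON you would attach to a role or user; the bucket name is hypothetical:

```python
import json

# A minimal identity-based policy: allow reading and writing objects
# in one (hypothetical) bucket, and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

# Serialize to the JSON form shown in the console and exam questions.
print(json.dumps(policy, indent=2))
```

Remember the evaluation logic the exam tests: an explicit Deny always wins, then an explicit Allow, and everything else is implicitly denied.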
7
What to study: ECS: Shared storage between containers, Single vs multi-Docker environments, Uploading/downloading images with ECR, Placement strategies (e.g. spread, binpack, random, etc.), Port mappings, Defining task definitions, IAM roles for tasks
AWS topics for DVA-C01: ECS
8
What to study: ELASTIC BEANSTALK: Deployment policies and blue/green, .ebextensions and config file usage, Updating deployments, Worker vs web tier, Deployment packaging (files, code, and commands used), Use cases
AWS topics for DVA-C01: AMAZON ELASTIC BEANSTALK
9
What to study: CLOUDFORMATION: CloudFormation template anatomy (e.g. mappings, outputs, parameters, etc.), Packaging and deployment including commands used, AWS Serverless Application Model (SAM)
AWS topics for DVA-C01
10
What to study: AMAZON CLOUDWATCH: Monitoring application logs, Triggering scheduled Lambda invocations, Custom metrics, Metric resolution
AWS topics for DVA-C01: AMAZON CLOUDWATCH
11
What to study: CODECOMMIT, CODEBUILD, CODEDEPLOY, CODEPIPELINE, CODESTAR: Know how each tool fits into the CI/CD pipeline, Various files used such as appspec.yml, buildspec.yml, etc., Process for packaging and deployment, Deployment types with CodeDeploy including the different destination services (e.g. Lambda, ECS, EC2), Manual approvals with CodePipeline
AWS topics for DVA-C01: CODECOMMIT, CODEBUILD, CODEDEPLOY, CODEPIPELINE, CODESTAR
12
What to study: AMAZON CLOUDFRONT
AWS topics for DVA-C01: AMAZON CLOUDFRONT
13
What to study: X-RAY: The X-Ray daemon (installing and configuring), Lambda with X-Ray, Use cases/benefits, Inclusion in an Elastic Beanstalk environment, Annotations vs segments vs subsegments vs metadata, API calls, Port used (UDP 2000)
AWS topics for DVA-C01: X-RAYS
14
What to study: SQS: Standard queues, FIFO queues, DLQs, delay queues, Decoupling applications use cases, Event source mapping to Lambda, Visibility timeout, Short polling vs long polling
AWS topics for DVA-C01: SQS
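Visibility timeout trips up many candidates, so here is a toy, in-memory simulation of the behavior: a received message becomes invisible for the timeout period and reappears for redelivery if it is not deleted in time. This is plain Python standing in for SQS, not the real API:

```python
import time

# Toy visibility-timeout simulation: after a consumer receives a message it
# becomes invisible for `visibility_timeout` seconds; if it is not deleted
# in time it reappears and can be redelivered.
class ToyQueue:
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}        # id -> (body, invisible_until)
        self._next_id = 0

    def send(self, body):
        self.messages[self._next_id] = (body, 0.0)
        self._next_id += 1

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for mid, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # Hide the message for the visibility timeout window.
                self.messages[mid] = (body, now + self.visibility_timeout)
                return mid, body
        return None               # nothing visible right now

    def delete(self, mid):
        self.messages.pop(mid, None)

q = ToyQueue(visibility_timeout=30)
q.send("job-1")
mid, body = q.receive(now=0.0)          # consumer gets the message...
print(q.receive(now=10.0))              # None: still invisible at t=10
print(q.receive(now=40.0) is not None)  # True: timed out, redelivered
q.delete(mid)                           # deleting prevents redelivery
```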
15
What to study: ELASTICACHE: Use cases (caching and session state), In-memory data store, Services it sits in front of (e.g. Amazon RDS), Comparison against DynamoDB DAX, Lazy loading vs write-through caching, Memcached vs Redis
AWS topics for DVA-C01: ELASTICACHE
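Lazy loading vs write-through is easiest to remember as code. The sketch below uses plain dicts to stand in for the database and the cache; no Redis or Memcached cluster is involved:

```python
# Lazy loading vs write-through, sketched with plain dicts standing in
# for the backing database and the ElastiCache cluster.
database = {"user:1": "Alice"}   # stand-in for the data store
cache = {}                       # stand-in for ElastiCache

def get_lazy(key):
    # Lazy loading: only populate the cache on a miss.
    if key in cache:
        return cache[key]        # cache hit
    value = database.get(key)    # cache miss: read from the database...
    cache[key] = value           # ...then populate the cache
    return value

def put_write_through(key, value):
    # Write-through: every write updates the database AND the cache,
    # so reads never see stale data (at the cost of caching unread keys).
    database[key] = value
    cache[key] = value

print(get_lazy("user:1"))   # miss, loads from DB -> 'Alice'
print(get_lazy("user:1"))   # hit, served from cache -> 'Alice'
put_write_through("user:2", "Bob")
print(cache["user:2"])      # 'Bob'
```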
16
What to study: STEP FUNCTIONS: Step Functions state machines, Using Step Functions to coordinate multiple Lambda function invocations
AWS topics for DVA-C01: STEP FUNCTIONS
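A state machine is just an Amazon States Language document. Here is a minimal definition that chains two Lambda invocations, built as a Python dict (the function ARNs are hypothetical placeholders):

```python
import json

# Minimal Amazon States Language definition: two Task states run in
# sequence, each invoking a Lambda function (ARNs are hypothetical).
state_machine = {
    "StartAt": "First",
    "States": {
        "First": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:first",
            "Next": "Second",
        },
        "Second": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:second",
            "End": True,
        },
    },
}

# This JSON string is what you would pass when creating the state machine.
definition = json.dumps(state_machine)
print(len(definition) > 0)
```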
17
What to study: SSM PARAMETER STORE: Storing credentials, Rotation (the application needs to do it), compared with Secrets Manager (which handles rotation automatically)
SSM PARAMETER STORE
18
Know what instance types can be launched from which types of AMIs, and which instance types require an HVM AMI
AWS HVM AMI
19
Have a good understanding of how Route53 supports all of the different DNS record types, and when you would use certain ones over others.
Route 53 supports all of the different DNS record types
20
Know which services have native encryption at rest within the region, and which do not.
AWS Services with native Encryption at rest
21
Kinesis Sharding:
#AWS Kinesis Sharding
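Kinesis shard math comes straight from the per-shard limits: 1 MB/s (or 1,000 records/s) of writes and 2 MB/s of reads per shard. A quick calculator sketch:

```python
import math

# Kinesis shard sizing from the per-shard limits:
# writes: 1 MB/s or 1,000 records/s per shard; reads: 2 MB/s per shard.
def shards_needed(write_mb_per_sec, read_mb_per_sec, records_per_sec):
    by_write = write_mb_per_sec / 1.0
    by_read = read_mb_per_sec / 2.0
    by_records = records_per_sec / 1000.0
    # The stream needs enough shards to satisfy the tightest constraint.
    return max(1, math.ceil(max(by_write, by_read, by_records)))

# Example: 3 MB/s in, 5 MB/s out, 2,000 records/s.
# Constraints: write -> 3, read -> 2.5, records -> 2; the write path wins.
print(shards_needed(3, 5, 2000))  # 3
```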
22
Handling SSL Certificates in ELB ( Wildcard certificate vs SNI )
#AWS Handling SSL Certificates in ELB ( Wildcard certificate vs SNI )
24
The Default Termination Policy for Auto Scaling Group (Oldest launch configuration vs Instance Protection)
#AWS Default Termination Policy for Auto Scaling Group
25
Use AWS cheat sheets. I also found the cheat sheets provided by Tutorials Dojo very helpful. In my opinion, they are better than Jayendrapatil Patil's blog since they contain more up-to-date information that complements your review notes.
#AWS Cheat Sheet
26
Watch this 3-hour exam-readiness video. It is a very recent webinar that covers what to expect in the exam.
#AWS Exam Prep Video
27
Start off by watching Ryan's videos, and focus completely on the hands-on work. Take your time to understand what you are trying to learn and achieve in those lab sessions.
#AWS Exam Prep Video
28
Do not rush through the videos. Take your time and hone the basics. Focus on, and spend a lot of time with, the backbone of AWS infrastructure: Compute/EC2, Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to pass the certification comfortably.
#AWS Exam Prep Video
29
Make sure you go through the resources section and the AWS documentation for each component, and go over the FAQs. If you have a question, please post it in the community. Trust me, each answer here helps you understand more about AWS.
#AWS Faqs
30
Like any other product or service, each AWS offering comes in different flavors. Take EC2 as an example (Spot, Reserved, Dedicated, On-Demand, etc.): make sure you understand what each flavor is and what its pros and cons are. The same applies to all the other offerings.
#AWS Services
What is the AWS Certified Developer Associate Exam?
The AWS Certified Developer – Associate examination is intended for individuals who perform a development role and have one or more years of hands-on experience developing and maintaining an AWS-based application. It validates an examinee’s ability to:
- Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
- Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS
There are two types of questions on the examination:
- Multiple-choice: Has one correct response and three incorrect responses (distractors).
- Multiple-response: Has two or more correct responses out of five or more response options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.
To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.
AWS Certified Developer Associate info and details
The AWS Certified Developer Associate Exam is a multiple choice, multiple answer exam. Here is the Exam Overview:
- Certification Name: AWS Certified Developer Associate.
- Prerequisites for the Exam: None.
- Exam Pattern: Multiple Choice Questions
- Number of Questions: 65
- Duration: 130 mins
- Exam fees: US $150
- Exam Guide on AWS Website
- Available languages for tests: English, Japanese, Korean, Simplified Chinese
- Read AWS whitepapers
- Register for certification account here.
- Prepare for Certification Here
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS vs Azure vs Google
- Pros and Cons of Cloud Computing
- Cloud Customer Insurance – Cloud Provider Insurance – Cyber Insurance
Additional Information for reference
Below are some useful reference links that will help you learn more about the exam.
- AWS certified cloud practitioner/
- certification faqs
- AWS Certified Developer Associate Exam Prep Dumps
Other Relevant and Recommended AWS Certifications
AWS Certification Exams Roadmap
- AWS Certified Cloud Practitioner
- AWS Certified Solution Architect – Associate
- AWS Certified Developer – Associate
- AWS Certified SysOps Administrator – Associate
- AWS Certified Developer – Professional
- AWS Certified DevOps Engineer – Professional
- AWS Certified Big Data Specialty
- AWS Certified Advanced Networking.
- AWS Certified Security – Specialty
AWS Developer Associate Exam Whitepapers:
AWS has provided whitepapers to help you understand the technical concepts. Below are the recommended whitepapers.
Online Training and Labs for AWS Certified Developer Associate Exam
AWS Certified Developer Associate Jobs
What are the corresponding Azure and Google Cloud services for each of the AWS services?


What are the unique distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure and Google Cloud service? For each Azure service, what is the corresponding Google service? This page provides a side-by-side comparison of AWS, Azure, and Google Cloud services.
For a better experience, use the mobile app here.


1
Category: Marketplace
Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions.
References:
[AWS]:AWS Marketplace
[Azure]:Azure Marketplace
[Google]:Google Cloud Marketplace
Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace
Differences: All three are digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.
2
Category: AI and machine learning
A cloud service to train, deploy, automate, and manage machine learning models.
References:
[AWS]:AWS SageMaker(build, train and deploy machine learning models), AWS DeepComposer (ML enabled musical keyboard), Amazon Fraud Detector (Detect more online fraud faster), Amazon CodeGuru (Automate code reviews and identify expensive lines of code), Contact Lens for Amazon Connect (Contact center analytics powered by ML), Amazon Kendra (Reinvent enterprise search with ML), Amazon Augmented AI (Easily implement human review of ML predictions), Amazon SageMaker Studio (The first visual IDE for machine learning), Amazon SageMaker Notebooks (Quickly start and share ML notebooks), Amazon SageMaker Experiments (Organize, track, and evaluate ML experiments), Amazon SageMaker Debugger (Analyze and debug ML models in real time), Amazon SageMaker Autopilot (Automatically create high quality ML models), Amazon SageMaker Model Monitor (Continuously monitor ML models)
[Azure]:Azure Machine Learning
[Google]:Google Cloud TensorFlow
Tags: #AI, #CloudAI, #SageMaker, #AzureMachineLearning, #TensorFlow
Differences: According to the StackShare community, Azure Machine Learning has broader approval, being mentioned in 12 company stacks and 8 developer stacks, compared to Amazon Machine Learning, which is listed in 8 company stacks and 9 developer stacks.
3
Category: AI and machine learning
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
References:
[AWS]:Alexa Skills Kit (enables a developer to build skills, also called conversational applications, on the Amazon Alexa artificial intelligence assistant.)
[Azure]:Microsoft Bot Framework (building enterprise-grade conversational AI experiences.)
[Google]:Google Assistant Actions ( developer platform that lets you create software to extend the functionality of the Google Assistant, Google’s virtual personal assistant,)
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant
Differences: One major advantage Google gets over Alexa is that Google Assistant is available to almost all Android devices.
4
Category: AI and machine learning
Description:API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
References:
[AWS]:Amazon Lex (building conversational interfaces into any application using voice and text.)
[Azure]:Azure Speech Services(unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription)
[Google]:Google api.ai, AI Hub (hosted repo of plug-and-play AI components), AI building blocks (for developers to add sight, language, conversation, and structured data to their applications), AI Platform (a code-based data science development environment that lets ML developers and data scientists quickly take projects from ideation to deployment), Dialogflow (a Google-owned developer of human–computer interaction technologies based on natural language conversations), TensorFlow (open source machine learning platform)
Tags: #AmazonLex, #CogintiveServices, #AzureSpeech, #Api.ai, #DialogFlow, #Tensorflow
Differences: api.ai provides a platform that is easy to learn and comprehensive enough for developing conversational actions. It is a good example of a simple approach to solving the complex human-to-machine communication problem using natural language processing together with machine learning. Api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, this has to be handled in the session. Also, api.ai can be used for both voice- and text-based conversations (Assistant actions can be easily created using api.ai).
5
Category: AI and machine learning
Description:Computer Vision: Extract information from images to categorize and process visual data.
References:
[AWS]:Amazon Rekognition (based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use.)
[Azure]:Cognitive Services(bring AI within reach of every developer—without requiring machine-learning expertise.)
[Google]:Google Vision (offers powerful pre-trained machine learning models through REST and RPC APIs.)
Tags: AmazonRekognition, #GoogleVision, #AzureSpeech
Differences: For now, only Google Cloud Vision supports batch processing. Videos are not natively supported by Google Cloud Vision or Amazon Rekognition. The Object Detection functionality of Google Cloud Vision and Amazon Rekognition is almost identical, both syntactically and semantically. Beyond that, both services offer a broad spectrum of solutions, some of which are comparable in terms of functional details, quality, performance, and costs.
6
Category: Big data and analytics: Data warehouse
Description:Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
References:
[AWS]:AWS Redshift (scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake), Amazon Redshift Data Lake Export (save query results in an open format), Amazon Redshift Federated Query (run queries on live transactional data), Amazon Redshift RA3 (optimize costs with up to 3x better performance), AQUA: Advanced Query Accelerator for Amazon Redshift (power analytics with a new hardware-accelerated cache), UltraWarm for Amazon Elasticsearch Service (store logs at ~1/10th the cost of existing storage tiers)
[Azure]:Azure Synapse formerly SQL Data Warehouse (limitless analytics service that brings together enterprise data warehousing and Big Data analytics.)
[Google]:BigQuery (RESTful web service that enables interactive analysis of massive datasets working in conjunction with Google Storage. )
Tags:#AWSRedshift, #GoogleBigQuery, #AzureSynapseAnalytics
Differences: Loading data, managing resources (and hence pricing), and ecosystem. Ecosystem is where Redshift is clearly ahead of BigQuery. While BigQuery is an affordable, performant alternative to Redshift, it is considered the more up-and-coming of the two.
7
Category: Big data and analytics: Data warehouse
Description: Apache Spark-based analytics platform. Managed Hadoop service. Data orchestration, ETL, Analytics and visualization
References:
[AWS]:EMR, Data Pipeline, Kinesis Stream, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch
[Azure]:Azure Databricks, Data Catalog Cortana Intelligence, HDInsight, Power BI, Azure Datafactory, Azure Search, Azure Data Lake Anlytics, Stream Analytics, Azure Machine Learning
[Google]:Cloud DataProc, Machine Learning, Cloud Datalab
Tags:#EMR, #DataPipeline, #Kinesis, #Cortana, AzureDatafactory, #AzureDataAnlytics, #CloudDataProc, #MachineLearning, #CloudDatalab
Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning, and visualisation. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open-source big data solutions alongside new serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services; with Dataproc and Dataflow, Google has a strong core to its proposition. TensorFlow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.
8
Category: Virtual servers
Description:Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Batch: Run large-scale parallel and high-performance computing applications efficiently in the cloud.
References:
[AWS]:Elastic Compute Cloud (EC2), Amazon Braket (explore and experiment with quantum computing), Amazon EC2 M6g Instances (achieve up to 40% better price performance), Amazon EC2 Inf1 Instances (deliver cost-effective ML inference), AWS Graviton2 Processors (optimize price performance for cloud workloads), AWS Batch, AWS Auto Scaling, VMware Cloud on AWS, AWS Local Zones (run low-latency applications at the edge), AWS Wavelength (deliver ultra-low latency applications for 5G devices), AWS Nitro Enclaves (further protect highly sensitive data), AWS Outposts (run AWS infrastructure and services on-premises)
[Azure]:Azure Virtual Machines, Azure Batch, Virtual Machine Scale Sets, Azure VMware by CloudSimple
[Google]:Compute Engine, Preemptible Virtual Machines, Managed instance groups (MIGs), Google Cloud VMware Solution by CloudSimple
Tags: #AWSEC2, #AWSBatch, #AWSAutoscaling, #AzureVirtualMachine, #AzureBatch, #VirtualMachineScaleSets, #AzureVMWare, #ComputeEngine, #MIGS, #VMWare
Differences: There is very little to choose between the three providers when it comes to virtual servers. Amazon has some impressive high-end kit; on the face of it this sounds like it would make AWS a clear winner. However, if your only option is to choose the biggest box available you will need very deep pockets, and perhaps your money may be better spent re-architecting your apps for horizontal scale. Azure remains very strong in the PaaS space and now has an IaaS offering that can genuinely compete with AWS.
Google offers a simple and very capable set of services that are easy to understand. However, with availability in only 5 regions it does not have the coverage of the other players.
9
Category: Containers and container orchestrators
Description: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
References:
[AWS]:EC2 Container Service (ECS), Fargate (run containers without managing servers or clusters), EC2 Container Registry (managed AWS Docker registry service that is secure, scalable, and reliable), Elastic Container Service for Kubernetes (EKS: runs the Kubernetes management infrastructure across multiple AWS Availability Zones), App Mesh (application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure)
[Azure]:Azure Container Instances, Azure Container Registry, Azure Kubernetes Service (AKS), Service Fabric Mesh
[Google]:Google Container Engine, Container Registry, Kubernetes Engine
Tags:#ECS, #Fargate, #EKS, #AppMesh, #ContainerEngine, #ContainerRegistry, #AKS
Differences: Google Container Engine, AWS Container Services, and Azure Container Instances can all be used to run Docker containers. Google offers a simple and very capable set of services that are easy to understand. However, with availability in only 5 regions it does not have the coverage of the other players.
10
Category: Serverless
Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
References:
[AWS]:AWS Lambda
[Azure]:Azure Functions
[Google]:Google Cloud Functions
Tags:#AWSLambda, #AzureFunctions, #GoogleCloudFunctions
Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions on their platforms, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: modify some interfaces, handle any input/output transforms, and an AWS Lambda Node.js function is nearly indistinguishable from an Azure Node.js Function. AWS Lambda provides further support for Java, while Azure Functions provides support for F# and PHP. AWS Lambda functions run on Amazon Linux (built from an AMI), while Azure Functions can run in a Windows environment. Lambda's lightweight execution model reduces the scope of containerization, letting you spin up and tear down individual pieces of functionality in your application at will.
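The portability point above can be sketched in a few lines: keep the business logic provider-agnostic, and only the thin entry-point wrapper changes between platforms. The `greet` helper below is a made-up example, not part of any provider SDK.

```python
# Sketch: provider-agnostic logic behind a thin, provider-specific handler.
# greet() is hypothetical business logic shared across platforms.

def greet(name):
    """Business logic that knows nothing about any cloud provider."""
    return {"message": f"Hello, {name}!"}

# AWS Lambda entry point: receives an event dict and a context object.
def lambda_handler(event, context):
    return greet(event.get("name", "world"))

# An Azure Function (Python) would instead receive an HttpRequest and
# return an HttpResponse; only this thin wrapper needs rewriting.
```

Porting to another platform then means swapping the wrapper, not the logic.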
11
Category: Relational databases
Description: Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
References:
[AWS]:AWS RDS (managed relational database service with support for MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server), Aurora (MySQL and PostgreSQL-compatible relational database built for the cloud)
[Azure]:SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
[Google]:Cloud SQL
Tags: #AWSRDS, #AWSAurora, #AzureSQLDatabase, #AzureDatabaseforMySQL, #GoogleCloudSQL
Differences: All three providers boast impressive relational database offerings. RDS supports a wide range of managed relational stores, while Azure SQL Database is probably the most advanced managed relational database available today. Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
12
Category: NoSQL, Document Databases
Description:A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
References:
[AWS]:DynamoDB (key-value and document database that delivers single-digit millisecond performance at any scale), SimpleDB (a simple web services interface to create and store multiple data sets, query your data easily, and return the results), Managed Cassandra Service (MCS)
[Azure]:Table Storage, DocumentDB, Azure Cosmos DB
[Google]:Cloud Datastore (handles sharding and replication in order to provide you with a highly available and consistent database. )
Tags:#AWSDynamoDB, #SimpleDB, #TableStorage, #DocumentDB, #AzureCosmosDB, #GoogleCloudDataStore
Differences: DynamoDB and Cloud Datastore are based on the document store database model and are therefore similar in nature to open-source solutions MongoDB and CouchDB. In other words, each database is fundamentally a key-value store. With more workloads moving to the cloud the need for NoSQL databases will become ever more important, and again all providers have a good range of options to satisfy most performance/cost requirements. Of all the NoSQL products on offer it’s hard not to be impressed by DocumentDB; Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
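The "fundamentally a key-value store" model can be illustrated with a minimal in-memory sketch: every item is addressed by a key, and the value is a free-form document. This is an illustration of the data model only, not an actual DynamoDB or Datastore client.

```python
# Minimal in-memory sketch of the document-store model: key -> document.
class DocumentStore:
    def __init__(self):
        self._items = {}

    def put(self, key, document):
        # Store a copy so later mutation of the caller's dict has no effect.
        self._items[key] = dict(document)

    def get(self, key):
        # Missing keys return None, as in most key-value APIs.
        return self._items.get(key)

store = DocumentStore()
store.put("user#42", {"name": "Ada", "plan": "pro"})
print(store.get("user#42")["plan"])  # pro
```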
13
Category:Caching
Description: An in-memory, distributed caching service that provides a high-performance store, typically used to offload non-transactional work from a database.
References:
[AWS]:AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times.)
[Azure]:Azure Cache for Redis (based on the popular software Redis. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores.)
[Google]:Memcache (In-memory key-value store, originally intended for caching)
Tags:#Redis, #Memcached
Differences: All three improve the performance of web applications by letting you retrieve information from fast, in-memory caches instead of relying on slower disk-based databases, and all support horizontal scaling via sharding. ElastiCache supports both Memcached and Redis. Memcached Cloud provides various data persistence options as well as remote backups for disaster recovery purposes. Redis offers persistence to disk; Memcached does not. This can be very helpful if you cache lots of data, since you avoid the slowness of a fully cold cache. Redis also offers several extra data structures that Memcached doesn’t (Lists, Sets, Sorted Sets, etc.), while Memcached only has key/value pairs. Memcached is multi-threaded; Redis is single-threaded and event-driven. Redis is very fast, but it will never be multi-threaded, so at high scale you can squeeze more connections and transactions out of Memcached. Memcached also tends to be more memory efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
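The "retrieve from a fast cache instead of the database" idea these services share is the cache-aside pattern, sketched below. The dict stands in for Redis or Memcached, and `query_database` is a hypothetical slow lookup, not a real client call.

```python
# Cache-aside sketch: check the in-memory cache first, fall back to the
# (slower) backing store on a miss, then populate the cache.

cache = {}

def query_database(key):
    # Placeholder for a slow, disk-based lookup.
    return f"value-for-{key}"

def get(key):
    if key in cache:              # cache hit: served from memory
        return cache[key]
    value = query_database(key)   # cache miss: hit the backing store
    cache[key] = value            # populate the cache for next time
    return value
```

In production the dict would be a Redis or Memcached client, and entries would carry a TTL so stale data eventually expires.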
14
Category: Security, identity, and access
Description:Authentication and authorization: Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
References:
[AWS]:Identity and Access Management (IAM), AWS Organizations, Multi-Factor Authentication, AWS Directory Service, Cognito(provides solutions to control access to backend resources from your app), Amazon Detective (Investigate potential security issues), AWS IAM Access Analyzer(Easily analyze resource accessibility)
[Azure]:Azure Active Directory, Azure Subscription Management + Azure RBAC, Multi-Factor Authentication, Azure Active Directory Domain Services, Azure Active Directory B2C, Azure Policy, Management Groups
[Google]:Cloud Identity, Identity Platform, Cloud IAM, Policy Intelligence, Cloud Resource Manager, Cloud Identity-Aware Proxy, Context-aware access, Managed Service for Microsoft Active Directory, Security key enforcement, Titan Security Key
Tags: #IAM, #AWSIAM, #AzureIAM, #GoogleIAM, #Multi-factorAuthentication
Differences: One unique thing about AWS IAM is that accounts created in the organization (not through federation) can only be used within that organization. This contrasts with Google and Microsoft. On the good side, every organization is self-contained. On the bad side, users can end up with multiple sets of credentials they need to manage to access different organizations. The second unique element is that every user can have a non-interactive account by creating and using access keys, an interactive account by enabling console access, or both. (Side note: To use the CLI, you need to have access keys generated.)
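All three IAM offerings share the same basic evaluation model: anything not explicitly allowed is denied, and an explicit deny always wins. The sketch below illustrates that model only; the policy format is invented for the example and is not the providers' actual JSON schema.

```python
# Toy allow/deny evaluator: default deny, explicit deny overrides allow.
# A policy is (effect, set_of_actions); the format is made up for this sketch.

def is_allowed(policies, action):
    decision = "deny"                 # default deny
    for effect, actions in policies:
        if action in actions:
            if effect == "deny":
                return False          # explicit deny overrides everything
            decision = "allow"
    return decision == "allow"

policies = [
    ("allow", {"s3:GetObject", "s3:PutObject"}),
    ("deny", {"s3:PutObject"}),
]
print(is_allowed(policies, "s3:GetObject"))  # True
```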
15
Category: Object Storage and Content delivery
Description:Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export (used to move large amounts of data into and out of the AWS public cloud using portable storage devices for transport), Snowball (petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud), CloudFront (massively scaled, globally distributed content delivery network (CDN)), Elastic Block Store (EBS: high-performance block storage service), Elastic File System (shared, elastic file storage system that grows and shrinks as you add and remove files), S3 Infrequent Access (IA: for data that is accessed less frequently but requires rapid access when needed), S3 Glacier (long-term storage of data that is infrequently accessed and for which retrieval latency times of 3 to 5 hours are acceptable), AWS Backup (makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway), Storage Gateway (hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage), AWS Import/Export Disk (accelerates moving large amounts of data into and out of AWS using portable storage devices for transport)
[Azure]:Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, Azure Backup, StorSimple, Import/Export
[Google]:Cloud Storage, GlusterFS, CloudCDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception perhaps is for hybrid scenarios; in this case Azure and AWS clearly win. AWS and Google’s support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft’s fully managed Data Lake Store offers an additional option that will appeal to organisations who are looking to run large-scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of it, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as etags and the if-match family of headers, you should be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.
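The etag / if-match pattern mentioned above works like this: an update only succeeds if the caller presents the etag of the version it last read, which prevents lost updates when two writers race. The in-memory `Blob` class below is a stand-in for illustration, not a cloud SDK call.

```python
# Sketch of optimistic concurrency with etags: every write produces a new
# etag, and a write with a stale etag fails instead of clobbering data.
import uuid

class Blob:
    def __init__(self, data):
        self.data = data
        self.etag = uuid.uuid4().hex

    def update(self, data, if_match):
        if if_match != self.etag:
            raise ValueError("precondition failed: etag mismatch")
        self.data = data
        self.etag = uuid.uuid4().hex  # new version, new etag
        return self.etag
```

A second writer holding the old etag gets an error and must re-read before retrying, rather than silently overwriting the first writer's change.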
16
Category:Internet of things (IoT)
Description:A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]:AWS IoT (Internet of Things), AWS Greengrass, Kinesis Firehose, Kinesis Streams, AWS IoT Things Graph
[Azure]:Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]:Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags:#IoT, #InternetOfThings, #Firebase
Differences: AWS and Azure have a more coherent message with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.
17
Category:Web Applications
Description: Managed hosting platform providing easy-to-use services for deploying and scaling web applications and services. API Gateway is a turnkey solution for publishing APIs to external and internal consumers. CloudFront is a global content delivery network that delivers audio, video, applications, images, and other files.
References:
[AWS]:Elastic Beanstalk (for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS), AWS Wavelength (for delivering ultra-low latency applications for 5G), API Gateway (makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale), CloudFront (web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users; CloudFront delivers your content through a worldwide network of data centers called edge locations), Global Accelerator (improves the availability and performance of your applications with local or global users; it provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances), AWS AppSync (simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources: a GraphQL service with real-time data synchronization and offline programming features)
[Azure]:App Service, API Management, Azure Content Delivery Network
[Google]:App Engine, Cloud API, Cloud Endpoints, Apigee
Tags: #AWSElasticBeanstalk, #AzureAppService, #GoogleAppEngine, #CloudEndpoints, #CloudFront, #Apigee
Differences: With AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application. If developers decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly by using Elastic Beanstalk’s management capabilities. AWS Elastic Beanstalk integrates with more apps than Google App Engine (Datadog, Jenkins, Docker, Slack, GitHub, Eclipse, etc.), while Google App Engine has more built-in features than AWS Elastic Beanstalk (App Identity, Java runtime, Datastore, Blobstore, Images, Go runtime, etc.). Developers describe Amazon API Gateway as “Create, publish, maintain, monitor, and secure APIs at any scale”. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. On the other hand, Google Cloud Endpoints is described as “Develop, deploy and manage APIs on any Google Cloud backend”. An NGINX-based proxy and distributed architecture give strong performance and scalability. Using an OpenAPI Specification or one of Google’s API frameworks, Cloud Endpoints gives you the tools you need for every phase of API development and provides insight with Google Cloud Monitoring, Cloud Trace, and Google Cloud Logging.
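The kind of app these platforms host can be as small as a single WSGI callable. As a hedged sketch: Elastic Beanstalk's Python platform looks for a callable named `application`, and App Service and App Engine host similar WSGI/HTTP apps; the response body here is arbitrary.

```python
# Minimal WSGI application of the kind a managed web platform can host.
def application(environ, start_response):
    body = b"Hello from a managed web platform"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Locally the same callable can be served with the standard library's `wsgiref.simple_server`, which is part of what makes these platforms attractive: the deployable unit is plain, portable application code.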
18
Category:Encryption
Description:Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
References:
[AWS]:Key Management Service AWS KMS, CloudHSM
[Azure]:Key Vault
[Google]:Encryption By Default at Rest, Cloud KMS
Tags:#AWSKMS, #Encryption, #CloudHSM, #EncryptionAtRest, #CloudKMS
Differences: AWS KMS is an ideal solution for organizations that want to manage encryption keys in conjunction with other AWS services. In contrast to AWS CloudHSM, AWS KMS provides a complete set of tools to manage encryption keys, develop applications, and integrate with other AWS services. Google and Azure offer 4096-bit RSA keys; AWS and Google offer 256-bit AES keys. With AWS KMS, you can also bring your own key.
19
Category:Internet of things (IoT)
Description:A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]:AWS IoT, AWS Greengrass, Kinesis Firehose ( captures and loads streaming data in storage and business intelligence (BI) tools to enable near real-time analytics in the AWS cloud), Kinesis Streams (for rapid and continuous data intake and aggregation.), AWS IoT Things Graph (makes it easy to visually connect different devices and web services to build IoT applications.)
[Azure]:Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]:Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags:#IoT, #InternetOfThings, #Firebase
Differences: AWS and Azure have a more coherent message with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.
20
Category:Object Storage and Content delivery
Description: Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]:Simple Storage Services (S3), Import/Export Snowball, CloudFront, Elastic Block Store (EBS), Elastic File System, S3 Infrequent Access (IA), S3 Glacier, AWS Backup, Storage Gateway, AWS Import/Export Disk, Amazon S3 Access Points(Easily manage access for shared data)
[Azure]:Azure Blob storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, Azure Backup, StorSimple, Import/Export
[Google]:Cloud Storage, GlusterFS, CloudCDN
Tags:#S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception perhaps is for hybrid scenarios; in this case Azure and AWS clearly win. AWS and Google’s support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft’s fully managed Data Lake Store offers an additional option that will appeal to organisations who are looking to run large-scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of it, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as etags and the if-match family of headers, you should be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.
21
Category: Backend process logic
Description: Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
References:
[AWS]:AWS Step Functions ( lets you build visual workflows that enable fast translation of business requirements into technical requirements. You can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code.)
[Azure]:Logic Apps (cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.)
[Google]:Dataflow ( fully managed service for executing Apache Beam pipelines within the Google Cloud Platform ecosystem.)
Tags:#AWSStepFunctions, #LogicApps, #Dataflow
Differences: AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. AWS Step Functions belongs to the “Cloud Task Management” category of the tech stack, while Google Cloud Dataflow can be primarily classified under “Real-time Data Processing”. According to the StackShare community, Google Cloud Dataflow has a broader approval, being mentioned in 32 company stacks and 8 developer stacks, compared to AWS Step Functions, which is listed in 19 company stacks and 7 developer stacks.
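The workflow model behind Step Functions and Logic Apps can be sketched as a tiny state machine: each state does some work and names the next state, until a terminal state is reached. The states and data below are invented for illustration; real services add branching, retries, and error handling on top of this core loop.

```python
# Toy state-machine runner in the spirit of the workflow services above.
# Each entry maps a state name to (handler, next_state); None ends the run.

def run(state_machine, state, data):
    while state is not None:
        handler, next_state = state_machine[state]
        data = handler(data)          # do this state's work
        state = next_state            # transition to the next state
    return data

machine = {
    "validate": (lambda d: {**d, "valid": True}, "transform"),
    "transform": (lambda d: {**d, "value": d["value"] * 2}, None),
}
result = run(machine, "validate", {"value": 21})
print(result["value"])  # 42
```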
22
Category: Enterprise application services
Description:Fully integrated Cloud service providing communications, email, document management in the cloud and available on a wide variety of devices.
References:
[AWS]:Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (Sync and Index)
[Azure]:Office 365
[Google]:G Suite
Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite
Differences: G Suite document processing applications like Google Docs are far behind Office 365’s popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, whereas Office 365 can feel clunky.
23
Category: Networking
Description: Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
References:
[AWS]:Virtual Private Cloud (VPC), Cloud virtual networking, Subnets, Elastic Network Interface (ENI), Route Tables, Network ACL, Security Groups, Internet Gateway, NAT Gateway, AWS VPN Gateway, AWS Route 53, AWS Direct Connect, AWS Network Load Balancer, VPN CloudHub, AWS Local Zones, AWS Transit Gateway network manager (centrally manage global networks)
[Azure]:Virtual Network (provides services for building networks within Azure), Subnets (network resources can be grouped by subnet for organisation and security), Network Interface (each virtual machine can be assigned one or more network interfaces (NICs)), Network Security Groups (NSG: contains a set of prioritised ACL rules that explicitly grant or deny access), Azure VPN Gateway (allows connectivity to on-premise networks), Azure DNS, Traffic Manager (DNS-based traffic routing solution), ExpressRoute (provides connections up to 10 Gbps to Azure services over a dedicated fibre connection), Azure Load Balancer, Network Peering, Azure Stack (allows organisations to use Azure services running in private data centers), Azure Log Analytics
[Google]:Cloud Virtual Network, Subnets, Network Interface, Protocol forwarding, Cloud VPN, Cloud DNS, Virtual Private Network, Cloud Interconnect, CDN Interconnect, Stackdriver, Google Cloud Load Balancing
Tags:#VPC, #Subnets, #ACL, #VPNGateway, #CloudVPN, #NetworkInterface, #ENI, #RouteTables, #NSG, #NetworkACL, #InternetGateway, #NatGateway, #ExpressRoute, #CloudInterConnect, #StackDriver
Differences: Subnets group related resources; however, unlike AWS and Azure, Google does not constrain the private IP address ranges of subnets to the address space of the parent network. Like Azure, Google has a built-in internet gateway that can be specified from routing rules.
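The constraint described above is easy to check with the standard library: in AWS and Azure a subnet's CIDR block must fall within the parent network's address space. The CIDR blocks below are arbitrary examples.

```python
# Check subnet containment with the stdlib ipaddress module.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")        # parent network range
subnet = ipaddress.ip_network("10.0.1.0/24")     # inside the parent
outside = ipaddress.ip_network("192.168.0.0/24") # outside the parent

print(subnet.subnet_of(vpc))   # True  -- valid in AWS/Azure
print(outside.subnet_of(vpc))  # False -- Google would still permit this
```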
24
Category: Management
Description: A unified management console that simplifies building, deploying, and operating your cloud resources.
References:
[AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (Identify optimal AWS Compute resources)
[Azure]:Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health
[Google]:Google Cloud Console, Cost Management, Security Command Center, Stackdriver
Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter
Differences: AWS Console categorizes its Infrastructure as a Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the Hybrid Cloud space allowing companies to integrate onsite servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer. GCP specializes in high compute offerings like Big Data, analytics and machine learning. It also offers considerable scale and load balancing – Google knows data centers and fast response time.
25
Category: DevOps and application monitoring
Description: Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments; Cloud services for collaborating on code development; Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services; Fully managed build service that supports continuous integration and deployment.
References:
[AWS]:AWS CodePipeline(orchestrates workflow for continuous integration, continuous delivery, and continuous deployment), AWS CloudWatch (monitor your AWS resources and the applications you run on AWS in real time. ), AWS X-Ray (application performance management service that enables a developer to analyze and debug applications in aws), AWS CodeDeploy (automates code deployments to Elastic Compute Cloud (EC2) and on-premises servers. ), AWS CodeCommit ( source code storage and version-control service), AWS Developer Tools, AWS CodeBuild (continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. ), AWS Command Line Interface (unified tool to manage your AWS services), AWS OpsWorks (Chef-based), AWS CloudFormation ( provides a common language for you to describe and provision all the infrastructure resources in your cloud environment.), Amazon CodeGuru (for automated code reviews and application performance recommendations)
[Azure]:Azure Monitor, Azure DevOps, Azure Developer Tools, Azure CLI Azure PowerShell, Azure Automation, Azure Resource Manager , VM extensions , Azure Automation
[Google]:DevOps Solutions (infrastructure as code, configuration management, secrets management, serverless computing, continuous delivery, continuous integration), Stackdriver (combines metrics, logs, and metadata from all of your cloud accounts and projects into a single comprehensive view of your environment)
Tags: #CloudWatch, #StackDriver, #AzureMonitor, #AWSXray, #AWSCodeDeploy, #AzureDevOps, #GoogleDevopsSolutions
Differences: CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. Azure DevOps provides unlimited private Git hosting, cloud build for continuous integration, agile planning, and release management for continuous delivery to the cloud and on-premises, and includes broad IDE support.
SageMaker | Azure Machine Learning Studio
A collaborative, drag-and-drop tool to build, test, and deploy predictive analytics solutions on your data.
Alexa Skills Kit | Microsoft Bot Framework
Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
Amazon Lex | Language Understanding (LUIS)
Allows your applications to understand user commands contextually.
Amazon Polly, Amazon Transcribe | Azure Speech Services
Enables both Speech to Text, and Text into Speech capabilities.
The Speech Services are the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It’s easy to speech enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs.
Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
Amazon Rekognition | Cognitive Services
Computer Vision: Extract information from images to categorize and process visual data.
Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.
Face: Detect, identify, and analyze faces in photos.
Emotions: Recognize emotions in images.
Alexa Skill Set | Azure Virtual Assistant
The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.
Big data and analytics
Data warehouse
AWS Redshift | SQL Data Warehouse
Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
Big data processing
EMR | Azure Databricks
Apache Spark-based analytics platform.
Managed Hadoop service. Deploy and manage Hadoop clusters in Azure.
Data orchestration / ETL
AWS Data Pipeline, AWS Glue | Data Factory
Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.
A fully managed service that serves as a system of registration and system of discovery for enterprise data sources.
Analytics and visualization
AWS Kinesis Analytics | Stream Analytics
Data Lake Analytics | Data Lake Store
Storage and analysis platforms that create insights from large quantities of data, or data that originates from many sources.
Business intelligence tools that build visualizations, perform ad hoc analysis, and develop business insights from data.
Delivers full-text search and related search analytics and capabilities.
Amazon Athena | Azure Data Lake Analytics
Provides a serverless interactive query service that uses standard SQL for analyzing databases.
Compute
Virtual servers
Elastic Compute Cloud (EC2) | Azure Virtual Machines
Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.
Run large-scale parallel and high-performance computing applications efficiently in the cloud.
AWS Auto Scaling | Virtual Machine Scale Sets
Allows you to automatically change the number of VM instances. You define metrics and thresholds that determine whether the platform adds or removes instances.
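The metric-and-threshold logic described above can be sketched in a few lines. This is an illustrative model only; the metric name, threshold values, and one-instance step size below are assumptions for the sketch, not actual AWS Auto Scaling or Azure Scale Set defaults:

```python
def desired_instance_count(current: int, cpu_percent: float,
                           scale_out_at: float = 75.0,
                           scale_in_at: float = 25.0,
                           minimum: int = 1, maximum: int = 10) -> int:
    """Return the new instance count for a CPU-based scaling policy.

    Adds one instance when the metric breaches the upper threshold,
    removes one when it falls below the lower threshold, and clamps
    the result to the [minimum, maximum] range.
    """
    if cpu_percent > scale_out_at:
        return min(current + 1, maximum)
    if cpu_percent < scale_in_at:
        return max(current - 1, minimum)
    return current

print(desired_instance_count(3, 90.0))  # breach above 75% -> scale out to 4
print(desired_instance_count(3, 10.0))  # below 25% -> scale in to 2
print(desired_instance_count(3, 50.0))  # within band -> stays at 3
```

Real scaling policies on both platforms add refinements such as cooldown periods and multi-instance step adjustments, but the add/remove decision reduces to this kind of threshold comparison.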
VMware Cloud on AWS | Azure VMware by CloudSimple
Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.
Containers and container orchestrators
EC2 Container Service (ECS), Fargate | Azure Container Instances
Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.
EC2 Container Registry | Azure Container Registry
Allows customers to store Docker formatted images. Used to create all types of container deployments on Azure.
Elastic Container Service for Kubernetes (EKS) | Azure Kubernetes Service (AKS)
Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.
App Mesh | Service Fabric Mesh
Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking.
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.
Serverless
AWS Lambda | Azure Functions
Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code.
Database
Relational database
AWS RDS | Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call, as AWS does not offer an SSH connection to RDS instances.
NoSQL / Document
DynamoDB and SimpleDB | Azure Cosmos DB
A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
Caching
AWS ElastiCache | Azure Cache for Redis
An in-memory–based, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database.
Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
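The “retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases” pattern described above is commonly called cache-aside. The sketch below illustrates it with a plain dictionary standing in for Redis/Memcached and a simulated slow database; all names and timings are hypothetical, for illustration only:

```python
import time

cache = {}  # stands in for Redis or Memcached in this sketch

def slow_database_read(key):
    """Simulates a disk-based database query."""
    time.sleep(0.01)  # pretend disk latency
    return f"value-for-{key}"

def get_with_cache(key):
    """Cache-aside: check the cache first, fall back to the database
    on a miss, then populate the cache so later reads skip the
    database entirely."""
    if key in cache:
        return cache[key]          # cache hit: fast path
    value = slow_database_read(key)  # cache miss: slow path
    cache[key] = value
    return value

print(get_with_cache("user:42"))  # first read: hits the database
print(get_with_cache("user:42"))  # second read: served from cache
```

With a real ElastiCache or Azure Cache for Redis endpoint, the dictionary would be replaced by a Redis client and the entries given a TTL so cached data eventually expires.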
Database migration
AWS Database Migration Service | Azure Database Migration Service
Migrates database schema and data from one database format to a target database technology in the cloud.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
DevOps and application monitoring
AWS CloudWatch, AWS X-Ray | Azure Monitor
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
AWS CodeDeploy, AWS CodeCommit, AWS CodePipeline | Azure DevOps
A cloud service for collaborating on code development.
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.
AWS Developer Tools | Azure Developer Tools
Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services.
The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.
AWS CodeBuild | Azure DevOps
Fully managed build service that supports continuous integration and deployment.
AWS Command Line Interface | Azure CLI, Azure PowerShell
Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
AWS OpsWorks (Chef-based) | Azure Automation
Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources.
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.
AWS CloudFormation | Azure Resource Manager, VM extensions, Azure Automation
Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks.
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
Networking
Cloud virtual networking
Virtual Private Cloud (VPC) | Virtual Network
Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
Cross-premises connectivity
AWS VPN Gateway | Azure VPN Gateway
Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).
DNS management
AWS Route 53 | Azure DNS
Manage your DNS records using the same credentials and billing and support contract as your other Azure services.
Route 53 | Traffic Manager
A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.
Dedicated network
AWS Direct Connect | ExpressRoute
Establishes a dedicated, private network connection from a location to the cloud provider (not over the Internet).
Load balancing
AWS Network Load Balancer | Azure Load Balancer
Azure Load Balancer load-balances traffic at layer 4 (TCP or UDP).
Application Load Balancer | Application Gateway
Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.
Internet of things (IoT)
AWS IoT | Azure IoT Hub
A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale.
AWS Greengrass | Azure IoT Edge
Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
Kinesis Firehose, Kinesis Streams | Event Hubs
Services that allow the mass ingestion of small data inputs, typically from devices and sensors, to process and route the data.
AWS IoT Things Graph | Azure Digital Twins
Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.
Management
Trusted Advisor | Azure Advisor
Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.
AWS Usage and Billing Report | Azure Billing API
Services to help generate, monitor, forecast, and share billing data for resource usage by time, organization, or product resources.
AWS Management Console | Azure portal
A unified management console that simplifies building, deploying, and operating your cloud resources.
AWS Application Discovery Service | Azure Migrate
Assesses on-premises workloads for migration to Azure, performs performance-based sizing, and provides cost estimations.
Amazon EC2 Systems Manager | Azure Monitor
Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
AWS Personal Health Dashboard | Azure Resource Health
Provides detailed information about the health of resources as well as recommended actions for maintaining resource health.
Security, identity, and access
Authentication and authorization
Identity and Access Management (IAM) | Azure Active Directory
Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
Identity and Access Management (IAM) | Azure Role Based Access Control
Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
AWS Organizations | Azure Subscription Management + Azure RBAC
Security policy and role management for working with multiple accounts.
Multi-Factor Authentication | Multi-Factor Authentication
Safeguard access to data and applications while meeting user demand for a simple sign-in process.
AWS Directory Service | Azure Active Directory Domain Services
Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.
Cognito | Azure Active Directory B2C
A highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities.
AWS Organizations | Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.
AWS Organizations | Management Groups
Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.
Encryption
Server-side encryption with Amazon S3 Key Management Service | Azure Storage Service Encryption
Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
Key Management Service AWS KMS, CloudHSM | Key Vault
Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).
Firewall
Web Application Firewall | Application Gateway – Web Application Firewall
A firewall that protects web applications from common web exploits.
Web Application Firewall | Azure Firewall
Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.
Security
Inspector | Security Center
An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.
Certificate Manager | App Service Certificates available on the Portal
Service that allows customers to create, manage, and consume certificates seamlessly in the cloud.
GuardDuty | Azure Advanced Threat Protection
Detect and investigate advanced attacks on-premises and in the cloud.
AWS Artifact | Service Trust Portal
Provides access to audit reports, compliance guides, and trust documents from across cloud services.
AWS Shield | Azure DDoS Protection Service
Provides cloud services with protection from distributed denial-of-service (DDoS) attacks.
Storage
Object storage
Simple Storage Services (S3) | Azure Blob storage
Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
Virtual server disks
Elastic Block Store (EBS) | Azure managed disks
SSD storage optimized for I/O intensive read/write operations. For use as high-performance Azure virtual machine storage.
Shared files
Elastic File System | Azure Files
Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.
Archiving and backup
S3 Infrequent Access (IA) | Azure Storage cool tier
Cool storage is a lower-cost tier for storing data that is infrequently accessed and long-lived.
S3 Glacier | Azure Storage archive access tier
Archive storage has the lowest storage cost and higher data retrieval costs compared to hot and cool storage.
AWS Backup | Azure Backup
Back up and recover files and folders from the cloud, and provide offsite protection against data loss.
Hybrid storage
Storage Gateway | StorSimple
Integrates on-premises IT environments with cloud storage. Automates data management and storage, plus supports disaster recovery.
Bulk data transfer
AWS Import/Export Disk | Import/Export
A data transport solution that uses secure disks and appliances to transfer large amounts of data. Also offers data protection during transit.
AWS Import/Export Snowball, Snowball Edge, Snowmobile | Azure Data Box
Petabyte- to exabyte-scale data transport solution that uses secure data storage devices to transfer large amounts of data to and from Azure.
Web applications
Elastic Beanstalk | App Service
Managed hosting platform providing easy-to-use services for deploying and scaling web applications and services.
API Gateway | API Management
A turnkey solution for publishing APIs to external and internal consumers.
CloudFront | Azure Content Delivery Network
A global content delivery network that delivers audio, video, applications, images, and other files.
Global Accelerator | Azure Front Door
Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scaling out with API-driven global actions, and add independent fault tolerance to your back-end microservices in Azure, or anywhere.
Miscellaneous
Backend process logic
AWS Step Functions | Logic Apps
Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.
Enterprise application services
Amazon WorkMail, Amazon WorkDocs | Office 365
Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices.
Gaming
GameLift, GameSparks | PlayFab
Managed services for hosting dedicated game servers.
Media transcoding
Elastic Transcoder | Media Services
Services that offer broadcast-quality video streaming services, including various transcoding technologies.
Workflow
Simple Workflow Service (SWF) | Logic Apps
Serverless technology for connecting apps, data, and devices anywhere, whether on-premises or in the cloud, with a large ecosystem of SaaS and cloud-based connectors.
Hybrid
Outposts | Azure Stack
Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.
How does a business decide between Microsoft Azure or AWS?
Basically, it all comes down to your organizational needs and whether a particular area is especially important to your business (e.g., serverless, or integration with Microsoft applications).
The main deciding factors are compute options, pricing, and purchasing options.
Here’s a brief comparison of the compute option features across cloud providers:
Here’s an example of a few instances’ costs (all are Linux OS):
Each provider offers a variety of options to lower costs from the listed On-Demand prices. These can fall under reservations, spot and preemptible instances and contracts.
Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.
Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.
With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.
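The purchasing options above all reduce to the same arithmetic: a percentage discount applied to the on-demand rate. The sketch below makes that concrete; the $0.10/hour base rate and the 40% and 70% discounts are hypothetical figures for illustration only, since real rates vary by instance type, region, term length, and provider:

```python
def effective_hourly_cost(on_demand_rate: float, discount_percent: float) -> float:
    """Apply a purchasing-option discount to an on-demand hourly rate."""
    return on_demand_rate * (1 - discount_percent / 100)

# Hypothetical figures for illustration only.
on_demand = 0.10                                   # $/hour, on demand
reserved = effective_hourly_cost(on_demand, 40)    # e.g. a 40% reservation discount
spot = effective_hourly_cost(on_demand, 70)        # e.g. a 70% spot / low-priority discount

print(f"reserved: ${reserved:.3f}/h, spot: ${spot:.3f}/h")
```

The trade-off is flexibility: reservations lock in a term commitment, while spot and low-priority capacity can be reclaimed by the provider at short notice.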
You can read more about the differences between AWS and Azure to help decide which your business should use in this blog post.
Source: AWS to Azure services comparison – Azure Architecture
AWS Certification Exams Prep: Serverless Facts and Summaries and Question and Answers


AWS Serverless – Facts and summaries, Top 20 AWS Serverless Questions and Answers Dump
Definition 1: Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It can be a form of utility computing.
Definition 2: AWS Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.
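The consumption-based pricing in Definition 1 can be made concrete with the usual serverless billing formula: compute time actually used (in GB-seconds, i.e. memory allocated times execution duration) plus a small per-request fee. The rates below follow AWS Lambda's commonly published figures but should be treated as illustrative, since pricing varies by region and changes over time:

```python
def lambda_style_cost(invocations: int, avg_duration_ms: float,
                      memory_mb: int,
                      price_per_gb_second: float = 0.0000166667,
                      price_per_million_requests: float = 0.20) -> float:
    """Estimate a consumption-based serverless bill.

    You pay only for compute actually consumed (GB-seconds) plus a
    per-request charge -- there is no idle-capacity cost.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * price_per_gb_second
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

# 5 million invocations at 120 ms each with 512 MB of memory
print(round(lambda_style_cost(5_000_000, 120, 512), 2))  # prints 6.0
```

Note what the formula implies: if nothing runs, the bill is zero, which is the key contrast with pre-purchased capacity on virtual machines.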
AWS Serverless Facts and summaries
- The AWS Serverless Application Model (AWS SAM) is a model to define serverless applications. AWS SAM is natively supported by AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.
- You can use AWS CodePipeline with the AWS Serverless Application Model to automate building, testing, and deploying serverless applications. AWS CodeBuild integrates with CodePipeline to provide automated builds. You can use AWS CodeDeploy to gradually roll out and test new Lambda function versions.
- You can monitor and troubleshoot the performance of your serverless applications and AWS Lambda functions with AWS services and third-party tools. Amazon CloudWatch helps you see real-time reporting metrics and logs for your serverless applications. You can use AWS X-Ray to debug and trace your serverless applications and AWS Lambda.
- The AWS Serverless Application Repository is a managed repository for serverless applications. It enables teams, organizations, and individual developers to store and share reusable applications, and easily assemble and deploy serverless architectures in powerful new ways. Using the Serverless Application Repository, you don’t need to clone, build, package, or publish source code to AWS before deploying it. Instead, you can use pre-built applications from the Serverless Application Repository in your serverless architectures, helping you and your teams reduce duplicated work, ensure organizational best practices, and get to market faster.
- Anyone with an AWS account can publish a serverless application to the Serverless Application Repository. Applications can be privately shared with specific AWS accounts. Applications that are shared publicly include a link to the application’s source code so others can view what the application does and how it works.
- What kinds of applications are available in the AWS Serverless Application Repository? The AWS Serverless Application Repository includes applications for Alexa Skills, chatbots, data processing, IoT, real time stream processing, web and mobile back-ends, social media trend analysis, image resizing, and more from publishers on AWS.
- The AWS Serverless Application Repository enables developers to publish serverless applications developed in a GitHub repository. Using AWS CodePipeline to link a GitHub source with the AWS Serverless Application Repository can make the publishing process even easier, and the process can be set up in minutes.
- What two arguments does a Python Lambda handler function require? Event and context.
- A Lambda deployment package contains the function code and any libraries not included within the runtime environment.
- When referencing the remaining time left for a Lambda function to run within the function’s code you would use The context object.
- Long-running, memory-intensive workloads are LEAST suited to AWS Lambda.
- The maximum execution duration of a Lambda function is fifteen minutes.
- Logs for Lambda functions are stored in Amazon CloudWatch.
- Docker container images are constructed using instructions in a file called a Dockerfile.
- The ECS Task Agent is responsible for starting and stopping tasks. It runs inside the EC2 instance and reports information such as running tasks and resource utilization.
- AWS ECR stores container images.
- Elastic Beanstalk is used to deploy and scale web applications and services developed with a supported platform.
- When deploying a simple Python web application with Elastic Beanstalk, which AWS resources will be created and managed for you? An Elastic Load Balancer, an S3 bucket, and an EC2 instance.
- When using Elastic Beanstalk you can deploy your web applications by:
  - Configuring a git repository with Elastic Beanstalk so that changes will be detected and your application will be updated.
  - Uploading code files to the Elastic Beanstalk service.
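Several of the facts above (the handler's two required arguments, reading the remaining execution time from the context object, and the fifteen-minute cap) fit together in one minimal handler. The stub context class below is only for running the sketch outside AWS; in Lambda itself the runtime injects the real context object:

```python
def handler(event, context):
    """A Python Lambda handler receives two arguments: the event
    payload and a context object supplied by the runtime."""
    # The context object is how code checks how much run time is left.
    remaining_ms = context.get_remaining_time_in_millis()
    name = event.get("name", "world")
    return {"greeting": f"hello {name}", "time_left_ms": remaining_ms}

# Stub for local testing only -- the real context is injected by the
# Lambda runtime and also carries fields like function_name.
class FakeContext:
    def get_remaining_time_in_millis(self):
        return 900_000  # the fifteen-minute maximum, in milliseconds

print(handler({"name": "dev"}, FakeContext()))
```

This is also why long-running workloads suit Lambda poorly: once `get_remaining_time_in_millis()` reaches zero, the function is terminated regardless of progress.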
Top
Reference: AWS Serverless
AWS LAMBDA EXPLAINED GRAPHICALLY:
AWS Serverless: Top 20 Questions and Answers Dump
Q00: You have created a serverless application which converts text into speech using a combination of S3, API Gateway, Lambda, Polly, DynamoDB and SNS. Your users complain that only some text is being converted, whereas longer passages of text do not get converted. What could be the cause of this problem?
- A. Polly has built-in censorship, so if you try and send it text that is deemed offensive, it will not generate an MP3.
- B. You’ve placed your DynamoDB table in a single availability zone, which is currently down, causing an outage.
- C. Your Lambda function needs a longer execution time. You should check how long is needed in the fringe cases and increase the timeout inside the function to slightly longer than that.
- D. AWS X-ray service is interfering with the application and should be disabled.
Q1: How does API Gateway deal with legacy SOAP applications?
- A. Converts the response from the application to REST
- B. Converts the response from the application to HTML
- C. Provides webservice passthrough for SOAP applications
- D. Converts the response from the application to XML
Q3: You have launched a new web application on AWS using API Gateway, Lambda and S3. Someone posts a thread to Reddit about your application and it starts to go viral.
You start receiving 100,000 requests every second and you notice that most requests are similar.
Your web application begins to struggle. What can you do to optimize the performance of your application?
- A. Enable API Gateway Accelerator
- B. Enable API Gateway caching to cache frequent requests.
- C. Change your Route 53 alias record to point to AWS Neptune and then configure Neptune to filter your API requests down to genuine requests only.
- D. Migrate your API Gateway to a Network Load Balancer and enable session stickiness for all sessions.
Q4: Which of the following services does X-Ray integrate with? (Choose 3)
- A. Elastic Load Balancer
- B. Lambda
- C. S3
- D. API Gateway
Q5: You are a developer for a busy real estate company and you want to give other real estate agents the ability to show properties on your books, but skinned so that it looks like their own website. You decide the most efficient way to do this is to expose your API to the public. The project works well; however, one of your competitors starts abusing this, sending your API tens of thousands of requests per second, which generates an HTTP 429 error. Each agent connects to your API using an individual API key. What action can you take to stop this behavior?
- A. Use AWS Shield Advanced API protection to block the requests.
- B. Deploy multiple API Gateways and give the agent access to another API Gateway.
- C. Place an AWS Web Application Firewall in front of API gateway and filter requests.
- D. Throttle the agent’s API access using the individual API keys.
Q6: You are developing a new application using serverless infrastructure and are using services such as S3, DynamoDB, Lambda, API Gateway, CloudFront, CloudFormation and Polly.
You deploy your application to production and your end users begin complaining about receiving an HTTP 429 error. What could be the cause of the error?
- A. You enabled API throttling for a rate limit of 1000 requests per second while in development and now that you have deployed to production your API Gateway is being throttled.
- B. Your CloudFormation stack is not valid and is failing to deploy properly, which is causing an HTTP 429 error.
- C. Your Lambda function does not have sufficient permissions to read from DynamoDB and this is generating an HTTP 429 error.
- D. You have an S3 bucket policy which is preventing Lambda from being able to write to your bucket, generating an HTTP 429 error.
Q7: What is the format of structured notification messages sent by Amazon SNS?
- A. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
- B. A JSON object containing MessageId, DuplicateFlag, Message and other values
- C. An XML object containing MessageId, DuplicateFlag, Message and other values
- D. A JSON object containing MessageId, unsubscribeURL, Subject, Message and other values
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS Lambda facts and summaries and Questions and Answers Dump
- AWS SQS facts and summaries and Questions and Answers Dump
- AWS RDS facts and summaries and Questions and Answers Dump
- AWS ECS facts and summaries and Questions and Answers Dump
- AWS CloudWatch facts and summaries and Questions and Answers Dump
- AWS SES facts and summaries and Questions and Answers Dump
- AWS EBS facts and summaries and Questions and Answers Dump
- AWS ELB facts and summaries and Questions and Answers Dump
- AWS Autoscaling facts and summaries and Questions and Answers Dump
- AWS VPC facts and summaries and Questions and Answers Dump
- AWS KMS facts and summaries and Questions and Answers Dump
- AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
- AWS CodeBuild facts and summaries and Questions and Answers Dump
- AWS CodeDeploy facts and summaries and Questions and Answers Dump
- AWS CodePipeline facts and summaries and Questions and Answers Dump
AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers


AWS Developer – Deployment Theory Facts and summaries, Top 80 AWS Developer DVA-C02 Theory Questions and Answers Dump
Definition 1: The AWS Developer is responsible for designing, deploying, and developing cloud applications on the AWS platform.
Definition 2: The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.
The AWS Certified Developer Associate certification is a widely recognized certification that validates a candidate’s expertise in developing and maintaining applications on the Amazon Web Services (AWS) platform.
The certification is about to undergo a major change with the introduction of the new exam version DVA-C02, replacing the current DVA-C01. In this article, we will discuss the differences between the two exams and what candidates should consider in terms of preparation for the new DVA-C02 exam.
Quick facts
- What’s happening?
- The DVA-C01 exam is being replaced by the DVA-C02 exam.
- When is this taking place?
- The last day to take the current exam is February 27th, 2023 and the first day to take the new exam is February 28th, 2023.
- What’s the difference?
- The new exam features some new AWS services and features.
Main differences between DVA-C01 and DVA-C02
The table below details the differences between the DVA-C01 and DVA-C02 exam domains and weightings:
In terms of the exam content weightings, the DVA-C02 exam places a greater emphasis on deployment and management, with a slightly reduced emphasis on development and refactoring. This shift reflects the increased importance of operations and management in cloud computing, as well as the need for developers to have a strong understanding of how to deploy and maintain applications on the AWS platform.
One major difference between the two exams is the focus on the latest AWS services and features. The DVA-C02 exam covers around 57 services vs only 33 services in the DVA-C01. This reflects the rapidly evolving AWS ecosystem and the need for developers to be up-to-date with the latest services and features in order to effectively build and maintain applications on the platform.
Training resources for the AWS Developer Associate
In terms of preparation for the DVA-C02 exam, we strongly recommend enrolling in our on-demand training courses for the AWS Developer Associate certification. It is important for candidates to familiarize themselves with the latest AWS services and features, as well as the updated exam content weightings. Practical experience working with AWS services and hands-on experimentation with new services and features will be key to success on the exam. Candidates should also focus on their understanding of security best practices, access control, and compliance, as these topics will carry a greater weight in the new exam.
In conclusion, the change from the DVA-C01 to the DVA-C02 exam represents a major shift in the focus and content of the AWS Certified Developer Associate certification. Candidates preparing for the new exam should focus on familiarizing themselves with the latest AWS services and features, as well as the updated exam content weightings, and placing a strong emphasis on security, governance, and compliance.
With the right preparation and focus, candidates can successfully navigate the changes in the DVA-C02 exam and maintain their status as a certified AWS Developer Associate.
AWS Developer and Deployment Theory Facts and summaries
- Continuous Integration is about integrating or merging the code changes frequently, at least once per day. It enables multiple devs to work on the same application.
- Continuous delivery is all about automating the build, test, and deployment functions.
- Continuous Deployment fully automates the entire release process, code is deployed into Production as soon as it has successfully passed through the release pipeline.
- AWS CodePipeline is a continuous integration/continuous delivery service:
- It automates your end-to-end software release process based on a user-defined workflow
- It can be configured to automatically trigger your pipeline as soon as a change is detected in your source code repository
- It integrates with other services from AWS like CodeBuild and CodeDeploy, as well as third-party custom plug-ins.
- AWS CodeBuild is a fully managed build service. It can build source code, run tests and produce software packages based on commands that you define yourself.
- By default, the buildspec.yml file defines the build commands and settings used by CodeBuild to run your build.
- AWS CodeDeploy is a fully managed automated deployment service and can be used as part of a Continuous Delivery or Continuous Deployment process.
- There are two types of deployment approach:
- In-place (rolling update): you stop the application on each host and deploy the latest code. EC2 and on-premises systems only. To roll back, you must re-deploy the previous version of the application.
- Blue/Green: new instances are provisioned and the new application is deployed to these new instances. Traffic is routed to the new instances according to your own schedule. Supported for EC2, on-premises systems and Lambda functions. Rollback is easy: just route the traffic back to the original instances. Blue is the active deployment, green is the new release.
- Docker allows you to package your software into containers, which you can run in Elastic Container Service (ECS)
- A Docker container includes everything the software needs to run, including code, libraries, runtime, environment variables, etc.
- A special file called Dockerfile is used to specify the instructions needed to assemble your Docker image.
- Once built, Docker images can be stored in Elastic Container Registry (ECR) and ECS can then use the image to launch Docker Containers.
- AWS CodeCommit is based on Git. It provides centralized repositories for all your code, binaries, images, and libraries.
- CodeCommit tracks and manages code changes. It maintains version history.
- CodeCommit manages updates from multiple sources and enables collaboration.
- To support CORS, an API resource needs to implement an OPTIONS method that can respond to the OPTIONS preflight request with the following headers: Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Origin.
- You have a legacy application that works via XML messages. You need to place the application behind the API Gateway in order for customers to make API calls. Which of the following would you need to configure? You will need to work with the Request and Response Data mappings.
- Your application currently points to several Lambda functions in AWS. A change is being made to one of the Lambda functions. You need to ensure that application traffic is shifted slowly from one Lambda function to the other. Which of the following steps would you carry out?
- Create an ALIAS with the –routing-config parameter
- Update the ALIAS with the –routing-config parameter
- AWS CodeDeploy: The AppSpec file defines all the parameters needed for the deployment e.g. location of application files and pre/post deployment validation tests to run.
- For EC2 / on-premises systems, the appspec.yml file must be placed in the root directory of your revision (the same folder that contains your application code). Written in YAML.
- For Lambda and ECS deployment, the AppSpec file can be YAML or JSON
- Visual workflows are automatically created when working with Step Functions
- API Gateway stages store configuration for deployment. An API Gateway Stage refers to a snapshot of your API
- AWS SWF guarantees delivery order of messages/tasks
- Blue/Green Deployments with CodeDeploy on AWS Lambda can happen in multiple ways. Which of these is a potential option? Linear, All at once, Canary
- X-Ray Filter Expressions allow you to search through request information using characteristics like URL Paths, Trace ID, Annotations
- S3 has eventual consistency for overwrite PUTS and DELETES.
- What can you do to ensure the most recent version of your Lambda functions is in CodeDeploy?
Specify the version to be deployed in AppSpec file.
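The AppSpec notes above can be illustrated with a minimal sketch of an appspec.yml for an EC2/on-premises deployment; the file paths and script names are hypothetical:

```yaml
# Hypothetical appspec.yml for an EC2/on-premises CodeDeploy revision
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
  ValidateService:
    - location: scripts/validate.sh
      timeout: 120
```

For Lambda deployments the AppSpec instead names the function, alias, and target version, which is how CodeDeploy knows which Lambda version to shift traffic to.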
Reference: AWS Developer Tools
AWS Developer and Deployment Theory: Top 80 Questions and Answers Dump
Q0: Which AWS service can be used to compile source code, run tests and package code?
- A. CodePipeline
- B. CodeCommit
- C. CodeBuild
- D. CodeDeploy
Q1: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)
- A. Set the Rollback on failure radio button to No in the CloudFormation console
- B. Set Termination Protection to Enabled in the CloudFormation console
- C. Use the –disable-rollback flag with the AWS CLI
- D. Use the –enable-termination-protection protection flag with the AWS CLI
Q2: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?
- A. Continuous Integration
- B. Continuous Deployment
- C. Continuous Delivery
- D. Continuous Development
Q3: When deploying application code to EC2, the AppSpec file can be written in which language?
- A. JSON
- B. JSON or YAML
- C. XML
- D. YAML
Q4: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?
- A. CloudFormation will rollback only the failed components
- B. CloudFormation will rollback the entire stack
- C. Failed component will remain available for debugging purposes
- D. CloudFormation will ask you if you want to continue with the deployment
Q5: You want to receive an email whenever a user pushes code to CodeCommit repository, how can you configure this?
- A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
- B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
- C. Configure Notifications in the console, this will create a CloudWatch events rule to send a notification to a SNS topic which will trigger an email to be sent to the user.
- D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.
Q6: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries
- A. CodeCommit
- B. CodeBuild
- C. CodePipeline
- D. ElasticFileSystem
Q7: You are using CloudFormation to create a new S3 bucket. Which of the following sections would you use to define the properties of your bucket?
- A. Conditions
- B. Parameters
- C. Outputs
- D. Resources
Q8: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template
would you use to define these?
- A. Transforms
- B. Outputs
- C. Resources
- D. Instances
Q9: Which AWS service can be used to fully automate your entire release process?
- A. CodeDeploy
- B. CodePipeline
- C. CodeCommit
- D. CodeBuild
Q10: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?
- A. Outputs
- B. Transforms
- C. Resources
- D. Exports
Q11: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?
- A. Inputs
- B. Resources
- C. Transforms
- D. Files
Q12: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file used to specify source files and lifecycle hooks?
- A. buildspec.yml
- B. appspec.json
- C. appspec.yml
- D. buildspec.json
Q13: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?
- A. Share the code using an EBS volume
- B. Copy and paste the code into the template each time you need to use it
- C. Use a CloudFormation nested stack
- D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.
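A nested stack (option C above) reuses a common template stored in S3 by referencing it as a resource in the parent template; a minimal sketch, with a hypothetical bucket URL and parameter:

```yaml
# Hypothetical parent template reusing a common web-server template
Resources:
  WebServerStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates-bucket/webserver.yaml
      Parameters:
        InstanceType: t3.micro
```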
Q14: In the CodeDeploy AppSpec file, what are hooks used for?
- A. To reference AWS resources that will be used during the deployment
- B. Hooks are reserved for future use
- C. To specify files you want to copy during the deployment.
- D. To specify scripts or functions that you want to run at set points in the deployment lifecycle
Q15: You need to set up a RESTful API service in AWS that would be serviced via the following URL: https://democompany.com/customers. Which of the following combination of services can be used for development and hosting of the RESTful service? Choose 2 answers from the options below
- A. AWS Lambda and AWS API gateway
- B. AWS S3 and Cloudfront
- C. AWS EC2 and AWS Elastic Load Balancer
- D. AWS SQS and Cloudfront
Q16: As a developer, you have created a Lambda function that is used to work with a bucket in Amazon S3. The Lambda function is not working as expected. You need to debug the issue and understand the underlying problem. How can you accomplish this in an easily understandable way?
- A. Use AWS Cloudwatch metrics
- B. Put logging statements in your code
- C. Set the Lambda function debugging level to verbose
- D. Use AWS Cloudtrail logs
Q17: You have a Lambda function that is invoked asynchronously. You need a way to check and debug issues if the function fails. How could you accomplish this?
- A. Use AWS Cloudwatch metrics
- B. Assign a dead letter queue
- C. Configure SNS notifications
- D. Use AWS Cloudtrail logs
Q18: You are developing an application that is going to make use of Amazon Kinesis. Due to the high throughput, you decide to have multiple shards for the streams. Which of the following is TRUE when it comes to processing data across multiple shards?
- A. You cannot guarantee the order of data across multiple shards. It’s possible only within a shard
- B. Order of data is possible across all shards in a streams
- C. Order of data is not possible at all in Kinesis streams
- D. You need to use Kinesis firehose to guarantee the order of data
Q19: You’ve developed a Lambda function and are now in the process of debugging it. You add the necessary print statements in the code to assist in the debugging. You go to CloudWatch Logs, but you see no logs for the Lambda function. Which of the following could be the underlying issue?
- A. You’ve not enabled versioning for the Lambda function
- B. The IAM Role assigned to the Lambda function does not have the necessary permission to create Logs
- C. There is not enough memory assigned to the function
- D. There is not enough time assigned to the function
Q20: Your application is developed to pick up metrics from several servers and push them off to CloudWatch. At times, the application gets client-side 429 errors. Which of the following can be done from the programming side to resolve such errors?
- A. Use the AWS CLI instead of the SDK to push the metrics
- B. Ensure that all metrics have a timestamp before sending them across
- C. Use exponential backoff in your request
- D. Enable encryption for the requests
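Exponential backoff (option C) is the standard client-side remedy for 429 throttling errors. A minimal sketch in Python, where `op`, the base delay, and the injected `sleep` are all illustrative:

```python
import random
import time

def with_backoff(op, max_attempts=5, base=0.1, sleep=time.sleep):
    """Retry `op` with exponential backoff and jitter.

    Each failed attempt waits up to base * 2**attempt seconds before
    retrying, spreading retries out so a throttled API can recover.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Full jitter: random delay in [0, base * 2**attempt]
            sleep(random.uniform(0, base * (2 ** attempt)))
```

A real client would catch only its SDK's throttling exception rather than bare `Exception`.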
Q21: You have been instructed to use the CodePipeline service for the CI/CD automation in your company. For security reasons, the resources that would be part of the deployment are placed in another account. Which of the following steps need to be carried out to accomplish this deployment? Choose 2 answers from the options given below
- A. Define a customer master key in KMS
- B. Create a reference Code Pipeline instance in the other account
- C. Add a cross account role
- D. Embed the access keys in the codepipeline process
Q22: You are planning on deploying an application to the worker role in Elastic Beanstalk. Moreover, this worker application is going to run periodic tasks. Which of the following is a must-have as part of the deployment?
- A. An appspec.yaml file
- B. A cron.yaml file
- C. A cron.config file
- D. An appspec.json file
Q23: An application needs to make use of an SQS queue for working with messages. An SQS queue has been created with the default settings. The application needs 60 seconds to process each message. Which of the following steps need to be carried out by the application?
- A. Change the VisibilityTimeout for each message and then delete the message after processing is completed
- B. Delete the message and change the visibility timeout.
- C. Process the message, change the visibility timeout. Delete the message
- D. Process the message and delete the message
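Since the queue keeps the default 30-second visibility timeout while processing takes 60 seconds, the consumer would extend the timeout per message and delete the message only after processing. A sketch of the ChangeMessageVisibility request parameters (the queue URL and receipt handle are placeholders):

```python
def visibility_params(queue_url: str, receipt_handle: str, seconds: int = 60) -> dict:
    """Build the parameters for an SQS ChangeMessageVisibility call.

    Extending the timeout to 60s keeps the message invisible to other
    consumers for the full processing window; the consumer then deletes
    the message once processing completes.
    """
    if not 0 <= seconds <= 12 * 60 * 60:
        raise ValueError("visibility timeout must be between 0 and 12 hours")
    return {
        "QueueUrl": queue_url,
        "ReceiptHandle": receipt_handle,
        "VisibilityTimeout": seconds,
    }
```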
Q24: AWS CodeDeploy deployment fails to start and generates the following error code: “HEALTH_CONSTRAINTS_INVALID”. Which of the following can be used to eliminate this error?
- A. Make sure the minimum number of healthy instances is equal to the total number of instances in the deployment group.
- B. Increase the number of healthy instances required during deployment
- C. Reduce number of healthy instances required during deployment
- D. Make sure the number of healthy instances is equal to the specified minimum number of healthy instances.
Q25: How are the state machines in AWS Step Functions defined?
- A. SAML
- B. XML
- C. YAML
- D. JSON
Q26: How can API Gateway methods be configured to respond to requests?
- A. Forwarded to method handlers
- B. AWS Lambda
- C. Integrated with other AWS Services
- D. Existing HTTP endpoints
Q27: Which of the following could be an example of an API Gateway Resource URL for a trucks resource?
- A. https://1a2sb3c4.execute-api.us-east-1.awsapigateway.com/trucks
- B. https://trucks.1a2sb3c4.execute-api.us-east-1.amazonaws.com
- C. https://1a2sb3c4.execute-api.amazonaws.com/trucks
- D. https://1a2sb3c4.execute-api.us-east-1.amazonaws.com/cars
Q28: API Gateway Deployments are:
- A. A specific snapshot of your API’s methods
- B. A specific snapshot of all of your API’s settings, resources, and methods
- C. A specific snapshot of your API’s resources
- D. A specific snapshot of your API’s resources and methods
Q29: A SWF workflow task or task execution can live up to how long?
- A. 1 Year
- B. 14 days
- C. 24 hours
- D. 3 days
Q30: With AWS Step Functions, all the work in your state machine is done by tasks. These tasks perform work by using what types of things? (Choose the best 3 answers)
- A. An AWS Lambda Function Integration
- B. Passing parameters to API actions of other services
- C. Activities
- D. An EC2 Integration
Q31: How does SWF make decisions?
- A. A decider program that is written in the language of the developer’s choice
- B. A visual workflow created in the SWF visual workflow editor
- C. A JSON-defined state machine that contains states within it to select the next step to take
- D. SWF outsources all decisions to human deciders through the AWS Mechanical Turk service.
Q32: In order to effectively build and test your code, AWS CodeBuild allows you to:
- A. Select and use some 3rd party providers to run tests against your code
- B. Select a pre-configured environment
- C. Provide your own custom AMI
- D. Provide your own custom container image
Q33: X-Ray Filter Expressions allow you to search through request information using characteristics like:
- A. URL Paths
- B. Metadata
- C. Trace ID
- D. Annotations
Q34: CodePipeline pipelines are workflows that deal with stages, actions, transitions, and artifacts. Which of the following statements is true about these concepts?
- A. Stages contain at least two actions
- B. Artifacts are never modified or iterated on when used inside of CodePipeline
- C. Stages contain at least one action
- D. Actions will have a deployment artifact as either an input, an output, or both
Q35: When deploying a simple Python web application with Elastic Beanstalk which of the following AWS resources will be created and managed for you by Elastic Beanstalk?
- A. An Elastic Load Balancer
- B. An S3 Bucket
- C. A Lambda Function
- D. An EC2 instance
Q36: Elastic Beanstalk is used to:
- A. Deploy and scale web applications and services developed with a supported platform
- B. Deploy and scale serverless applications
- C. Deploy and scale applications based purely on EC2 instances
- D. Manage the deployment of all AWS infrastructure resources of your AWS applications
Q35: How can AWS X-Ray determine what data to collect?
- A. X-Ray applies a sampling algorithm by default
- B. X-Ray collects data on all requests by default
- C. You can implement your own sampling frequencies for data collection
- D. X-Ray collects data on all requests for services enabled with it
Q37: Which API call is used to list all resources that belong to a CloudFormation Stack?
- A. DescribeStacks
- B. GetTemplate
- C. DescribeStackResources
- D. ListStackResources
Q38: What is the default behaviour of a CloudFormation stack if the creation of one resource fails?
- A. Rollback
- B. The stack continues creating and the failed resource is ignored
- C. Delete
- D. Undo
Q39: Which AWS CLI command lists all current stacks in your CloudFormation service?
- A. aws cloudformation describe-stacks
- B. aws cloudformation list-stacks
- C. aws cloudformation create-stack
- D. aws cloudformation describe-stack-resources
Q40: Which API call is used to list all resources that belong to a CloudFormation Stack?
- A. DescribeStacks
- B. GetTemplate
- C. ListStackResources
- D. DescribeStackResources
Q41: How does using ElastiCache help to improve database performance?
- A. It can store petabytes of data
- B. It provides faster internet speeds
- C. It can store the results of frequent or highly-taxing queries
- D. It uses read replicas
Q42: Which of the following best describes the Lazy Loading caching strategy?
- A. Every time the underlying database is written to or updated the cache is updated with the new information.
- B. Every miss to the cache is counted and when a specific number is reached a full copy of the database is migrated to the cache
- C. A specific amount of time is set before the data in the cache is marked as expired. After expiration, a request for expired data will be made through to the backing database.
- D. Data is added to the cache when a cache miss occurs (when there is no data in the cache and the request must go to the database for that data)
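The lazy loading strategy described in option D can be sketched in a few lines; the dict-backed store stands in for the database and the in-memory dict for ElastiCache:

```python
class LazyCache:
    """Lazy-loading cache: data enters the cache only on a miss."""

    def __init__(self, backing_store):
        self.cache = {}             # stands in for ElastiCache
        self.store = backing_store  # stands in for the database
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            return self.cache[key]  # cache hit: no database work
        self.misses += 1
        value = self.store[key]     # cache miss: read the database...
        self.cache[key] = value     # ...and populate the cache on the way back
        return value
```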
Q43: What are two benefits of using RDS read replicas?
- A. You can add/remove read replicas based on demand, so it creates elasticity for RDS.
- B. Improves performance of the primary database by taking workload from it
- C. Automatic failover in the case of Availability Zone service failures
- D. Allows both reads and writes
Q44: What is the simplest way to enable an S3 bucket to be able to send messages to your SNS topic?
- A. Attach an IAM role to the S3 bucket to send messages to SNS.
- B. Activate the S3 pipeline feature to send notifications to another AWS service – in this case select SNS.
- C. Add a resource-based access control policy on the SNS topic.
- D. Use AWS Lambda to receive events from the S3 bucket and then use the Publish API action to send them to the SNS topic.
Q45: You have just set up a push notification service to send a message to an app installed on a device with the Apple Push Notification Service. It seems to work fine. You now want to send a message to an app installed on devices for multiple platforms, those being the Apple Push Notification Service(APNS) and Google Cloud Messaging for Android (GCM). What do you need to do first for this to be successful?
- A. Request Credentials from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher
- B. Create a Platform Application Object which will connect all of the mobile devices with your app to the correct SNS topic.
- C. Request a Token from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher.
- D. Get a set of credentials in order to be able to connect to the push notification service you are trying to setup.
Q46: SNS message can be sent to different kinds of endpoints. Which of these is NOT currently a supported endpoint?
- A. Slack Messages
- B. SMS (text message)
- C. HTTP/HTTPS
- D. AWS Lambda
Q47: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
- A. Set the imaging queue VisibilityTimeout attribute to 20 seconds
- B. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
- C. Set the imaging queue ReceiveMessageWaitTimeSeconds Attribute to 20 seconds
- D. Set the DelaySeconds parameter of a message to 20 seconds
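Long polling (option C) makes a ReceiveMessage call wait up to 20 seconds for a message instead of returning an empty response immediately. A sketch of the request parameters, with a placeholder queue URL:

```python
def receive_params(queue_url: str, wait_seconds: int = 20) -> dict:
    """Build ReceiveMessage parameters with long polling enabled.

    WaitTimeSeconds > 0 turns on long polling for this call; setting the
    queue's ReceiveMessageWaitTimeSeconds attribute enables it queue-wide.
    """
    if not 0 <= wait_seconds <= 20:
        raise ValueError("SQS long polling supports 0-20 seconds")
    return {
        "QueueUrl": queue_url,
        "WaitTimeSeconds": wait_seconds,
        "MaxNumberOfMessages": 10,  # batch up to 10 messages per call
    }
```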
Q48: Which of the following statements about SQS standard queues are true?
- A. Message order can be indeterminate – you’re not guaranteed to get messages in the same order they were sent in
- B. Messages will be delivered exactly once and messages will be delivered in First in, First out order
- C. Messages will be delivered exactly once and message delivery order is indeterminate
- D. Messages can be delivered one or more times
Q49: Which of the following is true if long polling is enabled?
- A. If long polling is enabled, then each poll only polls a subset of SQS servers; in order for all messages to be received, polling must continuously occur
- B. The reader will listen to the queue until timeout
- C. Increases costs because each request lasts longer
- D. The reader will listen to the queue until a message is available or until timeout
Q50: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?
- A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
- B. Permanently assigning users to specific instances and always routing their traffic to those instances
- C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
- D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance
Q51: When requested through an STS API call, credentials are returned with what three components?
- A. Security Token, Access Key ID, Signed URL
- B. Security Token, Access Key ID, Secret Access Key
- C. Signed URL, Security Token, Username
- D. Security Token, Secret Access Key, Personal Pin Code
Q52: Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week.
How can you securely provide credentials that allow your application to write to the queue?
- A. Have the application fetch an access key from an Amazon S3 bucket at run time.
- B. Launch the application’s Amazon EC2 instance with an IAM role.
- C. Encrypt an access key in the application source code.
- D. Enroll the instance in an Active Directory domain and use AD authentication.
Q53: Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn’t overwrite a simultaneous change from another process.
How can you ensure concurrency?
- A. Implement optimistic concurrency by using a conditional write.
- B. Implement pessimistic concurrency by using a conditional write.
- C. Implement optimistic concurrency by locking the item upon read.
- D. Implement pessimistic concurrency by locking the item upon read.
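Optimistic concurrency with a conditional write (option A) can be sketched as a low-level DynamoDB UpdateItem request; the `version` and `status` attribute names are illustrative:

```python
def conditional_update(table: str, key: dict, new_status: str, read_version: int) -> dict:
    """Build an UpdateItem request that succeeds only if the stored
    `version` still equals the one read earlier. A concurrent writer
    bumps the version, so the losing writer gets a
    ConditionalCheckFailedException instead of silently overwriting."""
    return {
        "TableName": table,
        "Key": key,
        "UpdateExpression": "SET #s = :s, version = :new_v",
        "ConditionExpression": "version = :old_v",
        "ExpressionAttributeNames": {"#s": "status"},  # status is a reserved word
        "ExpressionAttributeValues": {
            ":s": {"S": new_status},
            ":new_v": {"N": str(read_version + 1)},
            ":old_v": {"N": str(read_version)},
        },
    }
```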
Q54: Which statements about DynamoDB are true? Choose 2 answers
- A. DynamoDB uses optimistic concurrency control
- B. DynamoDB restricts item access during writes
- C. DynamoDB uses a pessimistic locking model
- D. DynamoDB restricts item access during reads
- E. DynamoDB uses conditional writes for consistency
Q55: Your CloudFormation template has the following Mappings section:
Which JSON snippet will result in the value “ami-6411e20d” when a stack is launched in us-east-1?
- A. { "Fn::FindInMap" : [ "Mappings", { "RegionMap" : ["us-east-1", "us-west-1"] }, "32"]}
- B. { "Fn::FindInMap" : [ "Mappings", { "Ref" : "AWS::Region" }, "32"]}
- C. { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "32"]}
- D. { "Fn::FindInMap" : [ "RegionMap", { "RegionMap" : "AWS::Region" }, "32"]}
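The Mappings section referenced in the question did not survive formatting here, so purely as an illustration, a RegionMap consistent with answer C might look like the following (every AMI ID except "ami-6411e20d" is a placeholder):

```json
{
  "Mappings": {
    "RegionMap": {
      "us-east-1": { "32": "ami-6411e20d", "64": "ami-0000000a" },
      "us-west-1": { "32": "ami-0000000b", "64": "ami-0000000c" }
    }
  }
}
```

With such a map, `Fn::FindInMap` takes the map's logical name ("RegionMap", not the literal section name "Mappings"), then the top-level key (the current region via `Ref AWS::Region`), then the second-level key ("32").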
Q56: Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.
What is an appropriate way to code this?
- A. Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.
- B. Send the event as an Amazon SNS message. Instruct your partners to create an HTTP endpoint. Subscribe their HTTP endpoint to the Amazon SNS topic.
- C. Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.
- D. Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.
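The fan-out pattern in answer D is what isolates partners from each other: the publisher writes once to a topic, and each partner drains its own queue at its own pace. A stdlib-only simulation of the idea (not the boto3 API):

```python
from collections import deque

class Topic:
    """Minimal SNS-style topic: fan a published message out to every
    subscribed queue immediately; consumers drain at their own pace."""
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, queue):
        self.subscriptions.append(queue)

    def publish(self, message):
        for queue in self.subscriptions:
            queue.append(message)  # enqueue only; never wait on a consumer

topic = Topic()
fast_partner, night_partner = deque(), deque()
topic.subscribe(fast_partner)
topic.subscribe(night_partner)

topic.publish({"event": "order-created"})
delivered = fast_partner.popleft()  # the available partner consumes right away
# night_partner's copy simply waits in its own queue until they come online
```

The key property: `publish` returns as soon as every queue holds a copy, so a partner that is offline for hours never delays the publisher or the other partners.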
Q57: You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets.
When you call the database tier from your app tier instances, you receive a timeout error. What could be causing this?
- A. The IAM role associated with the app tier instances does not have rights to the MySQL database.
- B. The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app instances.
- C. The Amazon RDS database instance does not have a public IP address.
- D. There is no route defined between the app tier and the database tier in the Amazon VPC.
Q58: What type of block cipher does Amazon S3 offer for server side encryption?
- A. RC5
- B. Blowfish
- C. Triple DES
- D. Advanced Encryption Standard
Q59: You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to log in again in the middle of using your application, after they have already logged in. This is not behaviour you have designed. What is a possible solution to prevent this happening?
- A. Use instance memory to save session state.
- B. Use instance storage to save session state.
- C. Use EBS to save session state
- D. Use ElastiCache to save session state.
- E. Use Glacier to save session state.
Q60: You are writing to a DynamoDB table and receive the following exception: “ProvisionedThroughputExceededException”, though according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?
- A. You haven’t provisioned enough DynamoDB storage instances
- B. You’re exceeding your capacity on a particular Range Key
- C. You’re exceeding your capacity on a particular Hash Key
- D. You’re exceeding your capacity on a particular Sort Key
- E. You haven’t configured DynamoDB Auto Scaling triggers
Q61: Which DynamoDB limits can be raised by contacting AWS support?
- A. The number of hash keys per account
- B. The maximum storage used per account
- C. The number of tables per account
- D. The number of local secondary indexes per account
- E. The number of provisioned throughput units per account
Q62: AWS CodeBuild allows you to compile your source code, run unit tests, and produce deployment artifacts by:
- A. Allowing you to provide an Amazon Machine Image to take these actions within
- B. Allowing you to select an Amazon Machine Image and provide a User Data bootstrapping script to prepare an instance to take these actions within
- C. Allowing you to provide a container image to take these actions within
- D. Allowing you to select from pre-configured environments to take these actions within
Q63: Which of the following will not cause a CloudFormation stack deployment to rollback?
- A. The template contains invalid JSON syntax
- B. An AMI specified in the template exists in a different region than the one in which the stack is being deployed.
- C. A subnet specified in the template does not exist
- D. The template specifies an instance-store backed AMI and an incompatible EC2 instance type.
Q64: Your team is using CodeDeploy to deploy an application which uses secure parameters that are stored in the AWS Systems Manager Parameter Store. Which two options below must be completed so CodeDeploy can deploy the application?
- A. Use ssm get-parameters with the --with-decryption option
- B. Add permissions using AWS access keys
- C. Add permissions using AWS IAM role
- D. Use ssm get-parameters with the --with-no-decryption option
Q65: A corporate web application is deployed within an Amazon VPC, and is connected to the corporate data center via IPSec VPN. The application must authenticate against the on-premises LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to the user. How would you authenticate to the application given these details? (Choose 2)
- A. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the S3 keyspace.
- B. Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 keyspace
- C. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app
- D. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
Q67: When users are signing in to your application using Cognito, what do you need to do to ensure that a user with compromised credentials must enter a new password?
- A. Create a user pool in Cognito
- B. Block use for “Compromised credential” in the Basic security section
- C. Block use for “Compromised credential” in the Advanced security section
- D. Use secure remote password
Q68: You work in a large enterprise that is currently evaluating options to migrate your 27 GB Subversion code base. Which of the following options is the best choice for your organization?
- A. AWS CodeHost
- B. AWS CodeCommit
- C. AWS CodeStart
- D. None of these
Q69: You are on a development team and you need to migrate your Spring Application over to AWS. Your team is looking to build, modify, and test new versions of the application. What AWS services could help you migrate your app?
- A. Elastic Beanstalk
- B. SQS
- C. EC2
- D. AWS CodeDeploy
Q70:
You are a developer responsible for managing a high-volume API running in your company’s datacenter. You have been asked to implement a similar API, but one that has potentially higher volume, and you must do it in the most cost-effective way, using as few services and components as possible. The API stores and fetches data from a key-value store. Which services could you utilize in AWS?
- A. DynamoDB
- B. Lambda
- C. API Gateway
- D. EC2
Q71: By default, what event occurs if your CloudFormation receives an error during creation?
- A. DELETE_IN_PROGRESS
- B. CREATION_IN_PROGRESS
- C. DELETE_COMPLETE
- D. ROLLBACK_IN_PROGRESS
Q72:
AWS X-Ray was recently implemented inside of a service that you work on. Several weeks later, after a new marketing push, that service started seeing a large spike in traffic, and you have been tasked with investigating a few issues that have started coming up. When you review the X-Ray data, you can’t find enough information to draw conclusions, so you decide to:
- A. Start passing in the X-Amzn-Trace-Id: True HTTP header from your upstream requests
- B. Refactor the service to include additional calls to the X-Ray API using an AWS SDK
- C. Update the sampling algorithm to increase the sample rate and instrument X-Ray to collect more pertinent information
- D. Update your application to use the custom API Gateway TRACE method to send in data
Q74: X-Ray metadata:
- A. Associates request data with a particular Trace-ID
- B. Stores key-value pairs of any type that are not searchable
- C. Collects at the service layer to provide information on the overall health of the system
- D. Stores key-value pairs of searchable information
Q75: Which of the following is the right sequence of lifecycle event hooks that gets called in CodeDeploy in an EC2/On-Premises deployment?
- A. BeforeInstall -> AfterInstall -> ValidateService -> ApplicationStart
- B. BeforeInstall -> AfterInstall -> ApplicationStop -> ApplicationStart
- C. BeforeInstall -> ApplicationStop -> ValidateService -> ApplicationStart
- D. ApplicationStop -> BeforeInstall -> AfterInstall -> ApplicationStart
Q76:
Describe the process of registering a mobile device with the SNS push notification service using GCM.
- A. Receive Registration ID and token for each mobile device. Then, register the mobile application with Amazon SNS, and pass the GCM token credentials to Amazon SNS
- B. Pass device token to SNS to create mobile subscription endpoint for each mobile device, then request the device token from each mobile device. SNS then communicates on your behalf to the GCM service
- C. None of these are correct
- D. Submit GCM notification credentials to Amazon SNS, then receive the Registration ID for each mobile device. After that, pass the device token to SNS, and SNS then creates a mobile subscription endpoint for each device and communicates with the GCM service on your behalf
Q77:
You run an ad-supported photo-sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?
- A. Store photos on an EBS volume of the web server.
- B. Block the IPs of the offending websites in Security Groups.
- C. Remove public read access and use signed URLs with expiry dates.
- D. Use CloudFront distributions for static content.
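Option C is the standard anti-hotlinking fix. Real S3 presigned URLs use AWS Signature Version 4, but the underlying idea, an expiry timestamp covered by an HMAC so neither the path nor the deadline can be tampered with, can be sketched with the standard library (the secret key and paths here are made up for illustration):

```python
import hashlib
import hmac

SECRET = b"demo-secret-key"  # stands in for the signer's AWS credentials

def sign_url(path, expires_at):
    """Issue a URL whose path and expiry are covered by an HMAC."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?Expires={expires_at}&Signature={sig}"

def is_valid(url, now):
    """Server-side check: recompute the HMAC and enforce the expiry."""
    path, query = url.split("?", 1)
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires_at = int(params["Expires"])
    msg = f"{path}:{expires_at}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["Signature"]) and now < expires_at

url = sign_url("/photos/cat.jpg", expires_at=1_700_000_000)
```

Because only the site holds the secret, other sites cannot mint working links, and any link they copy stops working once the expiry passes. In boto3 the equivalent is `s3_client.generate_presigned_url("get_object", ...)` with an `ExpiresIn` argument.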
Q78: How can you control access to the API Gateway in your environment?
- A. Cognito User Pools
- B. Lambda Authorizers
- C. API Methods
- D. API Stages
Q79: What kind of message does SNS send to endpoints?
- A. An XML document with parameters like Message, Source, Destination, Type
- B. A JSON document with parameters like Message, Signature, Subject, Type.
- C. An XML document with parameters like Message, Signature, Subject, Type
- D. A JSON document with parameters like Message, Source, Destination, Type
Q80: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?
- A. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds.
- B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds.
- C. Set the imaging queue VisibilityTimeout attribute to 20 seconds.
- D. Set the DelaySeconds parameter of a message to 20 seconds.
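Setting ReceiveMessageWaitTimeSeconds (answer B) enables long polling: ReceiveMessage blocks for up to that many seconds waiting for a message instead of returning empty immediately. The difference can be sketched with a stdlib queue standing in for SQS (a simulation, not the boto3 API):

```python
import queue
import threading

q = queue.Queue()  # stands in for the imaging SQS queue

def receive_message(wait_time_seconds=0):
    """Short polling (wait 0) returns immediately, often empty;
    long polling blocks up to wait_time_seconds for a message."""
    try:
        if wait_time_seconds == 0:
            return q.get_nowait()
        return q.get(timeout=wait_time_seconds)
    except queue.Empty:
        return None

empty_response = receive_message()  # short poll on an empty queue: wasted call

# A producer enqueues a job shortly after the consumer starts waiting.
threading.Timer(0.1, q.put, args=("image-job-42",)).start()
message = receive_message(wait_time_seconds=2)  # long poll picks it up
```

The long-polling consumer makes one blocking call instead of spinning through empty responses, which is exactly how the wait-time attribute cuts both CPU use and per-request cost.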
Q81: You’re using CloudFormation templates to build out staging environments. What section of the CloudFormation template would you edit in order to allow the user to specify the PEM key name at start time?
- A. Resources Section
- B. Parameters Section
- C. Mappings Section
- D. Declaration Section
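For reference, a Parameters section that prompts for an existing EC2 key-pair name at stack launch typically looks like this (a minimal fragment, not a complete template):

```json
{
  "Parameters": {
    "KeyName": {
      "Type": "AWS::EC2::KeyPair::KeyName",
      "Description": "Name of an existing EC2 key pair to enable SSH access"
    }
  }
}
```

Resources then reference the value with `{ "Ref" : "KeyName" }`, and the `AWS::EC2::KeyPair::KeyName` type makes CloudFormation validate that the key pair actually exists at launch time.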
Q82: You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template?
- A. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description
- B. You can use intrinsic functions in any part of a template.
- C. You can use intrinsic functions only in the resource properties part of a template.
- D. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes.
Other AWS Facts and Summaries and Questions/Answers Dump
- AWS S3 facts and summaries and Q&A Dump
- AWS DynamoDB facts and summaries and Questions and Answers Dump
- AWS EC2 facts and summaries and Questions and Answers Dump
- AWS Serverless facts and summaries and Questions and Answers Dump
- AWS Developer and Deployment Theory facts and summaries and Questions and Answers Dump
- AWS IAM facts and summaries and Questions and Answers Dump
- AWS Lambda facts and summaries and Questions and Answers Dump
- AWS SQS facts and summaries and Questions and Answers Dump
- AWS RDS facts and summaries and Questions and Answers Dump
- AWS ECS facts and summaries and Questions and Answers Dump
- AWS CloudWatch facts and summaries and Questions and Answers Dump
- AWS SES facts and summaries and Questions and Answers Dump
- AWS EBS facts and summaries and Questions and Answers Dump
- AWS ELB facts and summaries and Questions and Answers Dump
- AWS Autoscaling facts and summaries and Questions and Answers Dump
- AWS VPC facts and summaries and Questions and Answers Dump
- AWS KMS facts and summaries and Questions and Answers Dump
- AWS Elastic Beanstalk facts and summaries and Questions and Answers Dump
- AWS CodeBuild facts and summaries and Questions and Answers Dump
- AWS CodeDeploy facts and summaries and Questions and Answers Dump
- AWS CodePipeline facts and summaries and Questions and Answers Dump
AWS Certification Exam Prep: S3 Facts, Summaries, Questions and Answers


AWS S3 Facts and summaries, AWS S3 Top 10 Questions and Answers Dump
Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
AWS S3 Explained graphically:
AWS S3 Facts and summaries
- S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
- S3 is object-based: i.e. it allows you to upload files.
- Files can be from 0 Bytes to 5 TB
- What is the maximum length, in bytes, of a DynamoDB range primary key attribute value?
The maximum length of a DynamoDB range primary key attribute value is 2048 bytes (NOT 256 bytes).
- S3 has unlimited storage.
- Files are stored in Buckets.
- Read after write consistency for PUTS of new Objects
- Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
- S3 Storage Classes/Tiers:
- S3 Standard (durable, immediately available, frequently accessed)
- Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
- S3 Standard-Infrequent Access – S3 Standard-IA (durable, immediately available, infrequently accessed)
- S3 – One Zone-Infrequent Access – S3 One Zone IA: Same as IA; however, data is stored in a single Availability Zone only
- S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
- Glacier – Archived data, where you can wait 3-5 hours before accessing
You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.
- The default URL for S3 hosted websites lists the bucket name first followed by s3-website-region.amazonaws.com . Example: enoumen.com.s3-website-us-east-1.amazonaws.com
- Core fundamentals of an S3 object
- Key (name)
- Value (data)
- Version (ID)
- Metadata
- Sub-resources (used to manage bucket-specific configuration)
- Bucket Policies, ACLs,
- CORS
- Transfer Acceleration
- Object-based storage only for files
- Not suitable to install OS on.
- Successful uploads will generate an HTTP 200 status code.
- S3 Security – Summary
- By default, all newly created buckets are PRIVATE.
- You can set up access control to your buckets using:
- Bucket Policies – Applied at the bucket level
- Access Control Lists – Applied at an object level.
- S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
- S3 Encryption
- Encryption In-Transit (SSL/TLS)
- Encryption At Rest:
- Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
- Client Side Encryption
- Remember that we can use a Bucket policy to prevent unencrypted files from being uploaded by creating a policy which only allows requests which include the x-amz-server-side-encryption parameter in the request header.
- S3 CORS (Cross Origin Resource Sharing):
CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
- Used to enable cross-origin access for your AWS resources, e.g. an S3-hosted website accessing JavaScript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access.
- Always use the S3 website URL, not the regular bucket URL. E.g. use http://acloudguru.s3-website-eu-west-2.amazonaws.com rather than https://s3-eu-west-2.amazonaws.com/acloudguru
- S3 CloudFront:
- Edge locations are not just READ only – you can WRITE to them too (i.e. put an object on to them).
- Objects are cached for the life of the TTL (Time to Live)
- You can clear cached objects, but you will be charged. (Invalidation)
- S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
- GET-Intensive Workloads – Use CloudFront
- Mixed Workload – Avoid sequential key names for your S3 objects. Instead, add a random prefix like a hex hash to the key name to prevent multiple objects from being stored on the same partition.
- mybucket/7eh4-2019-03-04-15-00-00/cust1234234/photo1.jpg
- mybucket/h35d-2019-03-04-15-00-00/cust1234234/photo2.jpg
- mybucket/o3n6-2019-03-04-15-00-00/cust1234234/photo3.jpg
- The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
- You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
- Bucket names cannot start with a . or a - character. S3 bucket names can contain both the . and - characters, but there can only be one . or one - between labels. E.g. mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
- What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
- You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen?
All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes).
- S3 bucket policies require a Principal to be defined.
- What checksums does Amazon S3 employ to detect data corruption?
Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
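The multipart-upload point above can be sketched as follows: split the object into numbered parts, upload each part independently (and in parallel, if desired), then complete the upload by stitching the parts back together in part-number order. This is a stdlib simulation of the idea; the real S3 calls are `create_multipart_upload`, `upload_part`, and `complete_multipart_upload`, with a 5 MB minimum part size (shrunk here for illustration):

```python
def split_into_parts(data: bytes, part_size: int):
    """Split an object into numbered parts, as the S3 multipart
    upload API does (part numbers start at 1)."""
    return {
        i // part_size + 1: data[i:i + part_size]
        for i in range(0, len(data), part_size)
    }

def complete_upload(parts: dict) -> bytes:
    """Reassemble parts in part-number order, like CompleteMultipartUpload."""
    return b"".join(parts[n] for n in sorted(parts))

obj = b"x" * 1000                         # a "large" object (real parts are >= 5 MB)
parts = split_into_parts(obj, part_size=300)
restored = complete_upload(parts)
```

Because each part is independent, a failed part can be retried alone and parts can be sent concurrently, which is why multipart upload is both faster and more robust for large objects.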
AWS S3 Top 10 Questions and Answers Dump
Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes longer than expected to upload the object. You want to improve the performance of the application. Which of the following would you consider?
- A. Create multiple threads and upload the objects in the multiple threads
- B. Write the items in batches for better performance
- C. Use the Multipart upload API
- D. Enable versioning on the Bucket
Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from an Amazon S3 bucket?
- A. AWS::Serverless::Api
- B. AWS::Serverless::Application
- C. AWS::Serverless::Layerversion
- D. AWS::Serverless::Function
Q3: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages’ JavaScript sections has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?
- A. Enable versioning for the underlying S3 bucket.
- B. Enable Replication so that the objects get replicated to the other bucket
- C. Enable CORS for the bucket
- D. Change the Bucket policy for the bucket to allow access from the other bucket
Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below
- A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
- B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
- C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
- D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.
Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?
- A. Bucket Policies are Written in JSON and ACLs are written in XML
- B. ACLs can be attached to S3 objects or S3 Buckets
- C. Bucket Policies and ACLs are written in JSON
- D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects
Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?
- A. Introduce random prefixes to S3 objects
- B. Introduce random suffixes to S3 objects
- C. Setup CloudFront for S3 objects
- D. Migrate commonly used objects to Amazon Glacier
Q7: If an application is storing hourly log files from thousands of instances from a high-traffic web site, which naming scheme would give optimal performance on S3?
- A. Sequential
- B. HH-DD-MM-YYYY-log_instanceID
- C. YYYY-MM-DD-HH-log_instanceID
- D. instanceID_log-HH-DD-MM-YYYY
- E. instanceID_log-YYYY-MM-DD-HH
Q8: You are working with the S3 API and receive an error message: 409 Conflict. What is the possible cause of this error?
- A. You’re attempting to remove a bucket without emptying the contents of the bucket first.
- B. You’re attempting to upload an object to the bucket that is greater than 5TB in size.
- C. Your request does not contain the proper metadata.
- D. Amazon S3 is having internal issues.
Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You create the Route 53 Aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?
- A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
- B. There will be no problems; all three sites should work.
- C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
- D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases
Q10: Which of the following is NOT a common S3 API call?
- A. UploadPart
- B. ReadObject
- C. PutObject
- D. DownloadBucket
Other AWS Facts and Summaries
- AWS S3 facts and summaries
- AWS DynamoDB facts and summaries
- AWS EC2 facts and summaries
- AWS Lambda facts and summaries
- AWS SQS facts and summaries
- AWS RDS facts and summaries
- AWS ECS facts and summaries
- AWS CloudWatch facts and summaries
- AWS SES facts and summaries
- AWS EBS facts and summaries
- AWS Serverless facts and summaries
- AWS ELB facts and summaries
- AWS Autoscaling facts and summaries
- AWS VPC facts and summaries
- AWS KMS facts and summaries
- AWS Elastic Beanstalk facts and summaries
- AWS CodeBuild facts and summaries
- AWS CodeDeploy facts and summaries
- AWS CodePipeline facts and summaries
Latest DevOps and SysAdmin Feed


DevOps is a set of practices and tools that organizations use to accelerate software development and improve the quality of their software products. It aims to bring development and operations teams together, so they can work more collaboratively and efficiently to deliver software faster and with fewer errors.
The goal of DevOps is to automate as much of the software delivery process as possible, using tools such as continuous integration, continuous delivery, and infrastructure as code. This allows teams to move faster and release new features and bug fixes more frequently, while also reducing the risk of errors and downtime.
DevOps also emphasizes the importance of monitoring, logging, and testing to ensure that software is performing well in production. By continuously monitoring and analyzing performance data, teams can quickly identify and resolve any issues that arise.
In summary, DevOps is a combination of people, processes, and technology that organizations use to improve their software delivery capabilities, increase efficiency, and reduce risk.
What is a System Administrator?
DevOps: In the IT world, DevOps means Development Operations. DevOps is the bridge between the developers, the servers, and the infrastructure, and its main role is to automate the process of delivering code to operations.
DevOps, per Wikipedia, is a software development process that emphasizes communication and collaboration between product management, software development, and operations professionals. DevOps also automates the process of software integration, testing, deployment, and infrastructure changes.[1][2] It aims to establish a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.
Active Hydrating Toner, Anti-Aging Replenishing Advanced Face Moisturizer, with Vitamins A, C, E & Natural Botanicals to Promote Skin Balance & Collagen Production, 6.7 Fl Oz
Age Defying 0.3% Retinol Serum, Anti-Aging Dark Spot Remover for Face, Fine Lines & Wrinkle Pore Minimizer, with Vitamin E & Natural Botanicals
Firming Moisturizer, Advanced Hydrating Facial Replenishing Cream, with Hyaluronic Acid, Resveratrol & Natural Botanicals to Restore Skin's Strength, Radiance, and Resilience, 1.75 Oz
Skin Stem Cell Serum
Smartphone 101 - Pick a smartphone for me - android or iOS - Apple iPhone or Samsung Galaxy or Huawei or Xaomi or Google Pixel
Can AI Really Predict Lottery Results? We Asked an Expert.
Djamgatech

Read Photos and PDFs Aloud for me iOS
Read Photos and PDFs Aloud for me android
Read Photos and PDFs Aloud For me Windows 10/11
Read Photos and PDFs Aloud For Amazon
Get 20% off Google Workspace (Google Meet) Business Plan (AMERICAS): M9HNXHX3WC9H7YE (Email us for more)
Get 20% off Google Google Workspace (Google Meet) Standard Plan with the following codes: 96DRHDRA9J7GTN6(Email us for more)
AI-Powered Professional Certification Quiz Platform
Web|iOs|Android|Windows
FREE 10000+ Quiz Trivia and and Brain Teasers for All Topics including Cloud Computing, General Knowledge, History, Television, Music, Art, Science, Movies, Films, US History, Soccer Football, World Cup, Data Science, Machine Learning, Geography, etc....

List of freely available programming books - What is the single most influential book every programmer should read?
- Bjarne Stroustrup - The C++ Programming Language
- Brian W. Kernighan, Rob Pike - The Practice of Programming
- Donald Knuth - The Art of Computer Programming
- Ellen Ullman - Close to the Machine
- Ellis Horowitz - Fundamentals of Computer Algorithms
- Eric Raymond - The Art of Unix Programming
- Gerald M. Weinberg - The Psychology of Computer Programming
- James Gosling - The Java Programming Language
- Joel Spolsky - The Best Software Writing I
- Keith Curtis - After the Software Wars
- Richard M. Stallman - Free Software, Free Society
- Richard P. Gabriel - Patterns of Software
- Richard P. Gabriel - Innovation Happens Elsewhere
- Code Complete (2nd edition) by Steve McConnell
- The Pragmatic Programmer
- Structure and Interpretation of Computer Programs
- The C Programming Language by Kernighan and Ritchie
- Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
- Design Patterns by the Gang of Four
- Refactoring: Improving the Design of Existing Code
- The Mythical Man Month
- Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
- Gödel, Escher, Bach by Douglas Hofstadter
- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
- Effective C++
- More Effective C++
- CODE by Charles Petzold
- Programming Pearls by Jon Bentley
- Working Effectively with Legacy Code by Michael C. Feathers
- Peopleware by DeMarco and Lister
- Coders at Work by Peter Seibel
- Surely You're Joking, Mr. Feynman!
- Effective Java 2nd edition
- Patterns of Enterprise Application Architecture by Martin Fowler
- The Little Schemer
- The Seasoned Schemer
- Why's (Poignant) Guide to Ruby
- The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
- Test-Driven Development: By Example by Kent Beck
- Practices of an Agile Developer
- Don't Make Me Think
- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
- Domain-Driven Design by Eric Evans
- The Design of Everyday Things by Donald Norman
- Modern C++ Design by Andrei Alexandrescu
- Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
- Software Estimation: Demystifying the Black Art by Steve McConnell
- The Passionate Programmer (My Job Went To India) by Chad Fowler
- Hackers: Heroes of the Computer Revolution
- Algorithms + Data Structures = Programs
- Writing Solid Code
- JavaScript - The Good Parts
- Getting Real by 37 Signals
- Foundations of Programming by Karl Seguin
- Computer Graphics: Principles and Practice in C (2nd Edition)
- Thinking in Java by Bruce Eckel
- The Elements of Computing Systems
- Refactoring to Patterns by Joshua Kerievsky
- Modern Operating Systems by Andrew S. Tanenbaum
- The Annotated Turing
- Things That Make Us Smart by Donald Norman
- The Timeless Way of Building by Christopher Alexander
- The Deadline: A Novel About Project Management by Tom DeMarco
- The C++ Programming Language (3rd edition) by Stroustrup
- Computer Systems - A Programmer's Perspective
- Agile Principles, Patterns, and Practices in C# by Robert C. Martin
- Growing Object-Oriented Software, Guided by Tests
- Framework Design Guidelines by Brad Abrams
- Object Thinking by Dr. David West
- Advanced Programming in the UNIX Environment by W. Richard Stevens
- Hackers and Painters: Big Ideas from the Computer Age
- The Soul of a New Machine by Tracy Kidder
- CLR via C# by Jeffrey Richter
- Design Patterns in C# by Steve Metsker
- Alice in Wonderland by Lewis Carroll
- Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
- About Face - The Essentials of Interaction Design
- Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
- The Tao of Programming
- Computational Beauty of Nature
- Philip and Alex's Guide to Web Publishing
- Object-Oriented Analysis and Design with Applications by Grady Booch
- Effective Java by Joshua Bloch
- Computability by N. J. Cutland
- Masterminds of Programming
- The Tao Te Ching
- The Productive Programmer
- The Art of Deception by Kevin Mitnick
- The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
- Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
- Masters of Doom
- Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
- How To Solve It by George Polya
- The Alchemist by Paulo Coelho
- Smalltalk-80: The Language and its Implementation
- Writing Secure Code (2nd Edition) by Michael Howard
- Introduction to Functional Programming by Philip Wadler and Richard Bird
- No Bugs! by David Thielen
- Rework by Jason Fried and DHH
- JUnit in Action