What are the top 10 most insane myths about computer programmers?

Programmers are often seen as an eccentric breed. There are many myths about computer programmers that circulate both within and outside the tech industry. Some of these myths are harmless misconceptions, while others can be damaging to both individual programmers and the industry as a whole.

 Here are 10 of the most insane myths about computer programmers:

1. Programmers are all socially awkward nerds who live in their parents’ basements.
2. Programmers only care about computers and have no other interests.
3. Programmers are all genius-level intellects with photographic memories.
4. Programmers can code anything they set their minds to, no matter how complex or impossible it may seem.
5. Programmers only work on solitary projects and never collaborate with others.
6. Programmers write code that is completely error-free on the first try.
7. All programmers use the same coding languages and tools.
8. Programmers can easily find jobs anywhere in the world thanks to the worldwide demand for their skills.
9. Programmers always work in dark, cluttered rooms with dozens of monitors surrounding them.
10. Programmers can’t have successful personal lives because they spend all their time working on code.

Here is another, more detailed set of myths about computer programmers:

Myth #1: Programmers are lazy.

This couldn’t be further from the truth! Programmers are some of the hardest working people in the tech industry. They are constantly working to improve their skills and keep up with the latest advancements in technology.

Myth #2: Programmers don’t need social skills.

While it is true that programmers don’t need to be extroverts, they do need to have strong social skills. Programmers need to be able to communicate effectively with other members of their team, as well as with clients and customers.

Myth #3: All programmers are nerds.

There is a common misconception that all programmers are nerdy introverts who live in their parents’ basements. This could not be further from the truth! While there are certainly some nerds in the programming community, there are also a lot of outgoing, social people. In fact, programming is a great field for people who want to use their social skills to build relationships and solve problems.


Myth #4: Programmers are just code monkeys.

Programmers are often seen as nothing more than people who write code all day long. However, this could not be further from the truth! Programmers are critical thinkers who use their analytical skills to solve complex problems. They are also creative people who use their coding skills to build new and innovative software applications.

Myth #5: Anyone can learn to code.

This myth is particularly damaging because it makes programming sound effortless, which sets newcomers up for discouragement. The reality is that coding is a difficult skill, and it takes years of practice to become a proficient programmer. While it is true that almost anyone can learn to code, it is important to understand that it is not an easy task.

Myth #6: Programmers don’t need math skills.

This myth is simply not true! Programmers use math every day, whether they’re calculating algorithms or working with big data sets. In fact, many programmers have degrees in mathematics or computer science because they know that math skills are essential for success in the field.


Myth #7: Programming is a dead-end job.

This myth likely comes from the fact that many people view programming as nothing more than code monkey work. However, this could not be further from the truth! Programmers have a wide range of career options available to them, including software engineering, web development, and data science.

Myth #8: Programmers only work on single projects.

Again, this myth likely comes from the outside world’s view of programming as nothing more than coding work. In reality, programmers often work on multiple projects at once. They may be responsible for coding new features for an existing application, developing a new application from scratch, or working on multiple projects simultaneously as part of a team.

Myth #9: Programming is easy once you know how to do it.

This myth is particularly insidious, as it leads people to believe that they can simply learn how to code overnight and become successful programmers immediately thereafter. The reality is that learning how to code takes time, practice, and patience. Even experienced programmers still make mistakes sometimes!

Myth #10: Programmers don’t need formal education

This myth likely stems from the fact that many successful programmers are self-taught. However, this does not mean that formal education is unnecessary. Many employers prefer candidates with degrees in computer science or related fields, and formal education can give you an important foundation in programming concepts and theory.

Myth #11: Programmers put in immense amounts of time at the job

I worked for 38 years programming computers. During that time, there were only two periods when I needed to put in significant extra time at the job. The first was my first two years, when I spent extra time getting acclimated to the job – which I then left at age 22, with a blood pressure of 153/105. Not a good situation. The second was at the end of my career, when I was the only person who could get a particular project completed in the required timeframe (due to special knowledge of the area); I spent about five months putting in a lot of time.

Myth #12: They need to know advanced math

Some programmers may need to know advanced math, but in the areas where I (and others) worked, being able to estimate resulting values and visualization skills were more important. One needs to know when a displayed number is not correct. Visualization is the ability to see the “big picture” and envision the tasks necessary to realize it correctly. You need to be able to decompose each of those tasks to limit complexity and make the work easier to debug. In general, the less complex the code, the fewer errors/bugs it has and the easier they are to identify and fix.

Myth #13: Programmers remember thousands of lines of code.

No, we don’t. We know the approximate part of the program where a problem could be, and we can localize it using a debugger or logs – that’s all.

Myth #14:  Everyone could be a programmer.

No. One must have not only the desire to be a programmer but also a certain devotion to it. Programming is not a closed or elite art; it’s just another human occupation. And just as not everyone can be a doctor or a businessman, not everyone can be a programmer.

Myth #15: A simple business request can be easily implemented

No. The ease of implementation is determined by the model used inside the software. A request that looks simple to business owners can be almost impossible to implement without significantly changing the model – which can take weeks – and vice versa: a seemingly hard business problem can sometimes be implemented in 15 minutes.

Myth #16: Please fix <put any electronic device here> or set up my printer – you are a programmer!

Yes, I’m a programmer – neither an electronics engineer nor a system administrator. I write programs; I don’t fix devices or set up software and hardware!

As you can see, there are many myths about computer programmers circulating within and outside of the tech industry. These myths can be damaging to both individual programmers and the industry as a whole. It’s important to dispel these myths so that we can continue attracting top talent into the field of programming!


DevOps Interviews Question and Answers and Scripts

Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google, and Microsoft).

Credit: Steve Nouri. Follow Steve Nouri for more AI and data science posts.

Deployment

What is a Canary Deployment?

A canary deployment, or canary release, allows you to rollout your features to only a subset of users as an initial test to make sure nothing else in your system broke.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage – say, 10% – of your user base to the new version while maintaining 90% of users on the old version. If that 10% reports no errors, you can roll it out to gradually more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will have never even seen the problem.
Canary deployment benefits include zero downtime, easy rollout, and quick rollback – plus the added safety of the gradual rollout process. It also has some drawbacks: the expense of maintaining multiple server instances and the difficult clone-or-don’t-clone database decision.

Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.
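
As a sketch, the gradual shift can be driven by a simple loop. The set_canary_weight and health_check helpers below are hypothetical stand-ins for your load balancer’s API and your monitoring checks:

#!/bin/bash
# Shift traffic to the canary in stages, rolling back at the first sign of trouble.
for weight in 10 25 50 100; do
  set_canary_weight "$weight"   # hypothetical: route $weight% of traffic to the new version
  sleep 300                     # let real traffic exercise the new version for a while
  if ! health_check; then       # hypothetical: exits non-zero on elevated error rates
    set_canary_weight 0         # roll back: all traffic returns to the old version
    exit 1
  fi
done
echo "Canary promoted: 100% of traffic is on the new version"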



What is a Blue Green Deployment?

Reference: Blue Green Deployment

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your application, deployment and the final stage of testing take place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the new version in Green, you switch the router so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
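
A minimal sketch of the switch itself, assuming the router is an nginx instance that selects the active upstream through a symlink (the paths are assumptions):

# Point the router at Green and reload; Blue stays untouched for instant rollback.
ln -sfn /etc/myapp/upstream-green.conf /etc/myapp/upstream-active.conf
nginx -s reload
# Rolling back is the same operation in reverse:
ln -sfn /etc/myapp/upstream-blue.conf /etc/myapp/upstream-active.conf
nginx -s reload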

How do you do a software release?

There are some steps to follow (a sketch of the matching Git commands appears after the list):
• Create a checklist
• Create a release branch
• Bump the version
• Merge the release branch to master & tag it
• Use a pull request to merge the release branch
• Deploy master to the Prod environment
• Merge back into develop & delete the release branch
• Generate the changelog
• Communicate with stakeholders
• Groom the issue tracker
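
A sketch of these steps as Git commands, assuming a git-flow style repository and a hypothetical version 1.4.0:

git checkout develop
git checkout -b release/1.4.0           # create the release branch
# bump the version in the build files, then:
git commit -am "Bump version to 1.4.0"
git checkout master
git merge --no-ff release/1.4.0         # normally done through a pull request
git tag -a v1.4.0 -m "Release 1.4.0"    # tag the release on master
git checkout develop
git merge --no-ff release/1.4.0         # merge back into develop
git branch -d release/1.4.0             # delete the release branch
git push origin master develop --tags   # the deploy pipeline picks up master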


How to automate the whole build and release process?

• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD
manufacturing.
The above process can be automated with Jenkins by creating jobs for each stage; a shell sketch of the first stages follows.
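
As a rough sketch, the first stages reduce to a shell script that a Jenkins job runs on every commit; the repository URL and the Makefile targets here are assumptions:

set -eo pipefail                               # abort the job on the first failure
git clone https://github.com/example/app.git && cd app
make build | tee build.log                     # compile, teeing progress to a log
make test                                      # run automated unit tests against the build
make installer                                 # create the installer artifact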

Have you ever participated in prod deployments? If yes, what is the procedure?

• Preparation & planning: what kind of system/technology was supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes were going to talk to each other in a foolproof manner
• The production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, & installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system

Devops Tools and Concepts

What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?

By the name DevOps, it’s very clear that it’s a collaboration of Development as well as Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools and practices that helps automate the whole infrastructure.
DevOps is basically an implementation of Agile methodology on the Development side as well as Operations side.

We need DevOps to fulfil the need to deliver more applications, faster and better, to meet the ever-growing demands of users. DevOps helps deployments happen much faster than with traditional approaches.

The key aspects or principles behind DevOps are:

  • Infrastructure as a Code
  • Continuous Integration
  • Continuous Deployment
  • Automation
  • Continuous Monitoring
  • Security

Popular tools for DevOps are:

  • Git
  • AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
  • Jenkins
  • Ansible
  • Puppet
  • Nagios
  • Docker
  • ELK (Elasticsearch, Logstash, Kibana)

Can we consider DevOps as Agile methodology?

Of course we can! The only difference between Agile methodology and DevOps is that Agile is implemented only for the development section, while DevOps implements agility on both the development and operations sections.

What are some of the most popular DevOps tools?
Selenium
Puppet
Chef
Git
Jenkins
Ansible
Docker

What is the job Of HTTP REST API in DevOps?

DevOps centers on automating your infrastructure and moving changes through a pipeline of stages – every CI/CD pipeline has stages such as build, test, sanity test, UAT, and deployment to the Prod environment. Each stage uses different tools and a different technology stack, so there must be a way to integrate the different tools into a complete toolchain. That is where HTTP APIs come in: each tool communicates with the others using its API, and users can also use an SDK – such as Boto for Python, which calls the AWS APIs – to automate work based on events. These days pipelines are mostly event-driven rather than batch-processed.
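
For example, one stage can kick off a downstream tool through its HTTP API. The sketch below triggers a Jenkins job through Jenkins’ remote-access API; the host, job name, and credentials are assumptions:

curl -X POST -u "user:API_TOKEN" "https://jenkins.example.com/job/deploy-prod/build"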

What is Scrum?

Scrum is basically used to divide complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration typically lasts two weeks. Scrum consists of three roles: Product Owner, Scrum Master, and Team.

What are microservices, and how do they enable efficient DevOps practices?

In traditional architecture, each application is a monolith: it is developed by one group of developers, deployed as a single application on many machines, and exposed to the outside world behind load balancers. Microservices means breaking your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. Developers can likewise be formed into small teams, and each piece of the application may follow different guidelines for an efficient development process, with each service using REST APIs (or message queues) to communicate with the others.
The build and release of one faulty version therefore does not affect the whole architecture; only some functionality is lost. That assurance is what enables efficient and faster CI/CD pipelines and DevOps practices.

What is Continuous Delivery?

Continuous Delivery is an extension of continuous integration which primarily serves to get the features that developers are continuously developing out to end users as soon as possible. During this process, the code passes through several stages of QA, staging, etc. before delivery to the PRODUCTION system.

Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.

DevOps: Continuous Integration vs. Continuous Delivery

Why Automate?

Developers and administrators have usually had to provision their infrastructure manually. Rather than relying on manual steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.

What is Puppet?

Puppet is a configuration management tool used to automate administration tasks.

What is Configuration Management?

Configuration management is a systems engineering process. Applied over the life cycle of a system, configuration management provides visibility and control of the system’s performance and of its functional and physical attributes, recording their status in support of change management.

Software Configuration Management Features are:

• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes

What is Vagrant and what are its uses?

Vagrant originally used VirtualBox as the hypervisor for virtual environments, and it now also supports KVM (Kernel-based Virtual Machine).
Vagrant is a tool for creating and managing environments for testing and developing software.

What’s a PTR in DNS?

A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.

What testing is necessary to ensure a new service is ready for production?

Continuous testing

What is Continuous Testing?

It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.

What are the key elements of continuous testing?

Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.

How does HTTP work?

The HTTP protocol works in a client–server model like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.

What is IaC? How you will achieve this?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning as the DevOps team uses for source code. This is achieved with tools such as Chef, Puppet, Ansible, CloudFormation, etc.

Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.
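
A minimal sketch of that workflow with AWS CloudFormation; the template file and stack name are assumptions:

# The template is reviewed and versioned like any other source file …
git add template.yaml
git commit -m "Add an application load balancer to the stack"
# … and the environment is then produced from it, repeatably.
aws cloudformation deploy --template-file template.yaml --stack-name demo-stack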

What are patterns and anti-patterns of software delivery and deployment?


What are Microservices?

Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps.

  • Small
  • Decoupled
  • Owned by self-contained teams

Version Control

What is a version control system?

A Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously
• Does not allow overwriting each other’s changes
• Maintains the history of every version
There are two types of version control systems:
1. Centralized Version Control Systems, e.g. SVN
2. Distributed/Decentralized Version Control Systems, e.g. Git, Bitbucket

What is Source Control?

An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.

Source control basics: whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code changes, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can:

• Collaborate on code with your team.

• Isolate your work until it is ready.

• Quickly troubleshoot issues by identifying who made changes and what the changes were.

Source code management systems help streamline the development process and provide a centralized source for all your code.

What is Git and explain the difference between Git and SVN?

Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used to store our repositories on remote servers such as GitHub.

Git vs. SVN:
• Git is a decentralized version control tool; SVN is a centralized version control tool.
• Git keeps a local repo with the full history of the whole project on every developer’s hard drive, so if there is a server outage you can easily recover from a teammate’s local Git repo; SVN relies only on the central server to store all versions of the project files.
• In Git, push and pull operations are fast; in SVN they are slower.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• In Git, client nodes can share the entire repository on their local systems; in SVN, version history is stored in the server-side repository.
• In Git, commits can be done offline; in SVN, commits can be done only online.
• In Git, work is shared automatically by commit; in SVN, nothing is shared automatically.

Describe branching strategies?

Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.

Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.

Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have
progressed since the release was initiated earlier.

What are Pull requests?

Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements.
Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.

Linux

What are the default file permissions, and how can I modify them?

Default file permissions are rw-r--r-- (644): new files are created with mode 666 masked by the umask, which is 022 by default. To change the default, set a different umask, e.g. umask 077, so that new files are readable and writable only by their owner.

What is a kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.

What is the difference between grep -i and grep -v?

-i ignores case differences when matching; -v inverts the match, printing only the lines that do not match the pattern.
Example:  ls | grep -i docker
Dockerfile
docker.tar.gz
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
Note that docker.tar.gz does not appear, because lines matching “docker” are excluded.

How can you allocate a specific amount of space to a file?

This is generally used to create swap space on a server. Say I have to create 1 GB of swap space on the machine below; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
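
Note that dd only allocates the file; it still has to be formatted and enabled as swap. Continuing the example above:

chmod 600 /swapfile1   # restrict access to root
mkswap /swapfile1      # format the file as swap space
swapon /swapfile1      # enable it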

What is concept of sudo in linux?

Sudo(superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.

What are the checks to be done when a Linux build server become suddenly slow?

Perform a check on the following items:
1. System-level troubleshooting: check disk space, RAM, and I/O read–write issues.
2. Application-level troubleshooting: check the various log files – application server logs, WebLogic logs, web server logs, application log files, HTTP logs – to find any issues in server receive or response times, and check for memory leaks in applications.
3. Dependent-services troubleshooting: check for any issues with the network, antivirus, firewall, or SMTP server response times.

Jenkins

What is Jenkins?

Jenkins is an open-source continuous integration tool written in Java. It keeps track of a version control system and initiates and monitors a build system if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.

What is the difference between Maven, Ant and Jenkins?

Maven and Ant are build technologies, whereas Jenkins is a continuous integration (CI/CD) tool.

What is continuous integration?

When multiple developers or teams are working on different segments of the same web application, we need to perform integration testing by integrating all the modules. To do that, an automated process for each piece of code is performed on a daily basis so that all your code gets tested. This whole process is termed continuous integration.

Devops: Continuous Integration

Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

The microservices architecture is a design approach to build a single application as a set of small services.

What are the advantages of Jenkins?

• Bug tracking is easy at an early stage in the development environment.
• Provides a very large number of plugins.
• Iterative improvement to the code: the work is divided into small sprints.
• Build failures are caught at the integration stage.
• For each code-commit change, an automatic build-report notification is generated.
• To notify developers about build-report success or failure, it can be integrated with an LDAP mail server.
• Achieves continuous integration and supports agile and test-driven development environments.
• With simple steps, a Maven release project can also be automated.

Which SCM tools does Jenkins supports?

Source code management tools supported by Jenkins are below:
• AccuRev
• CVS
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase
• RTC

I have 50 jobs in the Jenkins dashboard and I want to build all the jobs at once

In Jenkins there is a plugin called “build after other projects are built”. We can provide job names there, and if one parent job runs, it will automatically run all the other jobs. Alternatively, we can use pipeline jobs.

How can I integrate all the tools with Jenkins?

Navigate to Manage Jenkins and then Global Tool Configuration; there you provide all the details, such as the Git URL, Java version, Maven version, paths, etc.

How to install Jenkins via Docker?

The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image and run it as a container in Docker using the following docker run command:

docker run -u root --rm -d -p 8080:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean

• Proceed to the post-installation setup wizard.
• Access the Jenkins/Blue Ocean Docker container:

docker exec -it jenkins-blueocean bash

• Access the Jenkins console log through Docker logs:

docker logs <docker-container-name>

• Access the Jenkins home directory:

docker exec -it <docker-container-name> bash

Bash – Shell scripting

Write a shell script to add two numbers

echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=$(expr $a + $b)    # or, using shell arithmetic: c=$((a + b))
echo "$a + $b = $c"

How to get a file that consists of last 10 lines of the some other file?

tail -10 file1 > file2   (redirect to a different file; redirecting to the same file would truncate it before tail reads it)

How to check the exit status of the commands?

echo $?

How to get the information from file which consists of the word “GangBoard”?

grep "GangBoard" filename

How to search the files with the name of “GangBoard”?

find / -type f -name "*GangBoard*"

Write a shell script to print only prime numbers?

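A minimal sketch in Bash, printing every prime up to a limit passed as the first argument (default 100):

#!/bin/bash
limit=${1:-100}
for ((n = 2; n <= limit; n++)); do
  is_prime=1
  for ((d = 2; d * d <= n; d++)); do   # trial division up to sqrt(n)
    if ((n % d == 0)); then
      is_prime=0
      break
    fi
  done
  if ((is_prime)); then
    echo "$n"
  fi
done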

How to pass the parameters to the script and how can I get those parameters?

Scriptname.sh parameter1 parameter2
Inside the script, $1 and $2 hold the individual parameters, and $* (or $@) expands to all of them.

Monitoring – Refactoring

My application is not coming up for some reason. How can you bring it up?

We need to check the following:
• The network connection
• Whether the web server is receiving users’ requests
• The logs
• The process IDs, to confirm whether the services are running
• Whether the application server is receiving users’ requests (check the application server logs and processes)
• Whether a network-level ‘connection reset’ is happening somewhere

What is multifactor authentication? What is the use of it?

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.

• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform

I want to copy the artifacts from one location to another location in cloud. How?

Create two S3 buckets, one to use as the source, and the other to use as the destination and then create policies.
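
For example, with the AWS CLI (the bucket names are assumptions):

aws s3 sync s3://source-bucket s3://destination-bucket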

How to  delete 10 days older log files?

find -mtime +10 -name "*.log" -exec rm -f {} \; 2>/dev/null

Ansible

What are the Advantages of Ansible?

• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural

What’s the use of Ansible?

Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let’s say we want to deploy an application to hundreds of nodes by executing a single command; that is where Ansible comes into the picture, although you should have some knowledge of Ansible scripts to understand or execute them.
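
As a quick sketch of Ansible in action, two ad-hoc commands run against an assumed inventory group named webservers:

ansible webservers -m ping                                         # check connectivity to every node
ansible webservers -m apt -a "name=nginx state=present" --become   # install nginx on all of them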

What are the Pros and Cons of Ansible?

Pros:
1. Open source
2. Agentless
3. Improved efficiency, reduced cost
4. Less maintenance
5. Easy-to-understand YAML files
Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management

What is the difference among chef, puppet and ansible?

• Interoperability: Ansible supports Windows managed nodes, but the server should be Linux/Unix; Chef works only on Linux/Unix; Puppet works only on Linux/Unix.
• Configuration language: Ansible uses YAML (and is written in Python); Chef uses Ruby; Puppet uses the Puppet DSL.
• Availability: Ansible has a single active node; Chef has a primary server and a backup server; Puppet has a multi-master architecture.

How to access variable names in Ansible?

Using hostvars method we can access and add the variables like below

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

Docker

What is Docker?

Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.

What is Docker image?

Docker image is the source of Docker container. Or in other words, Docker images are used to create containers.

What is a Docker Container?

Docker Container is the running instance of Docker Image

How to stop and restart the Docker container?

To stop a container: docker stop <container-id>
To restart it: docker restart <container-id>

What platforms does Docker run on?

Docker runs on only Linux and Cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+

Cloud:
• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace

Note that Docker does not run natively on Windows or Mac for production, as there is no support; you can, however, use it for testing purposes, even on Windows.

What are the tools used for docker networking?

For Docker networking and orchestration we generally use Kubernetes and Docker Swarm.

What is docker compose?

Let’s say you want to run multiple Docker containers. You create a docker-compose file and type the command docker-compose up; it will run all the containers mentioned in the compose file.

How to deploy docker container to aws?

Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services, we can launch applications on it.

What is the fundamental drawback of Docker containers?

Containers are ephemeral: once a container is destroyed, you cannot recover any data inside it – that data is lost forever. Persistent storage for data inside containers has to be achieved by mounting volumes from an external source, such as the host machine or an NFS driver.

What are Docker Engine and Docker Compose?

Docker Engine talks to the Docker daemon on the machine and creates the runtime environment and process for any container. Docker Compose links several containers together to form a stack, used for creating application stacks such as LAMP, WAMP, and XAMPP.

In which modes can a container be run?

A Docker container can be run in two modes:
Attached: the container runs in the foreground of the system, and provides a terminal inside the container when the -t option is used; every log is redirected to the stdout screen.
Detached: this mode is typically used in production, where the container runs as a background process and all output inside the container is redirected to log files under /var/lib/docker/containers/<container-id>/<container-id>-json.log, which can be viewed with the docker logs command.

What is the output of the docker inspect command?

docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of host- or container-specific information, such as the underlying file driver and log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID…]
Options:
• --format, -f: format the output using the given Go template
• --size, -s: display total file sizes if the type is container
• --type: return JSON for a specified type

What is docker swarm?

A group of virtual machines running Docker Engine can be clustered and maintained as a single system, with resources shared by the containers; the Docker Swarm manager schedules a Docker container onto any of the machines in the cluster according to resource availability.
docker swarm init can be used to initiate a Docker Swarm cluster, and docker swarm join, run with the manager’s IP on a client, joins that node into the swarm cluster.

What are Docker volumes, and what sort of volume should be used to achieve persistent storage?

Docker volumes are filesystem mount points created by the user for a container, and a volume can be used by multiple containers. There are different sorts of volume mounts available – an empty dir, a bind mount, AWS-backed EBS volumes, Azure volumes, Google Cloud volumes, or even NFS and CIFS filesystems. A volume should be mounted from one of these external sources to achieve persistent storage, because the lifetime of files inside a container ends when the container is deleted, and the data would otherwise be lost.
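
A sketch using a named volume, so the data outlives any single container; the volume, path, and image names are assumptions:

docker volume create app-data
docker run -d -v app-data:/var/lib/app myimage:1.0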

How do you version-control Docker images?

Docker images can be version-controlled using tags: you can assign a tag to any image using the docker tag <image-id> command. Also, if you push to a Docker Hub registry without tagging, the default tag latest is assigned – and even if an image tagged latest is already present, the tag is reassigned to the most recently pushed image.
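
For example (the image ID, repository, and tag are assumptions):

docker tag 3e1f6e2b9c4a myrepo/app:1.2.0
docker push myrepo/app:1.2.0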

What is difference between docker image and docker container?

A Docker image is a read-only template that contains the instructions from which a container starts.
A Docker container is a runnable instance of a Docker image.

What is Application Containerization?

It is an OS-level virtualization technique used to deploy applications without launching an entire VM for each application; multiple isolated applications or services can access the same host and run on the same OS.

What is the syntax for building a Docker image?

docker build -f <dockerfile-path> -t imagename:version .

How do you run a Docker image?

docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v <hostvolume>:<containervolume> imagename:version

How do you log into a container?

docker exec -it <container-id> /bin/bash

Git

What does the commit object contain?

A commit object contains the following components:
• A set of files, representing the state of the project at a given point in time
• References to parent commit objects
• An SHA-1 name: a 40-character string that uniquely identifies the commit object (also called the hash)

Explain the difference between git pull and git fetch?

Git pull command basically pulls any new changes or commits from a branch from your central repository and updates your target branch in your local repository.
Git fetch is used for the same purpose, but it works slightly differently from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. To reflect these changes in your target branch, git fetch must be followed by a git merge; your target branch is only updated after merging the fetched branch. To make it easy to remember, think of the equation below:
Git pull = git fetch + git merge
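
In commands, assuming the remote is origin and the branch is master:

git fetch origin            # download new commits without touching your working branch
git merge origin/master     # integrate them; together this equals: git pull origin master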

How do we know in Git if a branch has already been merged into master?

git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.

What is ‘Staging Area’ or ‘Index’ in GIT?

Before committing a file, it must be staged and reviewed in an intermediate area known as the ‘Staging Area’ or ‘Index’. Files are placed there with git add.

What is Git Stash?

Let’s say you’ve been working on part of your project, things are in a messy state, and you want to switch branches for a while to work on something else. The problem is, you don’t want to commit your half-done work just so you can get back to this point later. The answer to this issue is git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.

What is Git stash drop?

The git ‘stash drop’ command is used to remove a stashed item. By default it removes the last added stash item, and it can also remove a specific item if you include it as an argument.
If you want to remove a particular stash item from the list of stashed items, first list them:
git stash list: displays the list of stashed items, for example:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log
Then remove the one you want, e.g.: git stash drop stash@{1}

What is the function of ‘git config’?

Git uses your username to associate commits with an identity. The git config command can be used to change your Git configuration, including your username.
Suppose you want to set a username and email id to associate commits with an identity so that you can know who made a commit. For that, use:
git config --global user.name "Your Name": this command adds your username.
git config --global user.email "Your E-mail Address": this command adds your email id.

How can you create a repository in Git?

To create a repository, create a directory for the project if it does not exist, then run the command “git init”. Running this command creates a .git directory inside the project directory.

What language is used in Git?

Git is written in C, and since it is written in C it is very fast and reduces the overhead of runtimes.

What is SubGit?

SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository, letting you use both Subversion and Git for as long as you like.

How can you clone a Git repository via Jenkins?

First, enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.

What are the advantages of using Git?

1. Data redundancy and replication
2. High availability
3. Only one .git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used for any sort of project.

What is git add?

It adds the file changes to the staging area

What is git commit? 

Commits the staged changes to the local repository (advancing HEAD)

What is git push?

Sends the changes to the remote repository

What is git checkout?

Switch branch or restore working files

What is git branch?

Creates a branch

What is git fetch?

Fetch the latest history from the remote server and updates the local repo

What is git merge?

Joins two or more branches together

What is git pull?

Fetch from and integrate with another repository or a local branch (git fetch + git merge)

What is git rebase?

Process of moving or combining a sequence of commits to a new base commit

What is git revert?

To revert a commit that has already been published and made public

What is git clone?

Clones the git repository and creates a working copy in the local machine

How can I modify the commit message in git?

Use the following command and enter the required message:
git commit --amend

How do you handle merge conflicts in Git?

Follow these steps (a command-level sketch follows the list):
1. Create a pull request
2. Modify the files according to the requirements, sitting together with the developers
3. Commit the corrected files to the branch
4. Merge the current branch with the master branch
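
At the command level, resolving a conflict might look like this sketch; the branch and file names are assumptions:

git checkout feature-branch
git merge master          # Git reports the conflicting files
# edit the files to resolve the conflict markers, then:
git add src/app.py
git commit                # records the merge resolution
git push origin feature-branch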

What is the Git command to send your modifications to the master branch of your remote repository?

Use the command “git push origin master”

NOSQL

What are the benefits of NoSQL database on RDBMS?

Benefits:
1. Very low ETL overhead
2. Support for semi-structured text
3. Schema changes over time are handled
4. Key-value store functionality
5. The ability to scale horizontally
6. Many data structures are provided
7. A choice of vendors

Maven

What is Maven?

Maven is a DevOps tool used for building Java applications which helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.

Numpy

What is Numpy

There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing, providing a powerful n-dimensional array object, and NumPy offers tools to integrate with C, C++, and so on. NumPy is a package library for Python, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions: financial functions, linear algebra, statistics, polynomials, sorting and searching, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.

Why is python numpy better than lists?

NumPy arrays should be considered instead of lists because they are fast, consume less memory, and are convenient to use, with lots of functionality.

Describe the map function in Python?

The map function executes the function given as the first argument on all the elements of the iterable given as the second argument. For example, map(lambda x: x * 2, [1, 2, 3]) yields 2, 4, and 6.

How to generate an array of ‘100’ random numbers sampled from a standard normal distribution using Numpy

import numpy as np
a = np.random.randn(100)   # randn draws from the standard normal; rand would give uniform values
print(type(a))
print(a)

This will create 100 random numbers drawn from the standard normal distribution, with mean 0 and standard deviation 1.


How to count the occurrence of each value in a numpy array?

Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
The argument to bincount() must consist of booleans or non-negative integers; negative integers are invalid.

Output: [4 1 1 1 2 3 0 0 0 2]

Does Numpy Support Nan?

nan, short for “not a number”, is a special floating-point value defined by the IEEE-754 specification. Python numpy supports nan, but the definition of nan is more system-dependent, and some systems, like older Cray and VAX computers, don’t have all-round support for it.

What does ravel() function in numpy do? 

It flattens a multi-dimensional numpy array into a one-dimensional array. Unlike flatten(), ravel() returns a view of the original array wherever possible, rather than a copy.

How to remove from one array those items that exist in another? 

>>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From 'a' remove all of 'b'
>>> np.setdiff1d(a,b)
# Output:
>>> array([5, 3, 2])

How to reverse a numpy array in the most efficient way?

>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]

How to calculate percentiles when using numpy?

>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)

Output:  37.400000000000006

What Is The Difference Between Numpy And Scipy?

NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.

What Is The Preferred Way To Check For An Empty (zero Element) Array?

For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size

What Is The Difference Between Matrices And Arrays?

Matrices can only be two-dimensional, whereas arrays can have any number of dimensions.

How can you find the indices of an array where a condition is true?

Given an array arr, the condition arr > 3 returns a boolean array; since True and False are interpreted as 1 and 0, you can pass that mask to np.nonzero() or np.where() to get the indices where the condition holds.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
array([[ True, True, True], [ True, True, True], [False, False, False]], dtype=bool)
>>> np.where(arr > 3)
(array([0, 0, 0, 1, 1, 1]), array([0, 1, 2, 0, 1, 2]))

How to find the maximum and minimum value of a given flattened array?

>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)

Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis. 

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.ptp(arr, 1)

Find median of a numpy flattened array

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.median(arr)

Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis

>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)

Calculate covariance matrix between two numpy arrays

>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)

Compute product-moment correlation coefficients of two given numpy arrays

>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)

Develop a numpy program to compute the histogram of nums against the bins

>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)

Get the powers of array values element-wise

>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)

Write a NumPy program to get true division of the element-wise array inputs

>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)

Pandas

What is a series in pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using the ‘Series’ method, we can easily convert a list, tuple, or dictionary into a series. A Series cannot contain multiple columns.

What features make Pandas such a reliable option to store tabular data?

Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.

What is re-indexing in pandas?

Reindexing is used to conform a DataFrame to a new index with optional filling logic. It places NA/NaN in locations where values are not present in the previous index. It returns a new object unless the new index is equivalent to the current one and copy=False. It is used to change the index of the rows and columns of the DataFrame.

How will you create a series from dict in Pandas?

A Series is defined as a one-dimensional array that is capable of storing various data
types.

import pandas as pd
info = {'x' : 0., 'y' : 1., 'z' : 2.}
a = pd.Series(info)

How can we create a copy of the series in Pandas?

Use the pandas.Series.copy method on the Series instance:
import pandas as pd
s = pd.Series([1, 2, 3])
s_copy = s.copy(deep=True)

 

What is groupby in Pandas?

GroupBy is used to split the data into groups based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with its parameters, and it makes the task of splitting the data quick and easy. For example, df.groupby('team')['score'].mean() computes the mean score per team.

What is vectorization in Pandas?

Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.

Different types of Data Structures in Pandas

Pandas provides two data structures: Series and DataFrames. Both of these data structures are built on top of NumPy.

What Is Time Series In pandas

A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.

How to convert pandas dataframe to numpy array?

The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array, and copy=True ensures the returned value is not a view on another array.

Write a Pandas program to get the first 5 rows of a given DataFrame

>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data, index=labels)
>>> df.iloc[:5]

Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data. 

>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])

Write a Python program to convert a pandas Series to a Python list and check its type.

>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())

Develop a Pandas program to add, subtract, multiple and divide two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2

Develop a Pandas program to compare the elements of the two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2

Develop a Pandas program to change the data type of given a column or a Series.

>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2

Write a Pandas program to convert Series of lists to one Series

>>> import pandas as pd
>>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)

Write a Pandas program to create a subset of a given series based on value and condition

>>> import pandas as pd
>>> s = pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s

Develop a Pandas code to alter the order of index in a given series

>>> import pandas as pd
>>> s = pd.Series(data=[1, 2, 3, 4, 5], index=['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index=['B', 'A', 'C', 'D', 'E'])

Write a Pandas code to get the items of a given series not present in another given series.

>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result

What is the difference between the two data series df['Name'] and df.loc[:, 'Name']?

The first one is a view of the original dataframe, and the second one is a copy of the original dataframe.

Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.

>>> import pandas as pd
>>> import numpy as np
>>> rng = np.random.RandomState(100)
>>> num_series = pd.Series(rng.randint(1, 5, [15]))
>>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'
>>> num_series

Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.

>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5 == 0)

How will you add a column to a pandas DataFrame?

# importing the pandas library
>>> import pandas as pd
>>> info = {'one': pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two': pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20, 40, 60], index=['a', 'b', 'c'])

How to iterate over a Pandas DataFrame?

You can iterate over the rows of a DataFrame by using a for loop in combination with an iterrows() call on the DataFrame.
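
A short sketch (the column names are illustrative):

import pandas as pd

df = pd.DataFrame({'name': ['Anastasia', 'Dima'], 'age': [30, 25]})
# iterrows() yields (index, row-as-Series) pairs
for idx, row in df.iterrows():
    print(idx, row['name'], row['age'])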

Python

What type of language is python? Programming or scripting?

Python is capable of scripting, but in the general sense it is considered a general-purpose programming language.

Is python case sensitive?

Yes, Python is a case-sensitive language.

What is a lambda function in python?

An anonymous function is known as a lambda function. This function can have any
number of parameters but can have just one statement.
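
For example:

add = lambda x, y: x + y  # any number of parameters, a single expression
print(add(2, 3))  # 5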

What is the difference between range and xrange in Python?

range and xrange are the same in terms of functionality. The only difference is that range returns a Python list object and xrange returns an xrange object. (This applies to Python 2; in Python 3, xrange was removed and range behaves like xrange did.)

What are docstrings in python?

Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.

Whenever Python exits, why isn’t all the memory deallocated?

Whenever Python exits, objects with circular references to other objects, or objects referenced from the global namespace, are not always de-allocated or freed. It is also impossible to de-allocate those portions of memory that are reserved by the C library. On exit, Python does try to de-allocate or destroy every other object with its own efficient clean-up mechanism.

What does this mean: *args, **kwargs? And why would we use it?

We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
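
A small sketch showing both:

def demo(*args, **kwargs):
    print(args)    # positional arguments collected into a tuple
    print(kwargs)  # keyword arguments collected into a dict

demo(1, 2, color='red')        # (1, 2) / {'color': 'red'}
demo(*[3, 4], **{'size': 10})  # unpacking a list and a dict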

What is the difference between deep and shallow copy?

A shallow copy creates a new object but inserts references to the objects found in the original, so nested mutable objects are shared between the two.
A deep copy creates a new object and recursively copies the objects found in the original. It does not copy the reference pointers, so the copy is fully independent of the original.
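
The difference is easiest to see with the copy module on a nested list:

import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)   # new outer list, inner lists shared
deep = copy.deepcopy(original)  # inner lists duplicated as well

original[0].append(99)
print(shallow[0])  # [1, 2, 99] -- sees the change
print(deep[0])     # [1, 2]     -- unaffected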

Define encapsulation in Python?

Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.

Does python make use of access specifiers?

Python does not restrict access to an instance variable or function. Python lays down the concept of prefixing the name of a variable, function, or method with a single or double underscore to imitate the behavior of protected and private access specifiers.
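
A small sketch of these naming conventions (the class is illustrative):

class Account:
    def __init__(self):
        self.balance = 100  # public by convention
        self._rate = 0.02   # single underscore: treat as protected
        self.__pin = 1234   # double underscore: name-mangled to _Account__pin

acct = Account()
print(acct._rate)          # still accessible; the underscore is only a hint
print(acct._Account__pin)  # name mangling discourages, but does not prevent, access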

What are the generators in Python?

Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.
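
For example:

def countdown(n):
    while n > 0:
        yield n  # the yield expression makes this a generator function
        n -= 1

for i in countdown(3):
    print(i)  # 3, 2, 1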

Write a Python script to find whether a sequence is a palindrome

a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")

How will you remove the duplicate elements from the given list?

The set is another type available in Python. It doesn't allow duplicates and provides some useful functions to perform set operations like union, difference, etc.
>>> list(set(a))

Does Python allow arguments Pass by Value or Pass by Reference?

Arguments in Python are neither passed by value nor passed by reference; they are passed by assignment. The parameter you pass is originally a reference to the object, not a reference to a fixed memory location, but the reference itself is passed by value. Additionally, some data types like strings and tuples are immutable, whereas others are mutable.

What is slicing in Python?

Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.
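
For example:

s = 'interview'
print(s[1:4])     # 'nte'
print(s[::-1])    # 'weivretni' (reversed)
nums = [0, 1, 2, 3, 4, 5]
print(nums[::2])  # [0, 2, 4] (every second item)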

Why is the “pass” keyword used in Python?

The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.
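
For example:

def not_implemented_yet():
    pass  # placeholder keeps the body syntactically valid

class EmptyConfig:
    pass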

What are decorators in Python?

Decorators in Python are essentially functions that add functionality to an existing function without changing the structure of the function itself. They are applied with the @decorator_name syntax and are called in bottom-up fashion.
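
A minimal sketch of a decorator that logs calls (the names are illustrative):

def logged(func):
    def wrapper(*args, **kwargs):
        print(f'calling {func.__name__}')
        return func(*args, **kwargs)
    return wrapper

@logged
def greet(name):
    return f'Hello, {name}!'

print(greet('Ada'))  # prints the log line, then 'Hello, Ada!'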

What is the key difference between lists and tuples in python?

The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.

What is self in Python?

self is not strictly a keyword but the conventional name for the first parameter of instance methods; it refers to the instance or object of the class. In Python, it must be listed explicitly as the first parameter, unlike in Java, where the equivalent (this) is implicit. It helps in distinguishing the methods and attributes of a class from local variables.
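
For example:

class Counter:
    def __init__(self):
        self.count = 0   # attribute on this particular instance

    def increment(self):
        self.count += 1  # self distinguishes the attribute from a local variable

c = Counter()
c.increment()
print(c.count)  # 1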

What is PYTHONPATH in Python?

PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.

What is the difference between .py and .pyc files?

.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.

What is namespace in Python?

In Python, every name introduced has a place where it lives and can be looked up. This is known as a namespace. It is like a box where a variable name is mapped to the object placed in it. Whenever a variable is searched for, this box is searched to get the corresponding object.

What is pickling and unpickling?

Pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function, this process is called pickling. While the process of retrieving original Python objects from the stored string representation is called unpickling.
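
A small sketch of both directions (the file name is illustrative):

import pickle

data = {'name': 'Ada', 'scores': [95, 88]}

with open('data.pkl', 'wb') as f:
    pickle.dump(data, f)       # pickling: object -> byte stream on disk

with open('data.pkl', 'rb') as f:
    restored = pickle.load(f)  # unpickling: byte stream -> object

print(restored == data)  # True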

How is Python interpreted?

Python language is an interpreted language. The Python program runs directly from the source code. It converts the source code that is written by the programmer into an intermediate language, which is again translated into machine language that has to be executed.

Jupyter Notebook

What is the main use of a Jupyter notebook?

Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.

How do I increase the cell width of the Jupyter/ipython notebook in my browser?

>>> from IPython.core.display import display, HTML
>>> display(HTML("<style>.container { width:100% !important; }</style>"))

How do I convert an IPython Notebook into a Python file via command line?

jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb

How to measure execution time in a jupyter notebook?

%%time is an inbuilt magic command that reports the execution time of a cell.

How to run a jupyter notebook from the command line?

jupyter nbconvert --to notebook --execute nb.ipynb

How to make inline plots larger in jupyter notebooks?

Use the figsize argument.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')

How to display multiple images in a jupyter notebook?

>>> for ima in images:
...     plt.figure()
...     plt.imshow(ima)

Why is the Jupyter notebook interactive code and data exploration friendly?

The ipywidgets package provides many common user interface controls for exploring code and data interactively.

What is the default formatting option in jupyter notebook?

Default formatting option is markdown

What are kernel wrappers in jupyter?

Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.

What are the advantages of custom magic commands?

Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.
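
For instance, a minimal sketch of registering a custom line magic, run inside an IPython/Jupyter session (the magic name here is made up):

from IPython.core.magic import register_line_magic

@register_line_magic
def shout(line):
    """Toy magic: %shout hello -> HELLO"""
    return line.upper()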

Is the jupyter architecture language dependent?

No. It is language independent

Which tools allow jupyter notebooks to easily convert to pdf and html?

Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.

What is a major disadvantage of a Jupyter notebook?

It is very hard to run long asynchronous tasks, and it is less secure.

In which domain is the jupyter notebook widely used?

It is mainly used for data analysis and machine learning related tasks.

What are alternatives to jupyter notebook?

PyCharm interact, VS Code Python Interactive etc.

Where can you make configuration changes to the jupyter notebook?

In the config file located at ~/.jupyter/jupyter_notebook_config.py (IPython settings live in ~/.ipython/profile_default/ipython_config.py).

Which magic command is used to run python code from jupyter notebook?

%run can execute python code from .py files

How to pass variables across the notebooks in Jupyter?

The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to a different notebook'
>>> %store data
# Stored 'data' (str)
# In the new notebook
>>> %store -r data
>>> print(data)

Export the contents of a cell/Show the contents of an external script

Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.

What inbuilt tool we use for debugging python code in a jupyter notebook?

Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.

How to make high resolution plots in a jupyter notebook?

%config InlineBackend.figure_format = 'retina'

How can one use latex in a jupyter notebook?

When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.
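
For example, putting the following in a Markdown cell renders Euler's identity as a formula:

$$ e^{i\pi} + 1 = 0 $$

Inline math works the same way with single dollar signs, e.g. $\frac{a}{b}$.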

What is a jupyter lab?

It is a next-generation user interface for conventional Jupyter notebooks. Users can drag and drop cells, arrange the code workspace, and see live previews. It's still in an early stage of development.

What is the biggest limitation for a Jupyter notebook?

Code versioning, management, and debugging are not scalable in the current Jupyter notebook.

Cloud Computing


Which are the different layers that define cloud architecture?

Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus

Explain Cloud Service Models?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (DaaS)

What are Hybrid clouds?

Hybrid clouds are made up of both public clouds and private clouds. It is preferred over both because it applies the most robust approach to implementing cloud architecture.
A hybrid cloud has the features and performance of both private and public clouds. It has an important feature whereby the cloud can be created by one organization and control of it can be given to another organization.

Explain Platform as a Service (Paas)?

It is also a layer in cloud architecture. Platform as a Service is responsible for providing complete virtualization of the infrastructure layer, making it look like a single server that is invisible to the outside world.

What is the difference between cloud computing and mobile cloud computing?

Mobile cloud computing and cloud computing share the same concept; in mobile cloud computing, the cloud is accessed from a mobile device, and most tasks can be performed from the mobile. These applications run on a remote mobile server and give the user the right to access and manage storage.

What are the security aspects provided with the cloud?

There are 3 types of Cloud Computing Security:
● Identity Management: It authorizes the application services.
● Access Control: The user needs permission so that they can control the access of another user who is entering the cloud environment.
● Authentication and Authorization: Allows only authorized and authenticated users to access the data and applications.

What are system integrators in cloud computing?

System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly.
A person or a company that specializes in system integration is called a system integrator.

What is the usage of utility computing?

Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges for specific usage rather than a flat rate.
Utility computing is a plug-in managed by an organization that decides what type of services has to be deployed from the cloud. It facilitates users to pay only for what they use.

What are some large cloud providers and databases?

Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL

Explain the difference between cloud and traditional data centers.

In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software issues, so not only is the initial cost higher, but the maintenance cost is also a problem.
A cloud, by contrast, is scaled when there is an increase in demand, and the expenditure is mostly on the maintenance of the data centers; these issues are not faced in cloud computing.

What is hypervisor in Cloud Computing?

It is a virtual machine monitor that can logically manage resources for virtual machines. It allocates, partitions, isolates, or changes resources according to the virtualization program given.
A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.

Define what MultiCloud is?

Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.

What is a multi-cloud strategy?

The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.

What is meant by Edge Computing, and how is it related to the cloud?

Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.

What are the disadvantages of the SaaS cloud computing layer?

1) Security
Actually, data is stored in the cloud, so security may be an issue for some users. However, cloud computing is not more secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility that there may be greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications whose demand response time is in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the other SaaS application.

What is IaaS in Cloud Computing?

IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this type of model, organizations are provided IT infrastructure such as servers, processing, storage, virtual machines, and other resources. Customers can access the resources very easily over the internet using an on-demand pay model.

Explain what is the use of “EUCALYPTUS” in cloud computing?

EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform. With the help of EUCALYPTUS, public, private, and hybrid clouds can be built. It can produce its own data centers, and it allows you to offer its functionality to many other organizations.
When you add a software stack, such as an operating system and applications, to the service, the model shifts to a Software as a Service model; this is why Microsoft's Windows Azure Platform is best represented as presently using a SaaS model.

Name the foremost refined and restrictive service model?

The most refined and restrictive service model is PaaS. Once the service requires the consumer to use an entire hardware/software/application stack, it is using the most refined and restrictive service model.

Name all the kinds of virtualization that are also characteristics of cloud computing.

Storage, Application, and CPU. To enable these characteristics, resources must be highly configurable and flexible.

What Are Main Features Of Cloud Services?

Some important features of the cloud service are given as follows:
• Accessing and managing the commercial software.
• Centralizing the activities of management of software in the Web environment.
• Developing applications that are capable of managing several clients.
• Centralizing the updating feature of software, which eliminates the need to download upgrades.

What Are The Advantages Of Cloud Services?

Some of the advantages of cloud service are given as follows:
• Helps in the utilization of investment in the corporate sector and is therefore cost saving.
• Helps in developing scalable and robust applications. Previously, scaling took months, but now it takes less time.
• Helps in saving time in terms of deployment and maintenance.

Mention The Basic Components Of A Server Computer In Cloud Computing?

A server computer in cloud computing is built from the same basic hardware components used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, and power supply.

What are the advantages of auto-scaling?

Following are the advantages of autoscaling
● Offers fault tolerance
● Better availability
● Better cost management


Azure Cloud


Which services are provided by the Windows Azure operating system?

Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management

Which service in Azure is used to manage resources in Azure?

Azure Resource Manager is used to manage infrastructures that involve a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON script.

Which web applications can be deployed with Azure?

Microsoft also has released SDKs for both Java and Ruby to allow applications written in those languages to place calls to the Azure Service Platform API to the AppFabric Service.

What are Roles in Azure and why do we use them?

Roles are nothing but servers, in layman's terms. These servers are managed, load-balanced, Platform as a Service virtual machines that work together to achieve a common goal.
There are 3 types of roles in Microsoft Azure:
● Web Role
● Worker Role
● VM Role
Let’s discuss each of these roles in detail:
Web Role – A web role is basically used to deploy a website, using languages supported by the IIS platform such as PHP and .NET. It is configured and customized to run web applications.
Worker Role – A worker role is more of a helper to the web role; it is used to execute background processes, unlike the web role, which is used to deploy the website.
VM Role – The VM role is used by a user to schedule tasks and other Windows services.
This role can be used to customize the machines on which the web and worker roles are running.

What is Azure as PaaS?

PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.

What are Break-fix issues in Microsoft Azure?

In Microsoft Azure, all technical problems are called break-fix issues. The term is used when "work is involved" in supporting a technology when it fails in the normal course of its function.

Explain Diagnostics in Windows Azure

Windows Azure Diagnostics offers the facility to store diagnostic data. In Azure, some diagnostic data is stored in tables, while some is stored in blobs. The diagnostic monitor runs in Windows Azure as well as in the computer's emulator to collect data for a role instance.

State the difference between verbose and minimal monitoring.

Verbose monitoring collects metrics based on performance. It allows a close analysis of the data fed in while the application runs.
Minimal monitoring, on the other hand, is the default configuration method. It makes use of performance counters gathered from the operating system of the host.

What is the main difference between the repository and the powerhouse server?

The main difference between them is that repository servers maintain the integrity, consistency, and uniformity of data, while the powerhouse server governs the integration of different aspects of the database repository.

Explain command task in Microsoft Azure

A command task is an operational window that sets off the flow of one or more shell commands while the system is running.

What is the difference between Azure Service Bus Queues and Storage Queues?

Two types of queue mechanisms are supported by Azure: Storage queues and Service Bus queues.
Storage queues: These are part of the Azure storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface. They provide persistent and reliable messaging within and between services.
Service Bus queues: These are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.

Explain Azure Service Fabric.

Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment and management of highly scalable and customizable applications.
The applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.

Define the Azure Redis Cache.

Azure Redis Cache is an in-memory cache based on open-source Redis that helps web applications fetch data from a backend data source into the cache and serve web pages from the cache to enhance application performance. It provides a powerful and secure way to cache an application's data in the Azure cloud.

How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?

TWO. And if we do so, the role would have external connectivity at least 99.95% of the time.

What are the options to manage session state in Windows Azure?

● Windows Azure Caching
● SQL Azure
● Azure Table

What is cspack?

It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.

What is csrun?

It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.

How to design applications to handle connection failure in Windows Azure?

The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).

What is Windows Azure Diagnostics?

Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.

What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?

Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports queuing as well as publish/subscribe, Web service remoting, and integration patterns.

What is the use of Azure Active Directory?

Azure Active Directory is an identity and access management system. It is very similar to on-premises Active Directory. It allows you to grant your employees access to specific products and services within the network.

Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?

This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.

What are virtual machine scale sets in Azure?

Virtual machine scale sets are an Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required, so it's easier to build large-scale services that target big compute, big data, and containerized workloads.

Are data disks supported within scale sets?

Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include:
● Azure files (SMB shared drives)
● OS drive
● Temp drive (local, not backed by Azure Storage)
● Azure data service (for example, Azure tables, Azure blobs)
● External data service (for example, remote database)

What is the difference between the Windows Azure Platform and Windows Azure?

The former is Microsoft’s PaaS offering including Windows Azure, SQL Azure, and AppFabric; while the latter is part of the offering and Microsoft’s cloud OS.

What are the three main components of the Windows Azure Platform?

Compute, Storage and AppFabric.

Can you move a resource from one group to another?

Yes, you can. A resource can be moved among resource groups.

How many resource groups can a subscription have?

A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.

Explain the fault domain.

This is one of the common Azure interview questions. It should be answered that a fault domain is a logical working domain in which the underlying hardware shares a common power source and network switch. This means that when VMs are created, Azure distributes them across fault domains, which limits the potential impact of hardware failure, power interruption, or network outages.



AWS Cloud


Explain what S3 is?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is "pay as you go."
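
A quick sketch using the boto3 SDK (assuming AWS credentials are already configured; the bucket and key names here are hypothetical):

import boto3

s3 = boto3.client('s3')
# Store an object, then retrieve it
s3.put_object(Bucket='my-example-bucket', Key='hello.txt', Body=b'Hello, S3!')
obj = s3.get_object(Bucket='my-example-bucket', Key='hello.txt')
print(obj['Body'].read())  # b'Hello, S3!'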

What is AMI?

AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.

Mention what the relationship between an instance and AMI is?

From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.

How many buckets can you create in AWS by default?

By default, you can create up to 100 buckets in each of your AWS accounts.

Explain whether you can vertically scale an Amazon instance. How?

Yes, you can vertically scale an Amazon instance. To do so:
● Spin up a new, larger instance than the one you are currently running
● Pause that instance and detach the root EBS volume from the server and discard it
● Then stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● And start it again

Explain what T2 instances are.

T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.

In VPC with private and public subnets, database servers should ideally be launched into which subnet?

With private and public subnets in VPC, database servers should ideally launch into private subnets.

Mention what the security best practices for Amazon EC2 are?

For Amazon EC2 security best practices, follow these steps:
● Use AWS Identity and Access Management to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up the permissions that you require
● Disable password-based logins, for example, for instances launched from your AMI

Is the property of broadcast or multicast supported by Amazon VPC?

No, currently Amazon VPC does not provide support for broadcast or multicast.

How many Elastic IPs does AWS allow you to create?

5 VPC Elastic IP addresses are allowed for each AWS account.

Explain default storage class in S3

The default storage class is S3 Standard, which is designed for frequently accessed data.

What are the Roles in AWS?

Roles are used to provide permissions to entities which you can trust within your AWS account.
Roles are very similar to users. However, with roles you do not need to create a username and password to work with the resources.

What are the edge locations?

An edge location is the area where content is cached. So when a user tries to access any content, the content is automatically searched for in the edge location.

Explain snowball?

Snowball is a data transport option. It uses secure appliances to move large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another. It helps you to reduce networking costs.

What is a redshift?

Redshift is a big data warehouse product. It is a fast, powerful, fully managed data warehouse service in the cloud.

What is meant by subnet?

A large section of IP addresses divided into smaller chunks is known as a subnet.

Can you establish a Peering connection to a VPC in a different region?

Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.

What is SQS?

Simple Queue Service, also known as SQS, is a distributed queuing service that acts as a mediator between two controllers.

How many subnets can you have per VPC?

You can have 200 subnets per VPC.

What is Amazon EMR?

Amazon EMR is a managed cluster platform that lets you run big data frameworks such as Apache Hadoop and Apache Spark on Amazon Web Services to investigate large amounts of data. You can prepare data for analytics goals and marketing-intelligence workloads using Apache Hive and other relevant open-source tools.

What is the boot time for an instance store-backed AMI?

The boot time for an Amazon instance store-backed AMI is less than 5 minutes.

Do you need an internet gateway to use peering connections?

No, an internet gateway is not needed to use VPC (virtual private cloud) peering connections.

How to connect an EBS volume to multiple instances?

An EBS volume cannot be attached to multiple instances at the same time. However, you can attach multiple EBS volumes to a single instance.

What are the different types of Load Balancer in AWS services?

Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer

In which situation would you select provisioned IOPS over standard RDS storage?

You should select provisioned IOPS storage over standard RDS storage if you want to perform batch-related workloads.

What are the important features of Amazon CloudSearch?

Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Entire text search
● Autocomplete suggestions

What is AWS CDK?

AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
• Create and provision AWS infrastructure deployments predictably and repeatedly.
• Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling.
• Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
• Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.
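
Below is a minimal sketch of a CDK app in Python (assuming aws-cdk-lib v2 and the constructs package are installed; the stack and bucket names are hypothetical):

from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct

class DemoStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        # One construct; `cdk synth` turns this into a CloudFormation template
        s3.Bucket(self, 'DemoBucket')

app = App()
DemoStack(app, 'DemoStack')
app.synth()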

What are best practices for controlling access to AWS CodeCommit?

– Create your own policy
– Provide temporary access credentials to access your repo
* Typically done via a separate AWS account for IAM and separate accounts for dev/staging/prod
* Federated access
* Multi-factor authentication

What is AWS CodeBuild?

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.

How does AWS CodeBuild work?

1- Provide AWS CodeBuild with a build project. A build project file contains information about where to get the source code, the build environment, and how to build the code. The most important component is the BuildSpec file.
2- AWS CodeBuild creates the build environment. A build environment is a combination of OS, programming language runtime, and other tools needed to build.
3- AWS CodeBuild downloads the source code into the build environment and uses the BuildSpec file to run a build. This code can be from any source provider; for example, GitHub repository, Amazon S3 input bucket, Bitbucket repository, or AWS CodeCommit repository.
4- Build artifacts produced are uploaded into an Amazon S3 bucket.
5- The build environment sends a notification about the build status.
6- While the build is running, the build environment sends information to Amazon CloudWatch Logs.

What is AWS CodeDeploy?

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.

With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.

You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.

You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.

AWS CodeDeploy deploys updates in revisions, so if there is an issue during deployment, you can easily roll back and deploy a previous revision.

What is AWS CodeCommit?

AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.

What is AWS Opswork?

AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.

Chef and Puppet enable you to use code to automate your configurations.

AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up your server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining their infrastructure and gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.

AWS OpsWorks for Chef Automate offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes.

AWS OpsWorks Stacks: With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages, languages, or frameworks, and configuring software.


Google Cloud Platform


What are the main advantages of using Google Cloud Platform?

Google Cloud Platform is a medium that provides its users access to the best cloud services and features. It is gaining popularity among cloud professionals as well as users for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others:
● GCP offers much better pricing deals compared to other cloud service providers
● Google Cloud servers allow you to work from anywhere and have access to your information and data
● For hosting cloud services, GCP has overall increased performance and service
● Google Cloud is very fast in providing updates about servers and security in a better and more efficient manner
● The security level of Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with various security measures
If you are going for a Google Cloud interview, you should prepare yourself with enough knowledge of Google Cloud Platform.

Why should you opt for Google Cloud Hosting?

The reason for opting for Google Cloud Hosting is the advantages it offers. Here are the advantages of choosing Google Cloud Hosting:
● Availability of better pricing plans
● Benefits of live migration of the machines
● Enhanced performance and execution
● Commitment to Constant development and expansion
● The private network provides efficiency and maximum uptime
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability

What are the libraries and tools for cloud storage on GCP?

At the core level, the XML API and JSON API are there for cloud storage on Google Cloud Platform. But along with these, Google provides the following options to interact with cloud storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage client libraries, which provide programming support for various languages, including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface for cloud storage

There are also many third-party libraries and tools, such as the Boto library.

What do you know about Google Compute Engine?

Google Compute Engine is the basic component of the Google Cloud Platform.
Google Compute Engine is an IaaS product that offers self-managed and flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines that run on KVM, along with local and durable storage options.
It also includes a REST-based API for control and configuration purposes. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery in order to extend its computational ability and thus create more sophisticated and complex applications.

How are the Google Compute Engine and Google App Engine related?

Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is the IaaS product, whereas Google App Engine is a PaaS product of Google.
Google App Engine is generally used to run web applications, mobile backends, and line-of-business applications. If you want to keep the underlying infrastructure more under your own control, then Compute Engine is a perfect choice. For instance, you can use Compute Engine to implement customized business logic or if you need to run your own storage system.

How does the pricing model work in GCP cloud?

While working on Google Cloud Platform, the user is charged by Google Compute Engine on the basis of compute instances, network use, and storage. Google Cloud bills virtual machines per second, with a minimum of 1 minute. The cost of storage is then charged on the basis of the amount of data that you store.
The cost of the network is calculated per the amount of data transferred between virtual machine instances communicating with each other over the network.

What are the different methods for the authentication of Google Compute Engine API?

This is one of the popular Google Cloud architect interview questions which can be answered as follows. There are different methods for the authentication of Google Compute Engine API:
– Using OAuth 2.0
– Through client library
– Directly with an access token

List some Database services by GCP.

There are many Google Cloud database services that help enterprises manage their data:
● Bare Metal Solution is a relational database type that allows you to migrate or lift and shift specialized workloads to Google Cloud.
● Cloud SQL is a fully managed, reliable, and integrated relational database service for MySQL, MS SQL Server, and PostgreSQL (known as Postgres). It reduces maintenance costs and ensures business continuity.
● Cloud Spanner
● Cloud Bigtable
● Firestore
● Firebase Realtime Database
● Memorystore
● Google Cloud Partner Services
● For more database products, refer to Google Cloud Databases
● For more database solutions, refer to Google Cloud Database solutions

What are the different Network services by GCP?

Google Cloud provides many networking services and technologies that make it easy to scale and manage your network:
● Hybrid connectivity helps to connect your infrastructure to Google Cloud
● Virtual Private Cloud (VPC) manage networking for your resources
● Cloud DNS is a highly available global domain naming system (DNS) network.
● Service Directory provides a service-centric network solution.
● Cloud Load Balancing
● Cloud CDN
● Cloud Armor
● Cloud NAT
● Network Telemetry
● VPC Service Controls
● Network Intelligence Center
● Network Service Tiers
● For more about Networking products refer Google Cloud Networking

List some Data Analytics service by GCP.

Google Cloud offers various data analytics services:
● BigQuery is a multi-cloud data warehouse for business agility that is highly scalable, serverless, and cost-effective.
● Looker
● DataProc is a service for running Apache Spark and Apache Hadoop clusters. It makes open-source data and analytics processing easy, fast, and more secure in the cloud.
● Dataflow
● Pub/Sub
● Cloud Data Fusion
● Data Catalog
● Cloud Composer
● Google Data Studio
● Dataprep
● Cloud Life Sciences enables the life sciences community to manage, process, and transform biomedical data at scale.
● Google Marketing Platform is a marketing platform that combines your advertising and analytics to help you achieve better marketing results, deeper insights, and quality customer connections. It is not an official Google Cloud product and comes under separate terms of service.
● For more Google Cloud analytics services, visit Data Analytics.

Explain Google BigQuery in Google Cloud Platform

A traditional data warehouse requires a hardware setup, and Google BigQuery serves as its replacement: a data warehouse without the hardware. In addition, BigQuery helps in organizing table data into units called datasets.

Explain Auto-scaling in Google cloud computing

Without human intervention, you can automatically provision and start new instances in Google Cloud. Auto-scaling is triggered depending on various metrics and load.

Describe Hypervisor in Google Cloud Platform

A hypervisor is otherwise called a VMM (Virtual Machine Monitor). A hypervisor is computer hardware or software used to create and run virtual machines (a virtual machine is also called a guest machine). The hypervisor is the one that runs on a host machine.

Define VPC in the Google cloud platform

VPC in the Google Cloud Platform helps in providing connectivity from on-premises locations to any region without using the internet. VPC connectivity is available for Compute Engine VM instances, App Engine Flex instances, Kubernetes Engine clusters, and a few other resources, depending on the project. Multiple VPCs can also be used across numerous projects.


References

Steve Nouri

https://www.edureka.co

https://www.kausalvikash.in

https://www.wisdomjobs.com

https://blog.edugrad.com

https://stackoverflow.com

http://www.ezdev.org

https://www.techbeamers.com

https://www.w3resource.com

https://www.javatpoint.com

https://analyticsindiamag.com

Online Interview Questions

https://www.geeksforgeeks.org

https://www.springpeople.com

https://atraininghub.com

https://www.interviewcake.com

https://www.tutorialspoint.com

https://programmingwithmosh.com

https://www.interviewbit.com

https://www.guru99.com

https://hub.packtpub.com

https://www.dataquest.io

https://www.infoworld.com

Don’t do a connection setup per RPC.

Cache things wherever possible.

Write asynchronous code wherever possible.

Exploit eventual consistency wherever possible. Otherwise known as, coordination is expensive so don’t do it unless you have to.

Route your requests sensibly.

Locate processing wherever will result in the best latency. That might mean you need more resources.

Use LIFO queues; they have better tail statistics than FIFO. Queue before load balancing, not after; that way a small fraction of slow requests are much less likely to stall all the processors. Source: Andrew McGregor

What operating system do most servers use in 2022?

Of the 1500 *NIX servers under my control (at a very large Fortune 500 company), 90% are Linux. We have a small amount of HP-UX and AIX left over running legacy applications, but they are being phased out. Most of the applications we used to run on HP-UX and AIX (SAP, Oracle, you name it) now run on Linux. And it's not just my company; it's everywhere.

In 2022, the most widely used server operating system is Linux. Source: Bill Thompson

How do you load multiple files in parallel from an Amazon S3 bucket?

By specifying a file prefix of the file names in the COPY command or specifying the list of files to load in a manifest file.
 

How can you manage the amount of provisioned throughput that is used when copying from an Amazon DynamoDB table?

Set the READRATIO parameter in the COPY command to a percentage of unused throughput.
 

What must you do to use client-side encryption with your own encryption keys when using COPY to load data files that were uploaded to Amazon S3?

You must add the master key value to the credentials string with the ENCRYPTED parameter in the COPY command.

DevOps  and SysOps Breaking News – Top Stories – Jobs

  • What is the hardest technical skill with DevOps for you?
    by /u/ReverendRou (Everything DevOps) on March 18, 2024 at 5:17 pm

    As I was getting into Devops I was nervous before each stage of my learning before touching the topic. E.g before I ever touched Linux, I thought of it as this massive overwhelming topic and feared of touching it. But when I took my first steps, I realised it's super fun and really not that complicated. Same for everything else.. Learning aws, learning docker, learning IaC. I've still got plenty to learn, but for you what part of the Devops tech stack have you found the most difficult to learn? I'm about to adventure into learning Kubernetes and I have that same fear with it being an insanely large overwhelming topic. submitted by /u/ReverendRou [link] [comments]

  • Vagrant - How to connect to VM web server from host browser?
    by /u/alchemizt33 (Everything DevOps) on March 18, 2024 at 5:17 pm

    So here is the Vagrant file: Vagrant.configure("2") do |config| config.vm.box = "bento/ubuntu-22.04" config.vm.provision "shell", path: "script.sh" config.vm.network "private_network", ip: "55.55.55.5", virtualbox__intnet: true config.vm.synced_folder "./html", "/var/www/html" config.vm.hostname = "myhost.local" end The provisioner script just installs apache2, nothing more really: #!/bin/bash sudo apt-get update -y sudo apt-get upgrade -y sudo apt-get install apache2* -y So then I run vagrant up, then vagrant ssh. And run netstat: $ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0 10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.0.2.2 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 10.0.2.3 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 55.55.55.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 So I should be able to point my host webbrowser to 10.0.2.0 or 55.55.55.0 shouldn't I? It doesn't work. Maybe the apache server isn't working properly, so I test it like this: $ wget localhost --2024-03-18 17:23:54-- http://localhost/ Resolving localhost (localhost)... 127.0.0.1 Connecting to localhost (localhost)|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 380 [text/html] Saving to: ‘index.html’ index.html 100%[===================>] 380 --.-KB/s in 0s 2024-03-18 17:23:54 (32.1 MB/s) - ‘index.html’ saved [380/380] So the apache server is working, its serving the index.html file on localhost inside the guest machine. But why can't I access this at myhost.local or 5555.55.0? submitted by /u/alchemizt33 [link] [comments]

  • Maintaining two deployment pipelines (ML & app) sucks ... so we created an OCI-compliant way to package models.
    by /u/iamjessew (Everything DevOps) on March 18, 2024 at 2:45 pm

    (Similar to everyone else) we started diving into ML quite a bit last year, as we did, we quickly found that the process of handing off models between teams is a pain. We first tried working with the larger MLOps tools, this solved some of our problems but ultimately they all use their own packaging mechanism and require engineers to repackage multiple times. This led to two pipelines ... not idea. We then tried Dockerfiles for packaging our models, this worked, but since Docker files are not modular, we couldn't pull them apart like we needed to. We then had a thought, what if we just figured out a way to package our models (essentially a Jupyter Notebook) using in an OCI-compliant way. This has led to us now open sourcing our project as Kit. Kit includes a Kit CLI and ModelKit files. I know this can't be the only solution that has been thought of? And am curious what others are doing or have done in the past. submitted by /u/iamjessew [link] [comments]

  • Add new SAS disks to Azure Stack HCI.
    by /u/kon_won (Sysadmin) on March 18, 2024 at 1:54 pm

    Hi everyone. I am a newbie with Azure Stack HCI. I have a home lab that includes 2 servers; each server has 4 × 960 GB SSDs, and I created S2D with no read cache. After that, I added 2 × 2.4 TB 10K SAS disks to each server. The question is: do I need to add some SSDs for read cache on each server or not? Thank you. submitted by /u/kon_won [link] [comments]

  • Learning DevOps
    by /u/Jv1312 (Everything DevOps) on March 18, 2024 at 1:45 pm

    I am about to graduate with a Master's degree in Cybersecurity as an international student in the USA. As most of the job openings in the field of cybersecurity require 5+ years of experience, I thought of learning DevOps, getting a job, and then transitioning to cybersecurity. My Bachelor's degree is in Computer Science and Engineering. I am asking for a general pathway: what to learn, where to learn it, and which roles to apply for. I don't have any work experience (not even an internship) in my whole academic career. The only work experience I have is being a grader for a database course. Tbh, I am desperate for a job, as I just don't want to live off my parents' money. submitted by /u/Jv1312 [link] [comments]

  • New sites and blogs you recommend?
    by /u/EAsapphire (Sysadmin) on March 18, 2024 at 1:45 pm

    Hello! I'm looking for some news sites and blogs about the field and industry, as well as about Intune or Jamf, that I can read daily to stay up to date and informed. submitted by /u/EAsapphire [link] [comments]

  • Help with VMware fake DNS
    by /u/MrPruttSon (Sysadmin) on March 18, 2024 at 1:43 pm

    So I'm a bit rusty on everything network-related, which means I need help here. I have VMware Workstation set up with one VCSA, a VyOS router, and a Windows machine. What I need to do is use the VyOS router as a DNS server with a specific IP, say 10.10.10.10. The VCSA needs to be on another IP, say 20.20.20.20. And the Windows machine is there to manage the VCSA on the same network. In other words, I need to fake a production environment for a specific task, and the VCSA NEEDS DNS to install correctly. Currently, I have two interfaces in the Virtual Network Editor, one with 10.10.10.0 and one with 20.20.20.0. All networks are /24. I set up eth1 with 20.20.20.1 and added the network to the Windows machine, but I cannot reach the router. Do I need to set the router up with DNS/DHCP first, even though I use static routes? submitted by /u/MrPruttSon [link] [comments]

  • [Support] Outlook 2016 failing to start possibly nuking profiles
    by /u/h04x_fr34k (Sysadmin) on March 18, 2024 at 1:42 pm

    Most of my Outlook 2016 clients are failing to start, with three consecutive error popups (failed to start, not enough system resources, failed to start). This seems to have started with the last Patch Tuesday update; no other changes have been made. Repairing the PST does not do anything, and running an Office repair doesn't fix anything either. The only fix is to create a new profile and re-add all email addresses to it. But then other issues appear, like shared calendar permission problems, and switching back to the original profile then nukes the newly created one. Any suggested fixes and help appreciated. submitted by /u/h04x_fr34k [link] [comments]

  • Is there a "go-to" application that I can run in my local cluster to play with?
    by /u/No_Connection1258 (Everything DevOps) on March 18, 2024 at 1:40 pm

    A lot of the time I set up a KinD cluster and install Argo, Prometheus, and other stuff to tinker with. Currently I'm trying to learn how to use KEDA to see how I can use Prometheus metrics to scale my app, but to test this I need an application that exposes metrics for Prometheus to pick up. Hope I'm asking the right question. Is there an app I can easily deploy to my cluster for this purpose? Thanks. submitted by /u/No_Connection1258 [link] [comments]
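    If nothing off the shelf comes to mind, a tiny metrics generator is enough to exercise KEDA. Below is a minimal sketch using the Python prometheus_client library (an assumption; install it with pip install prometheus-client), which exposes a /metrics endpoint and a counter you can scale on:

        import random
        import time

        from prometheus_client import Counter, start_http_server

        # Fake workload counter; KEDA can scale on a rate() query over this metric.
        REQUESTS = Counter("demo_requests_total", "Total fake requests handled")

        if __name__ == "__main__":
            start_http_server(8000)  # serves /metrics on port 8000
            while True:
                REQUESTS.inc(random.randint(1, 10))  # simulate bursty traffic
                time.sleep(1)

    Containerize it, expose port 8000 so Prometheus scrapes it, then point a KEDA prometheus trigger at something like rate(demo_requests_total[1m]).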

  • MFA app for China users on desktop
    by /u/Accomplished-Cat-698 (Sysadmin) on March 18, 2024 at 1:36 pm

    Hi all, is there a method for people in China to use multi-factor authentication on their desktops that doesn't require the use of a cellphone? Salesforce is requiring MFA but everything I'm reading is telling me that the only way for our China users to comply with MFA is to download an app on their phones. submitted by /u/Accomplished-Cat-698 [link] [comments]

  • Advice for refusing last minute OOH / out of remit work?
    by /u/FuunKeeMonkey (Sysadmin) on March 18, 2024 at 1:34 pm

    Hi all, I've been in IT for 10 years now, currently in an infrastructure engineering role (infra-based for the last 6 years). I should preface that I know the job does require OOH work and flexibility from time to time. That being said, I've been in my current position at the company for just over 10 months, and increasingly there are last-minute requests to patch prior to audits we had no notice of, and to support systems that are not part of the infrastructure remit (door entry). I'm expected to cover the service desk when our only onsite technician is on holiday, even when a team of 4 is available down south. Time and time again, I'm told that this isn't how it should be, but it's only getting worse. To confirm, none of these instances are causing outages or unplanned downtime; we're expected to pick these up under 'all being part of the same team and working together'. My experience of this is people not getting their actual work done, and then having to put in more and more time out of normal hours to keep up. The rest of the team happily take on 3 hours of hotfixing with zero notice; they love the overtime, all openly admitting that they're putting the job before family for the money. This couldn't be further from who I am personally. Further to the above, I'm the only engineer based in my region, meaning even when not on call, I'm expected to act as such for my geographical location. More recently, I've started drawing my line in the sand: 'I've got plans already', 'Sorry, not enough notice', 'I'm not on OOH tonight'. It's being met with more and more awkwardness from the team. While I understand there are always two sides to a story, this seems to be largely down to a lack of management and forethought; everything is a last-minute endeavour, at least from where I stand. I've read about a lot of these nightmarish workplaces here, but this is my first time truly experiencing one first hand. TL;DR: I'm drawing the line at last-minute OOH work and things out of my remit, especially when not on call. How am I best to do this professionally, while still standing my ground and not folding? Inb4: find another job 🙂 Please redirect me to the correct sub if this ain't it 🙂 Edit: I should confirm that we are a team of 5 (myself included); however, I am the only one based in my region. submitted by /u/FuunKeeMonkey [link] [comments]

  • DevOps Security - Data Protection best practices
    by /u/GitProtect (Everything DevOps) on March 18, 2024 at 1:19 pm

    https://gitprotect.io/blog/devops-security-data-protection-best-practices/ submitted by /u/GitProtect [link] [comments]

  • PowerShell script works when run locally but not through Pulseway
    by /u/invest0rZ (Sysadmin) on March 18, 2024 at 1:18 pm

    submitted by /u/invest0rZ [link] [comments]

  • Cybersecurity team staff exempt from device management?
    by /u/lighthills (Sysadmin) on March 18, 2024 at 1:17 pm

    submitted by /u/lighthills [link] [comments]

  • GCP VMs and M365 License
    by /u/sys2024 (Sysadmin) on March 18, 2024 at 12:41 pm

    Hi everyone, I need to build an IT infrastructure for a small hotel with 30 Windows users. The MSP I work with suggested having a DC and all servers on GCP (Google Cloud Platform). They will provide server management, patching, and help desk for the users. Our current laptops and desktops will act as "thin clients," and we will RDP to Google virtual desktops to use the PMS (Property Management System). We will have M365 installed on the endpoints and the virtual desktops. They also suggested using M365 Business Standard licenses. When I asked about MDM, they said I wouldn't really need one since everything can be achieved with GPOs. For endpoint and email security and filtering, they offered Harmony EDR by Checkpoint. M365 Business Premium costs 40% more. Should I push for BP licenses to have Intune? There will be maybe 10 people who need laptops and access from home. (VPN with MFA will be provided) We provide phones for our employees. How important is MDM for the phones? They will only access Outlook. If the DC will be on GCP with BP licenses, how will it need to be set up? Will AD be synced with Entra ID, and then I will be able to enroll endpoints to Intune? I apologize for noob questions; this is all very new to me. Until now, my only experience was with on-premises servers and managed environments. submitted by /u/sys2024 [link] [comments]

  • Windows update event viewer
    by /u/ironclad_network (Sysadmin) on March 18, 2024 at 12:08 pm

    Hi! Sorry for my English, as it is not my native language. We have some servers that are controlled by GPOs that tell the servers when to update and restart. Updates are released via WSUS. With this setup, nothing is really stopping users from opening the Windows Update GUI and clicking "install update". Some of our servers have installed updates and rebooted outside their settings, and I suspect some of the users are updating (either mistakenly or on purpose). What I need is to find events/traces of who clicked "update now" on these servers and when. I'm struggling to find these events in Event Viewer. Does anyone have any tips? I really appreciate any help! submitted by /u/ironclad_network [link] [comments]

  • Mail flow auto-forwarded report is empty
    by /u/EW_IO (Sysadmin) on March 18, 2024 at 12:07 pm

    In the Exchange admin center, when going to mail flow reports and choosing the auto-forwarded messages report, it shows nothing, just an empty report. Have you seen this before? Do I need to do something for the report to show data? submitted by /u/EW_IO [link] [comments]

  • Best Practices for Distributed Deployment Of Grafana Loki?
    by /u/Spirited_Arm_5179 (Everything DevOps) on March 18, 2024 at 11:59 am

    Hi all, we are starting out using Loki for the first time. We have multiple cloud regions across Asia and are wondering what's the best way to deploy Loki. Do we deploy one Loki in each cloud AZ? Or is it better to have one single Loki instance here in HQ and have all AZs forward their logs to HQ? I imagine the latter is simpler, but the network bill will cost a bomb. If it's the first option, then how can we have a single UI that queries across multiple Loki instances (if there's such a thing)? submitted by /u/Spirited_Arm_5179 [link] [comments]
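    On the single-UI question: Grafana can be given each regional Loki as a separate datasource, which is often enough. If you need one programmatic view, here is a rough sketch of fanning the same LogQL query out to several instances over Loki's HTTP range-query API (the endpoint URLs are placeholders, and the requests library is assumed to be installed):

        import requests

        # Placeholder per-region Loki endpoints; substitute your real ones.
        LOKIS = {
            "ap-southeast-1": "http://loki-sg.internal:3100",
            "ap-northeast-1": "http://loki-jp.internal:3100",
        }

        def query_all(logql, start, end):
            # Run the same LogQL query against every regional Loki and merge results.
            results = {}
            for region, base in LOKIS.items():
                resp = requests.get(
                    f"{base}/loki/api/v1/query_range",
                    params={"query": logql, "start": start, "end": end},
                    timeout=10,
                )
                resp.raise_for_status()
                results[region] = resp.json()["data"]["result"]
            return results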

  • Whitelisting Printers in App Control/WDAC
    by /u/Big-Sink9336 (Sysadmin) on March 18, 2024 at 11:56 am

    submitted by /u/Big-Sink9336 [link] [comments]

  • Linux admins, what should I learn next if I don't deal with Linux on a daily basis?
    by /u/hwehwegwhgewe23 (Sysadmin) on March 18, 2024 at 11:46 am

    2 years into this field, I'm a bit of a jack of all trades on-prem: I do VMware, FortiGate, MSA storage, Proxmox MG, Exchange, Veeam, and a little bit of Linux. For Linux it's just basic things, for example setting up a VM, partitioning, and such. Recently I've been learning about LVM and expanding LVM partitions when more space is needed; the commands and the process of expanding are not that hard, but the concept of LVM was a little abstract, so it took some time. What are some other things I should know and learn if I don't deal with Linux on a daily basis: things that are a must-know, so that I can still deal with it when needed? submitted by /u/hwehwegwhgewe23 [link] [comments]

Top 30 AWS Certified Developer Associate Exam Tips

AWS Certified Developer Associate Exam Prep


AWS Certified Developer Associate Exam Prep URLs

Get the free app at:

Android: https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen

iOS: https://apps.apple.com/ca/app/aws-certified-developer-assoc/id1511211095

PRO version with mock exam (Android): https://play.google.com/store/apps/details?id=com.awscertdevassociateexampreppro.enoumen

PRO version with mock exam (iOS): https://apps.apple.com/ca/app/aws-certified-dev-ass-dva-c01/id1506519319t



12. What to study for DVA-C01: Amazon CloudFront.

18. Know what instance types can be launched from which types of AMIs, and which instance types require an HVM AMI.

19. Have a good understanding of how Route 53 supports all of the different DNS record types, and when you would use certain ones over others.

20. Know which services have native encryption at rest within the region, and which do not.

21. Know Kinesis sharding: how shard count determines stream throughput, and how to reshard (see the boto3 sketch after this list).

22. Know how SSL certificates are handled in ELB (wildcard certificates vs SNI).

23. Know the different types of Aurora endpoints.

24. Know the default termination policy for an Auto Scaling group (oldest launch configuration vs instance protection).

25. Use AWS cheat sheets. I also found the cheat sheets provided by Tutorials Dojo very helpful; in my opinion, they are better than Jayendrapatil Patil's blog since they contain more updated information that complements your review notes.

26. Watch the exam readiness 3-hour video; this very recent webinar covers what is expected in the exam.

27. Start off watching Ryan's videos, and try to completely focus on the hands-on work. Take your time to understand what you are trying to learn and achieve in those lab sessions.

28. Do not rush through the videos. Take your time and hone the basics. Focus and spend a lot of time on the backbone of AWS infrastructure: Compute (EC2), Storage (S3/EBS/EFS), Networking (Route 53/Load Balancers), RDS, and VPC. These sections are vast, with lots of concepts to go over and loads to learn. Trust me, you will need to thoroughly understand each one of them to pass the certification comfortably.

29. Make sure you go through the resources section and the AWS documentation for each component, and go over the FAQs. If you have a question, post it in the community; each answer helps you understand more about AWS.

30. Like any other product or service, each AWS offering comes in different flavors. Take EC2 as an example (Spot, Reserved, Dedicated, On-Demand, etc.): make sure you understand what they are and the pros and cons of each flavor. The same applies to all other offerings.

31. Follow Neal K Davis on LinkedIn and read his updates about DVA-C01.
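For tip 21, here is a minimal boto3 sketch of how shard count and resharding work in practice; the stream name is a placeholder, and configured AWS credentials/region are assumed:

    import boto3

    kinesis = boto3.client("kinesis")

    # Each open shard supports up to 1 MB/s (or 1,000 records/s) of writes and
    # 2 MB/s of reads, so total stream throughput scales with the shard count.
    kinesis.create_stream(StreamName="clickstream", ShardCount=2)
    kinesis.get_waiter("stream_exists").wait(StreamName="clickstream")

    # Reshard: double (or halve) capacity with uniform scaling.
    kinesis.update_shard_count(
        StreamName="clickstream",
        TargetShardCount=4,
        ScalingType="UNIFORM_SCALING",
    )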

What is the AWS Certified Developer Associate Exam?

The AWS Certified Developer – Associate examination is intended for individuals who perform a development role and have one or more years of hands-on experience developing and maintaining an AWS-based application. It validates an examinee’s ability to:

  • Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
  • Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS

There are two types of questions on the examination:

  • Multiple-choice: Has one correct response and three incorrect responses (distractors).
  • Multiple-response: Has two or more correct responses out of five or more options.

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective. Unanswered questions are scored as incorrect; there is no penalty for guessing.

To succeed with the real exam, do not memorize the answers below. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.


AWS Certified Developer Associate info and details

The AWS Certified Developer Associate exam is a multiple-choice, multiple-answer exam.



Other Relevant and Recommended AWS Certifications

AWS Certification Exams Roadmap

AWS Developer Associate Exam Whitepapers:

AWS has provided whitepapers to help you understand the technical concepts behind the exam.


What are the corresponding Azure and Google Cloud services for each of the AWS services?




What are the unique distinctions and similarities between AWS, Azure, and Google Cloud services? For each AWS service, what is the equivalent Azure or Google Cloud service? For each Azure service, what is the corresponding Google service? This is a side-by-side comparison of AWS, Azure, and Google Cloud services.


1

Category: Marketplace
Description: Easy-to-deploy and automatically configured third-party applications, including single virtual machine or multiple virtual machine solutions.
References:
[AWS]: AWS Marketplace
[Azure]: Azure Marketplace
[Google]: Google Cloud Marketplace
Tags: #AWSMarketplace, #AzureMarketPlace, #GoogleMarketplace
Differences: All three are digital catalogs with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on the respective cloud platform.


3



Category: AI and machine learning
Description: Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.
References:
[AWS]: Alexa Skills Kit (enables a developer to build skills, also called conversational applications, on the Amazon Alexa artificial intelligence assistant)
[Azure]: Microsoft Bot Framework (for building enterprise-grade conversational AI experiences)
[Google]: Google Assistant Actions (a developer platform that lets you create software to extend the functionality of the Google Assistant, Google's virtual personal assistant)
Tags: #AlexaSkillsKit, #MicrosoftBotFramework, #GoogleAssistant
Differences: One major advantage Google has over Alexa is that Google Assistant is available on almost all Android devices.

4


Category: AI and machine learning
Description: API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.
References:
[AWS]: Amazon Lex (for building conversational interfaces into any application using voice and text)
[Azure]: Azure Speech Services (unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription)
[Google]: Google Api.ai, AI Hub (hosted repository of plug-and-play AI components), AI building blocks (for developers to add sight, language, conversation, and structured data to their applications), AI Platform (code-based data science development environment that lets ML developers and data scientists quickly take projects from ideation to deployment), Dialogflow (a Google-owned developer of human-computer interaction technologies based on natural-language conversations), TensorFlow (open-source machine learning platform)
Tags: #AmazonLex, #CognitiveServices, #AzureSpeech, #Api.ai, #Dialogflow, #TensorFlow
Differences: Api.ai provides a platform that is easy to learn and comprehensive enough to develop conversational actions; it is a good example of a simple approach to the complex human-to-machine communication problem, using natural language processing together with machine learning. Api.ai now supports context-based conversations, which reduces the overhead of handling user context in session parameters; in Lex, context has to be handled in the session. Also, api.ai can be used for both voice and text-based conversations (Assistant actions can be easily created using api.ai).

5

Category: AI and machine learning
Description: Computer vision: extract information from images to categorize and process visual data.
References:
[AWS]: Amazon Rekognition (based on the same proven, highly scalable deep-learning technology developed by Amazon's computer vision scientists to analyze billions of images and videos daily; it requires no machine-learning expertise to use)
[Azure]: Cognitive Services (bring AI within reach of every developer, without requiring machine-learning expertise)
[Google]: Google Vision (offers powerful pre-trained machine learning models through REST and RPC APIs)
Tags: #AmazonRekognition, #GoogleVision, #CognitiveServices
Differences: For now, only Google Cloud Vision supports batch processing; videos are not natively supported by Google Cloud Vision or Amazon Rekognition. The object detection functionality of Google Cloud Vision and Amazon Rekognition is almost identical, both syntactically and semantically. Overall, Google Cloud Vision and Amazon Rekognition offer a broad spectrum of solutions, some of which are comparable in terms of functional details, quality, performance, and costs.

6

Category: Big data and analytics: Data warehouse
Description: Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.
References:
[AWS]: AWS Redshift (scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake), Amazon Redshift Data Lake Export (save query results in an open format), Amazon Redshift Federated Query (run queries on live transactional data), Amazon Redshift RA3 (optimize costs with up to 3x better performance), AQUA: Advanced Query Accelerator for Amazon Redshift (power analytics with a new hardware-accelerated cache), UltraWarm for Amazon Elasticsearch Service (store logs at roughly 1/10th the cost of existing storage tiers)
[Azure]: Azure Synapse, formerly SQL Data Warehouse (limitless analytics service that brings together enterprise data warehousing and big data analytics)
[Google]: BigQuery (RESTful web service that enables interactive analysis of massive datasets, working in conjunction with Google Storage)
Tags: #AWSRedshift, #GoogleBigQuery, #AzureSynapseAnalytics
Differences: Loading data, managing resources (and hence pricing), and ecosystem. Ecosystem is where Redshift is clearly ahead of BigQuery; while BigQuery is an affordable, performant alternative to Redshift, it is considered more up-and-coming.
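As a feel for the Redshift ecosystem point, querying a cluster no longer even requires managing a database connection; here is a hedged sketch using the Redshift Data API via boto3, where the cluster, database, and user names are placeholders:

    import boto3

    rsd = boto3.client("redshift-data")

    # Submit a SQL statement asynchronously, then poll its status.
    stmt = rsd.execute_statement(
        ClusterIdentifier="demo-cluster",   # placeholder
        Database="analytics",               # placeholder
        DbUser="analyst",                   # placeholder
        Sql="SELECT COUNT(*) FROM sales;",
    )
    desc = rsd.describe_statement(Id=stmt["Id"])
    print(desc["Status"])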

7

Category: Big data and analytics
Description: Apache Spark-based analytics platform; managed Hadoop service; data orchestration, ETL, analytics, and visualization.
References:
[AWS]: EMR, Data Pipeline, Kinesis Streams, Kinesis Firehose, Glue, QuickSight, Athena, CloudSearch
[Azure]: Azure Databricks, Data Catalog, Cortana Intelligence, HDInsight, Power BI, Azure Data Factory, Azure Search, Azure Data Lake Analytics, Stream Analytics, Azure Machine Learning
[Google]: Cloud Dataproc, Machine Learning, Cloud Datalab
Tags: #EMR, #DataPipeline, #Kinesis, #Cortana, #AzureDataFactory, #AzureDataLakeAnalytics, #CloudDataproc, #MachineLearning, #CloudDatalab
Differences: All three providers offer similar building blocks: data processing, data orchestration, streaming analytics, machine learning, and visualisations. AWS certainly has all the bases covered with a solid set of products that will meet most needs. Azure offers a comprehensive and impressive suite of managed analytical products, supporting open-source big data solutions alongside newer serverless analytical products such as Data Lake. Google provides its own twist on cloud analytics with its range of services; with Dataproc and Dataflow, Google has a strong core to its proposition. TensorFlow has been getting a lot of attention recently, and many will be keen to see Machine Learning come out of preview.

8

Category: Virtual servers
Description: Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM; users pay for what they use, with the flexibility to change sizes.
Batch: Run large-scale parallel and high-performance computing applications efficiently in the cloud.
References:
[AWS]: Elastic Compute Cloud (EC2), Amazon Braket (explore and experiment with quantum computing), Amazon EC2 M6g instances (achieve up to 40% better price performance), Amazon EC2 Inf1 instances (deliver cost-effective ML inference), AWS Graviton2 processors (optimize price performance for cloud workloads), AWS Batch, AWS Auto Scaling, VMware Cloud on AWS, AWS Local Zones (run low-latency applications at the edge), AWS Wavelength (deliver ultra-low-latency applications for 5G devices), AWS Nitro Enclaves (further protect highly sensitive data), AWS Outposts (run AWS infrastructure and services on-premises)
[Azure]: Azure Virtual Machines, Azure Batch, Virtual Machine Scale Sets, Azure VMware by CloudSimple
[Google]: Compute Engine, Preemptible Virtual Machines, Managed instance groups (MIGs), Google Cloud VMware Solution by CloudSimple
Tags: #AWSEC2, #AWSBatch, #AWSAutoscaling, #AzureVirtualMachine, #AzureBatch, #VirtualMachineScaleSets, #AzureVMWare, #ComputeEngine, #MIGS, #VMWare
Differences: There is very little to choose between the three providers when it comes to virtual servers. Amazon has some impressive high-end kit; on the face of it, this sounds like it would make AWS a clear winner. However, if your only option is to choose the biggest box available, you will need very deep pockets, and perhaps your money would be better spent re-architecting your apps for horizontal scale. Azure remains very strong in the PaaS space and now has an IaaS that can genuinely compete with AWS. Google offers a simple and very capable set of services that are easy to understand, but with availability in only 5 regions it does not have the coverage of the other players.
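To make the virtual-servers row concrete, launching an instance programmatically is a few lines on any of the three providers; here is a minimal boto3 sketch for EC2, where the AMI ID is a placeholder and credentials/region are assumed to be configured:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch one small on-demand instance; ami-12345678 is a placeholder for
    # a real AMI ID in your region (e.g. an Amazon Linux 2 image).
    resp = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])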

9


Category: Containers and container orchestrators
Description: A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments.
References:
[AWS]: EC2 Container Service (ECS), Fargate (run containers without managing servers or clusters), EC2 Container Registry (managed AWS Docker registry service that is secure, scalable, and reliable), Elastic Container Service for Kubernetes (EKS: runs the Kubernetes management infrastructure across multiple AWS Availability Zones), App Mesh (application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure)
[Azure]: Azure Container Instances, Azure Container Registry, Azure Kubernetes Service (AKS), Service Fabric Mesh
[Google]: Google Container Engine, Container Registry, Kubernetes Engine
Tags: #ECS, #Fargate, #EKS, #AppMesh, #ContainerEngine, #ContainerRegistry, #AKS
Differences: Google Container Engine, the AWS container services, and Azure Container Instances can all be used to run Docker containers, and all three providers now offer a managed Kubernetes service (GKE, EKS, AKS).

10

Category: Serverless
Description: Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
References:
[AWS]: AWS Lambda
[Azure]: Azure Functions
[Google]: Google Cloud Functions
Tags: #AWSLambda, #AzureFunctions, #GoogleCloudFunctions
Differences: AWS Lambda, Azure Functions, and Google Cloud Functions all offer dynamic, configurable triggers that you can use to invoke your functions, and all three support Node.js, Python, and C#. The beauty of serverless development is that, with minor changes, the code you write for one service should be portable to another with little effort: simply modify some interfaces and handle any input/output transforms, and an AWS Lambda Node.js function is nearly indistinguishable from a Microsoft Azure Node.js function. AWS Lambda provides further support for Python and Java, while Azure Functions provides support for F# and PHP. AWS Lambda functions run on Amazon Linux, while Microsoft Azure Functions run in a Windows environment; Lambda lets you spin up and tear down individual pieces of functionality in your application at will.
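For a sense of the programming model, here is a minimal Python Lambda handler sketch; the event fields are illustrative, and packaging, runtime selection, and trigger wiring are assumed to be configured separately:

    import json

    def handler(event, context):
        # Lambda calls this with the triggering event (a dict) and a runtime
        # context object; the return value becomes the invocation result.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello {name}"}),
        }

With minor changes to the entry point and the request/response wrapping, much the same function body ports to Azure Functions or Google Cloud Functions.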

11

Category: Relational databases
Description: Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
References:
[AWS]: AWS RDS (managed relational database service supporting engines such as MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server), Aurora (MySQL- and PostgreSQL-compatible relational database built for the cloud)
[Azure]: SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL
[Google]: Cloud SQL
Tags: #AWSRDS, #AWSAurora, #AzureSQLDatabase, #AzureDatabaseforMySQL, #GoogleCloudSQL
Differences: All three providers boast impressive relational database offerings. RDS supports an impressive range of managed relational stores, while Azure SQL Database is probably the most advanced managed relational database available today. Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.

12

Category: NoSQL, document databases
Description: A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.
References:
[AWS]: DynamoDB (key-value and document database that delivers single-digit-millisecond performance at any scale), SimpleDB (a simple web services interface to create and store multiple data sets, query your data easily, and return the results), Managed Cassandra Service (MCS)
[Azure]: Table Storage, DocumentDB, Azure Cosmos DB
[Google]: Cloud Datastore (handles sharding and replication in order to provide you with a highly available and consistent database)
Tags: #AWSDynamoDB, #SimpleDB, #TableStorage, #DocumentDB, #AzureCosmosDB, #GoogleCloudDatastore
Differences: DynamoDB and Cloud Datastore are based on the document-store database model and are therefore similar in nature to the open-source solutions MongoDB and CouchDB; in other words, each database is fundamentally a key-value store. With more workloads moving to the cloud, the need for NoSQL databases will become ever more important, and again all providers have a good range of options to satisfy most performance/cost requirements. Of all the NoSQL products on offer, it's hard not to be impressed by DocumentDB; Azure also has the best out-of-the-box support for cross-region geo-replication across its database offerings.
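For a quick taste of the key-value model, here is a boto3 sketch for DynamoDB, assuming a table named "users" with partition key "user_id" already exists (both names are placeholders):

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("users")  # placeholder table name

    # Writes and reads address items directly by key.
    table.put_item(Item={"user_id": "42", "name": "Ada", "plan": "pro"})
    resp = table.get_item(Key={"user_id": "42"})
    print(resp.get("Item"))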

13

Category: Caching
Description: An in-memory, distributed caching service that provides a high-performance store, typically used to offload non-transactional work from a database.
References:
[AWS]: AWS ElastiCache (works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times)
[Azure]: Azure Cache for Redis (based on the popular software Redis; it is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data stores)
[Google]: Memcache (in-memory key-value store, originally intended for caching)
Tags: #Redis, #Memcached
Differences: They all support horizontal scaling via sharding, and they all improve the performance of web applications by letting you retrieve information from fast in-memory caches instead of relying on slower disk-based databases. ElastiCache supports both Memcached and Redis. Redis offers persistence to disk; Memcache does not (though offerings such as Memcached Cloud add persistence options and remote backups for disaster recovery). This can be very helpful if you cache lots of data, since you avoid the slowness of warming a fully cold cache. Redis also offers several extra data structures that Memcache doesn't (lists, sets, sorted sets, etc.); Memcache only has key/value pairs. Memcache is multi-threaded, while Redis is single-threaded and event-driven; Redis is very fast, but at high scale you can squeeze more connections and transactions out of Memcache. Memcache also tends to be more memory-efficient, which can make a big difference at the magnitude of tens or hundreds of millions of keys.
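Whichever engine you pick, the usage pattern is usually the same cache-aside idiom; here is a minimal Python sketch against Redis, where the redis package and the stubbed database loader are assumptions for illustration:

    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def load_user_from_db(user_id):
        # Hypothetical slow database call, stubbed for the example.
        return {"id": user_id, "name": "Ada"}

    def get_user(user_id):
        # Cache-aside: try the in-memory cache first, fall back to the DB.
        cached = r.get(f"user:{user_id}")
        if cached is not None:
            return json.loads(cached)
        user = load_user_from_db(user_id)
        r.setex(f"user:{user_id}", 300, json.dumps(user))  # cache for 5 minutes
        return user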

14


Category: Security, identity, and access
Description: Authentication and authorization: allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.
References:
[AWS]: Identity and Access Management (IAM), AWS Organizations, Multi-Factor Authentication, AWS Directory Service, Cognito (provides solutions to control access to backend resources from your app), Amazon Detective (investigate potential security issues), AWS IAM Access Analyzer (easily analyze resource accessibility)
[Azure]: Azure Active Directory, Azure Subscription Management + Azure RBAC, Multi-Factor Authentication, Azure Active Directory Domain Services, Azure Active Directory B2C, Azure Policy, Management Groups
[Google]: Cloud Identity, Identity Platform, Cloud IAM, Policy Intelligence, Cloud Resource Manager, Cloud Identity-Aware Proxy, Context-aware access, Managed Service for Microsoft Active Directory, Security key enforcement, Titan Security Key
Tags: #IAM, #AWSIAM, #AzureIAM, #GoogleIAM, #MultiFactorAuthentication
Differences: One unique thing about AWS IAM is that accounts created in the organization (not through federation) can only be used within that organization; this contrasts with Google and Microsoft. On the good side, every organization is self-contained; on the bad side, users can end up with multiple sets of credentials to manage in order to access different organizations. The second unique element is that every user can have a non-interactive account by creating and using access keys, an interactive account by enabling console access, or both. (Side note: to use the CLI, you need to have access keys generated; see the sketch below.)
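On that access-keys side note, here is a minimal boto3 sketch of non-interactive authentication; the key values are placeholders, and in practice you would let boto3 pick up credentials from the environment or ~/.aws/credentials rather than hard-coding them:

    import boto3

    session = boto3.Session(
        aws_access_key_id="AKIA...",       # placeholder; never commit real keys
        aws_secret_access_key="...",        # placeholder
        region_name="us-east-1",
    )
    # STS reports which principal these credentials resolve to.
    print(session.client("sts").get_caller_identity()["Arn"])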

15

Category: Object storage and content delivery
Description: Object storage service for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.
References:
[AWS]: Simple Storage Service (S3), Amazon S3 Access Points (easily manage access for shared data), Import/Export (used to move large amounts of data into and out of the AWS public cloud using portable storage devices for transport), Snowball (petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud), CloudFront (massively scaled, globally distributed content delivery network), Elastic Block Store (EBS: high-performance block storage service), Elastic File System (shared, elastic file storage system that grows and shrinks as you add and remove files), S3 Infrequent Access (IA: for data that is accessed less frequently but requires rapid access when needed), S3 Glacier (long-term storage of data that is infrequently accessed and for which retrieval latencies of 3 to 5 hours are acceptable), AWS Backup (centralizes and automates backup of data across AWS services in the cloud and on-premises via the AWS Storage Gateway), Storage Gateway (hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage), AWS Import/Export Disk (accelerates moving large amounts of data into and out of AWS using portable storage devices for transport)
[Azure]: Azure Blob Storage, File Storage, Data Lake Store, Azure Backup, Azure managed disks, Azure Files, Azure Storage cool tier, Azure Storage archive access tier, StorSimple, Import/Export
[Google]: Cloud Storage, GlusterFS, Cloud CDN
Tags: #S3, #AzureBlobStorage, #CloudStorage
Differences: All providers have good object storage options, so storage alone is unlikely to be a deciding factor when choosing a cloud provider. The exception, perhaps, is hybrid scenarios; in that case Azure and AWS clearly win. AWS and Google's support for automatic versioning is a great feature that is currently missing from Azure; however, Microsoft's fully managed Data Lake Store offers an additional option that will appeal to organisations looking to run large-scale analytical workloads. If you are prepared to wait 4 hours for your data and you have considerable amounts of it, then AWS Glacier storage might be a good option. If you use the common programming patterns for atomic updates and consistency, such as ETags and the If-Match family of headers, be aware that AWS does not support them, though Google and Azure do. Azure also supports blob leasing, which can be used to provide a distributed lock.
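For reference, the basic S3 object operations in a boto3 sketch; the bucket name is a placeholder for one you own:

    import boto3

    s3 = boto3.client("s3")

    # Upload a local file, then read the object back.
    s3.upload_file("report.csv", "my-bucket", "backups/report.csv")
    obj = s3.get_object(Bucket="my-bucket", Key="backups/report.csv")
    print(obj["ContentLength"], "bytes")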

16

Category: Internet of things (IoT)
Description: A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale. Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.
References:
[AWS]: AWS IoT, AWS Greengrass, Kinesis Firehose (captures and loads streaming data into storage and business intelligence tools for near real-time analytics), Kinesis Streams (rapid and continuous data intake and aggregation), AWS IoT Things Graph (makes it easy to visually connect different devices and web services to build IoT applications)
[Azure]: Azure IoT Hub, Azure IoT Edge, Event Hubs, Azure Digital Twins, Azure Sphere
[Google]: Google Cloud IoT Core, Firebase, Brillo, Weave, Cloud Pub/Sub, Stream Analysis, BigQuery, BigQuery Streaming API
Tags: #IoT, #InternetOfThings, #Firebase
Differences: AWS and Azure have a more coherent message, with their products clearly integrated into their respective platforms, whereas Google Firebase feels like a distinctly separate product.

17

Category: Web applications
Description: Managed hosting platforms providing easy-to-use services for deploying and scaling web applications and services; API Gateway is a turnkey solution for publishing APIs to external and internal consumers; CloudFront is a global content delivery network that delivers audio, video, applications, images, and other files.
References:
[AWS]: Elastic Beanstalk (for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS), AWS Wavelength (for delivering ultra-low-latency applications for 5G), API Gateway (makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale), CloudFront (web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, through a worldwide network of data centers called edge locations), Global Accelerator (improves the availability and performance of your applications for local or global users; it provides static IP addresses that act as a fixed entry point to your application endpoints in one or more AWS Regions, such as Application Load Balancers, Network Load Balancers, or EC2 instances), AWS AppSync (simplifies application development by letting you create a flexible GraphQL API, with real-time data synchronization and offline programming features, to securely access, manipulate, and combine data from one or more data sources)
[Azure]: App Service, API Management, Azure Content Delivery Network
[Google]: App Engine, Cloud API, Cloud Endpoints, Apigee
Tags: #AWSElasticBeanstalk, #AzureAppService, #GoogleAppEngine, #CloudEndpoints, #CloudFront, #Apigee
Differences: With AWS Elastic Beanstalk, developers retain full control over the AWS resources powering their application; if they decide they want to manage some (or all) of the elements of their infrastructure, they can do so seamlessly using Elastic Beanstalk's management capabilities. AWS Elastic Beanstalk integrates with more apps than Google App Engine (Datadog, Jenkins, Docker, Slack, GitHub, Eclipse, etc.), while Google App Engine has more built-in features than Elastic Beanstalk (App Identity, a Java runtime, Datastore, Blobstore, Images, a Go runtime, etc.). Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Google Cloud Endpoints, on the other hand, lets you develop, deploy, and manage APIs on any Google Cloud backend: an NGINX-based proxy and distributed architecture provide strong performance and scalability, and, using an OpenAPI Specification or one of Google's API frameworks, Cloud Endpoints gives you tools for every phase of API development, with insight via Cloud Monitoring, Cloud Trace, and Cloud Logging.

18

Category: Encryption
Description: Helps you protect and safeguard your data and meet your organizational security and compliance commitments.
References:
[AWS]: Key Management Service (AWS KMS), CloudHSM
[Azure]: Key Vault
[Google]: Encryption by default at rest, Cloud KMS
Tags: #AWSKMS, #Encryption, #CloudHSM, #EncryptionAtRest, #CloudKMS
Differences: AWS KMS is an ideal solution for organizations that want to manage encryption keys in conjunction with other AWS services. In contrast to AWS CloudHSM, AWS KMS provides a complete set of tools to manage encryption keys, develop applications, and integrate with other AWS services. Google and Azure offer 4096-bit RSA; AWS and Google offer 256-bit AES. With AWS, you can bring your own key.
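A minimal boto3 sketch of a direct encrypt/decrypt round trip with AWS KMS; the key alias is a placeholder for a symmetric KMS key you have created:

    import boto3

    kms = boto3.client("kms")

    # Direct KMS encryption is meant for small payloads (up to 4 KB);
    # larger data is typically handled with envelope encryption.
    ciphertext = kms.encrypt(
        KeyId="alias/my-app-key",          # placeholder alias
        Plaintext=b"secret config value",
    )["CiphertextBlob"]

    # For symmetric keys, decrypt infers the key from the ciphertext blob.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    assert plaintext == b"secret config value"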



21

Category: Backend process logic
Description: Cloud technology for building distributed applications using out-of-the-box connectors to reduce integration challenges; connect apps, data, and devices on-premises or in the cloud.
References:
[AWS]: AWS Step Functions (lets you build visual workflows that enable fast translation of business requirements into technical requirements; you can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code)
[Azure]: Logic Apps (cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations)
[Google]: Dataflow (fully managed service for executing Apache Beam pipelines within the Google Cloud Platform ecosystem)
Tags: #AWSStepFunctions, #LogicApps, #Dataflow
Differences: AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows; building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions belongs to the "cloud task management" category of the tech stack, while Google Cloud Dataflow is primarily classified under "real-time data processing". According to the StackShare community, Google Cloud Dataflow has broader approval, being mentioned in 32 company stacks and 8 developer stacks, compared to AWS Step Functions, which is listed in 19 company stacks and 7 developer stacks.
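Kicking off a Step Functions workflow from code is a one-liner once the state machine exists; here is a boto3 sketch with a placeholder state machine ARN and input payload:

    import boto3
    import json

    sfn = boto3.client("stepfunctions")

    resp = sfn.start_execution(
        stateMachineArn=(
            "arn:aws:states:us-east-1:123456789012:stateMachine:order-flow"
        ),  # placeholder ARN
        input=json.dumps({"orderId": "42"}),
    )
    print(resp["executionArn"])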

22

Category: Enterprise application services
Description: Fully integrated cloud service providing communications, email, and document management in the cloud, available on a wide variety of devices.
References:
[AWS]: Amazon WorkMail, Amazon WorkDocs, Amazon Kendra (sync and index)
[Azure]: Office 365
[Google]: G Suite
Tags: #AmazonWorkDocs, #Office365, #GoogleGSuite
Differences: G Suite document-processing applications like Google Docs are far behind Office 365's popular Word and Excel software, but the G Suite user interface is intuitive, simple, and easy to navigate, while Office 365 can feel clunky.

23

Category: Networking
Description: Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.
References:
[AWS]: Virtual Private Cloud (VPC), cloud virtual networking, Subnets, Elastic Network Interface (ENI), Route Tables, Network ACLs, Security Groups, Internet Gateway, NAT Gateway, AWS VPN Gateway, AWS Route 53, AWS Direct Connect, AWS Network Load Balancer, VPN CloudHub, AWS Local Zones, AWS Transit Gateway network manager (centrally manage global networks)
[Azure]: Virtual Network (provides services for building networks within Azure), Subnets (network resources can be grouped by subnet for organisation and security), Network Interface (each virtual machine can be assigned one or more network interfaces (NICs)), Network Security Groups (NSG: contains a set of prioritised ACL rules that explicitly grant or deny access), Azure VPN Gateway (allows connectivity to on-premises networks), Azure DNS, Traffic Manager (DNS-based traffic-routing solution), ExpressRoute (provides connections of up to 10 Gbps to Azure services over a dedicated fibre connection), Azure Load Balancer, Network Peering, Azure Stack (allows organisations to use Azure services running in private data centers), Azure Log Analytics
[Google]: Cloud Virtual Network, Subnets, Network Interface, Protocol forwarding, Cloud VPN, Cloud DNS, Cloud Interconnect, CDN Interconnect, Stackdriver, Google Cloud Load Balancing
Tags: #VPC, #Subnets, #ACL, #VPNGateway, #CloudVPN, #NetworkInterface, #ENI, #RouteTables, #NSG, #NetworkACL, #InternetGateway, #NatGateway, #ExpressRoute, #CloudInterconnect, #Stackdriver
Differences: Subnets group related resources; however, unlike AWS and Azure, Google does not constrain the private IP address ranges of subnets to the address space of the parent network. Like Azure, Google has a built-in internet gateway that can be referenced from routing rules.

24

Category: Management
Description: A unified management console that simplifies building, deploying, and operating your cloud resources.
References:
[AWS]: AWS Management Console, Trusted Advisor, AWS Usage and Billing Report, AWS Application Discovery Service, Amazon EC2 Systems Manager, AWS Personal Health Dashboard, AWS Compute Optimizer (identify optimal AWS compute resources)
[Azure]: Azure portal, Azure Advisor, Azure Billing API, Azure Migrate, Azure Monitor, Azure Resource Health
[Google]: Google Cloud Console, Cost Management, Security Command Center, Stackdriver
Tags: #AWSConsole, #AzurePortal, #GoogleCloudConsole, #TrustedAdvisor, #AzureMonitor, #SecurityCommandCenter
Differences: The AWS Console categorizes its Infrastructure-as-a-Service offerings into Compute, Storage and Content Delivery Network (CDN), Database, and Networking to help businesses and individuals grow. Azure excels in the hybrid cloud space, allowing companies to integrate on-site servers with cloud offerings. Google has a strong offering in containers, since Google developed the Kubernetes standard that AWS and Azure now offer, and GCP specializes in high-compute offerings like big data, analytics, and machine learning; it also offers considerable scale and load balancing. Google knows data centers and fast response times.

25

Category: DevOps and application monitoring
Description: Comprehensive solutions for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments; cloud services for collaborating on code development; collections of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services; fully managed build services that support continuous integration and deployment.
References:
[AWS]: AWS CodePipeline (orchestrates workflows for continuous integration, continuous delivery, and continuous deployment), AWS CloudWatch (monitor your AWS resources and the applications you run on AWS in real time), AWS X-Ray (application performance management service that enables a developer to analyze and debug applications on AWS), AWS CodeDeploy (automates code deployments to Elastic Compute Cloud (EC2) and on-premises servers), AWS CodeCommit (source code storage and version-control service), AWS Developer Tools, AWS CodeBuild (continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy), AWS Command Line Interface (unified tool to manage your AWS services), AWS OpsWorks (Chef-based), AWS CloudFormation (provides a common language for describing and provisioning all the infrastructure resources in your cloud environment), Amazon CodeGuru (automated code reviews and application performance recommendations)
[Azure]: Azure Monitor, Azure DevOps, Azure Developer Tools, Azure CLI, Azure PowerShell, Azure Automation, Azure Resource Manager, VM extensions
[Google]: DevOps Solutions (infrastructure as code, configuration management, secrets management, serverless computing, continuous delivery, continuous integration), Stackdriver (combines metrics, logs, and metadata from all of your cloud accounts and projects into a single comprehensive view of your environment)
Tags: #CloudWatch, #Stackdriver, #AzureMonitor, #AWSXRay, #AWSCodeDeploy, #AzureDevOps, #GoogleDevOpsSolutions
Differences: CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. Azure DevOps provides unlimited private Git hosting, cloud build for continuous integration, agile planning, and release management for continuous delivery to the cloud and on-premises, and includes broad IDE support.

SageMaker | Azure Machine Learning Studio

A collaborative, drag-and-drop tool to build, test, and deploy predictive analytics solutions on your data.

Alexa Skills Kit | Microsoft Bot Framework

Build and connect intelligent bots that interact with your users using text/SMS, Skype, Teams, Slack, Office 365 mail, Twitter, and other popular services.

Amazon Lex | Speech Services

API capable of converting speech to text, understanding intent, and converting text back to speech for natural responsiveness.

Amazon Lex | Language Understanding (LUIS)

Allows your applications to understand user commands contextually.

Amazon Polly, Amazon Transcribe | Azure Speech Services

Enables both speech-to-text and text-to-speech capabilities.
The Speech Services are the unification of speech-to-text, text-to-speech, and speech-translation into a single Azure subscription. It’s easy to speech enable your applications, tools, and devices with the Speech SDK, Speech Devices SDK, or REST APIs.
Amazon Polly is a Text-to-Speech (TTS) service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. With dozens of lifelike voices across a variety of languages, you can select the ideal voice and build speech-enabled applications that work in many different countries.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. Using the Amazon Transcribe API, you can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech.
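
As a rough illustration of the Polly side, here is a minimal boto3 sketch; the voice ID, sample text, and output file name are arbitrary choices for the example, not prescribed by the service:

    import boto3

    polly = boto3.client('polly')

    # Synthesize a short phrase into an MP3 byte stream
    response = polly.synthesize_speech(
        Text='Hello from Amazon Polly!',
        OutputFormat='mp3',
        VoiceId='Joanna',  # any supported voice ID works here
    )

    # AudioStream is a streaming body; write it out as a playable file
    with open('hello.mp3', 'wb') as f:
        f.write(response['AudioStream'].read())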

Amazon Rekognition | Cognitive Services

Computer Vision: Extract information from images to categorize and process visual data.
Amazon Rekognition is a simple and easy to use API that can quickly analyze any image or video file stored in Amazon S3. Amazon Rekognition is always learning from new data, and we are continually adding new labels and facial recognition features to the service.

Face: Detect, identify, and analyze faces in photos.

Emotions: Recognize emotions in images.
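
As a rough illustration, a minimal boto3 call to label an image already stored in S3 might look like the sketch below; the bucket and object names are placeholders:

    import boto3

    rekognition = boto3.client('rekognition')

    # Detect up to 10 labels with at least 80% confidence
    response = rekognition.detect_labels(
        Image={'S3Object': {'Bucket': 'my-bucket', 'Name': 'photo.jpg'}},  # placeholders
        MaxLabels=10,
        MinConfidence=80,
    )

    for label in response['Labels']:
        print(label['Name'], round(label['Confidence'], 1))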

Alexa Skills Kit | Azure Virtual Assistant

The Virtual Assistant Template brings together a number of best practices we’ve identified through the building of conversational experiences and automates integration of components that we’ve found to be highly beneficial to Bot Framework developers.

Big data and analytics

Data warehouse

AWS Redshift | SQL Data Warehouse

Cloud-based Enterprise Data Warehouse (EDW) that uses Massively Parallel Processing (MPP) to quickly run complex queries across petabytes of data.

Big data processing

EMR | Azure Databricks
Apache Spark-based analytics platform.

EMR | HDInsight

Managed Hadoop service. Deploy and manage Hadoop clusters in Azure.

Data orchestration / ETL

AWS Data Pipeline, AWS Glue | Data Factory

Processes and moves data between different compute and storage services, as well as on-premises data sources at specified intervals. Create, schedule, orchestrate, and manage data pipelines.

AWS Glue | Data Catalog

A fully managed service that serves as a system of registration and system of discovery for enterprise data sources

Analytics and visualization

AWS Kinesis Analytics | Stream Analytics

Data Lake Analytics | Data Lake Store

Storage and analysis platforms that create insights from large quantities of data, or data that originates from many sources.

QuickSight | Power BI

Business intelligence tools that build visualizations, perform ad hoc analysis, and develop business insights from data.

CloudSearch | Azure Search

Delivers full-text search and related search analytics and capabilities.

Amazon Athena | Azure Data Lake Analytics

Provides a serverless interactive query service that uses standard SQL for analyzing databases.

Compute

Virtual servers

Elastic Compute Cloud (EC2) | Azure Virtual Machines

Virtual servers allow users to deploy, manage, and maintain OS and server software. Instance types provide combinations of CPU/RAM. Users pay for what they use with the flexibility to change sizes.

AWS Batch | Azure Batch

Run large-scale parallel and high-performance computing applications efficiently in the cloud.

AWS Auto Scaling | Virtual Machine Scale Sets

Allows you to automatically change the number of VM instances. You set defined metrics and thresholds that determine whether the platform adds or removes instances.

VMware Cloud on AWS | Azure VMware Solution by CloudSimple

Redeploy and extend your VMware-based enterprise workloads to Azure with Azure VMware Solution by CloudSimple. Keep using the VMware tools you already know to manage workloads on Azure without disrupting network, security, or data protection policies.

Containers and container orchestrators

EC2 Container Service (ECS), Fargate | Azure Container Instances

Azure Container Instances is the fastest and simplest way to run a container in Azure, without having to provision any virtual machines or adopt a higher-level orchestration service.

EC2 Container Registry | Azure Container Registry

Allows customers to store Docker formatted images. Used to create all types of container deployments on Azure.

Elastic Container Service for Kubernetes (EKS) | Azure Kubernetes Service (AKS)

Deploy orchestrated containerized applications with Kubernetes. Simplify monitoring and cluster management through auto upgrades and a built-in operations console.

App Mesh | Service Fabric Mesh

Fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking.
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.

Serverless

AWS Lambda | Azure Functions

Integrate systems and run backend processes in response to events or schedules without provisioning or managing servers.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code

Database

Relational database

AWS RDS | SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL

Managed relational database service where resiliency, scale, and maintenance are primarily handled by the platform.
Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications. Administration processes like patching the database software, backing up databases and enabling point-in-time recovery are managed automatically. Scaling storage and compute resources can be performed by a single API call as AWS does not offer an ssh connection to RDS instances.
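
To make the "single API call" point concrete, here is a hedged boto3 sketch that scales both storage and compute for an RDS instance; the instance identifier, storage size, and instance class are placeholders:

    import boto3

    rds = boto3.client('rds')

    # Scale storage and compute in one ModifyDBInstance call
    rds.modify_db_instance(
        DBInstanceIdentifier='my-database',  # placeholder instance name
        AllocatedStorage=200,                # new storage size in GiB
        DBInstanceClass='db.m5.large',       # new instance class
        ApplyImmediately=True,               # otherwise applied in the next maintenance window
    )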

NoSQL / Document

DynamoDB and SimpleDB | Azure Cosmos DB

A globally distributed, multi-model database that natively supports multiple data models: key-value, documents, graphs, and columnar.

Caching

AWS ElastiCache | Azure Cache for Redis

An in-memory, distributed caching service that provides a high-performance store typically used to offload non-transactional work from a database.
Amazon ElastiCache is a fully managed in-memory data store and cache service by Amazon Web Services. The service improves the performance of web applications by retrieving information from managed in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.
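
A common way to use such a cache is the cache-aside pattern. The sketch below assumes the redis-py client is installed; the endpoint, key format, TTL, and load_user_from_database helper are all hypothetical:

    import json
    import redis  # redis-py client

    # Placeholder ElastiCache for Redis endpoint
    cache = redis.Redis(host='my-cache.abc123.use1.cache.amazonaws.com', port=6379)

    def load_user_from_database(user_id):
        # Hypothetical stand-in for a slower, disk-based database query
        return {'id': user_id, 'name': 'example'}

    def get_user(user_id):
        key = f'user:{user_id}'
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)            # cache hit
        user = load_user_from_database(user_id)  # cache miss: go to the database
        cache.setex(key, 300, json.dumps(user))  # keep the result for five minutes
        return user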

Database migration

AWS Database Migration Service | Azure Database Migration Service

Migration of database schema and data from one database format to a specific database technology in the cloud.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

DevOps and application monitoring

AWS CloudWatch, AWS X-Ray | Azure Monitor

Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
AWS X-Ray is an application performance management service that enables a developer to analyze and debug applications in the Amazon Web Services (AWS) public cloud. A developer can use AWS X-Ray to visualize how a distributed application is performing during development or production, and across multiple AWS regions and accounts.
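
As a small illustration of the CloudWatch side, publishing a custom application metric with boto3 can look like the sketch below; the namespace, metric name, and value are invented for the example:

    import boto3

    cloudwatch = boto3.client('cloudwatch')

    # Publish one data point for a custom latency metric
    cloudwatch.put_metric_data(
        Namespace='MyApp',  # hypothetical namespace
        MetricData=[{
            'MetricName': 'PageLoadTime',  # hypothetical metric
            'Value': 123.0,
            'Unit': 'Milliseconds',
        }],
    )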

AWS CodeDeploy, AWS CodeCommit, AWS CodePipeline | Azure DevOps

A cloud service for collaborating on code development.
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
AWS CodeCommit is a source code storage and version-control service for Amazon Web Services’ public cloud customers. CodeCommit was designed to help IT teams collaborate on software development, including continuous integration and application delivery.

AWS Developer Tools | Azure Developer Tools

Collection of tools for building, debugging, deploying, diagnosing, and managing multiplatform scalable apps and services.
The AWS Developer Tools are designed to help you build software like Amazon. They facilitate practices such as continuous delivery and infrastructure as code for serverless, containers, and Amazon EC2.

AWS CodeBuild | Azure DevOps

Fully managed build service that supports continuous integration and deployment.

AWS Command Line Interface | Azure CLI, Azure PowerShell

Built on top of the native REST API across all cloud services, various programming language-specific wrappers provide easier ways to create solutions.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

AWS OpsWorks (Chef-based) | Azure Automation

Configures and operates applications of all shapes and sizes, and provides templates to create and manage a collection of resources.
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.

AWS CloudFormation | Azure Resource Manager, VM extensions, Azure Automation

Provides a way for users to automate the manual, long-running, error-prone, and frequently repeated IT tasks.
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.
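
Provisioning from that text file can itself be automated. A minimal boto3 sketch, assuming a template.yaml file exists locally; the stack name is a placeholder:

    import boto3

    cfn = boto3.client('cloudformation')

    # Read a local template file (assumed to exist for this example)
    with open('template.yaml') as f:
        template_body = f.read()

    cfn.create_stack(
        StackName='my-stack',      # placeholder stack name
        TemplateBody=template_body,
        OnFailure='ROLLBACK',      # the default: roll back the whole stack on failure
    )

    # Block until stack creation finishes
    cfn.get_waiter('stack_create_complete').wait(StackName='my-stack')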

Networking


Cloud virtual networking

Virtual Private Cloud (VPC) | Virtual Network

Provides an isolated, private environment in the cloud. Users have control over their virtual networking environment, including selection of their own IP address range, creation of subnets, and configuration of route tables and network gateways.

Cross-premises connectivity

AWS VPN Gateway | Azure VPN Gateway

Connects Azure virtual networks to other Azure virtual networks, or customer on-premises networks (Site To Site). Allows end users to connect to Azure services through VPN tunneling (Point To Site).

DNS management

AWS Route 53 | Azure DNS

Manage your DNS records using the same credentials and billing and support contract as your other Azure services

Route 53 | Traffic Manager

A service that hosts domain names, plus routes users to Internet applications, connects user requests to datacenters, manages traffic to apps, and improves app availability with automatic failover.

Dedicated network

AWS Direct Connect | ExpressRoute

Establishes a dedicated, private network connection from a location to the cloud provider (not over the Internet).

Load balancing

AWS Network Load Balancer | Azure Load Balancer

Azure Load Balancer load-balances traffic at layer 4 (TCP or UDP).

Application Load Balancer | Application Gateway

Application Gateway is a layer 7 load balancer. It supports SSL termination, cookie-based session affinity, and round robin for load-balancing traffic.

Internet of things (IoT)

AWS IoT | Azure IoT Hub

A cloud gateway for managing bidirectional communication with billions of IoT devices, securely and at scale.

AWS Greengrass | Azure IoT Edge

Deploy cloud intelligence directly on IoT devices to run in on-premises scenarios.

Kinesis Firehose, Kinesis Streams | Event Hubs

Services that allow the mass ingestion of small data inputs, typically from devices and sensors, to process and route the data.

AWS IoT Things Graph | Azure Digital Twins

Azure Digital Twins is an IoT service that helps you create comprehensive models of physical environments. Create spatial intelligence graphs to model the relationships and interactions between people, places, and devices. Query data from a physical space rather than disparate sensors.

Management

Trusted Advisor | Azure Advisor

Provides analysis of cloud resource configuration and security so subscribers can ensure they’re making use of best practices and optimum configurations.

AWS Usage and Billing Report | Azure Billing API

Services to help generate, monitor, forecast, and share billing data for resource usage by time, organization, or product resources.

AWS Management Console | Azure portal

A unified management console that simplifies building, deploying, and operating your cloud resources.

AWS Application Discovery Service | Azure Migrate

Assesses on-premises workloads for migration to Azure, performs performance-based sizing, and provides cost estimations.

Amazon EC2 Systems Manager | Azure Monitor

Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.

AWS Personal Health Dashboard | Azure Resource Health

Provides detailed information about the health of resources as well as recommended actions for maintaining resource health.

Security, identity, and access

Authentication and authorization

Identity and Access Management (IAM) | Azure Active Directory

Allows users to securely control access to services and resources while offering data security and protection. Create and manage users and groups, and use permissions to allow and deny access to resources.

Identity and Access Management (IAM) | Azure Role Based Access Control

Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

AWS Organizations | Azure Subscription Management + Azure RBAC

Security policy and role management for working with multiple accounts.

Multi-Factor Authentication | Multi-Factor Authentication

Safeguard access to data and applications while meeting user demand for a simple sign-in process.

AWS Directory Service | Azure Active Directory Domain Services

Provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory.

Cognito | Azure Active Directory B2C

A highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities.

AWS Organizations | Azure Policy

Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements.

AWS Organizations | Management Groups

Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale, no matter what type of subscriptions you have.

Encryption

Server-side encryption with Amazon S3 Key Management Service | Azure Storage Service Encryption

Helps you protect and safeguard your data and meet your organizational security and compliance commitments.

AWS Key Management Service (KMS), CloudHSM | Key Vault

Provides security solution and works with other services by providing a way to manage, create, and control encryption keys stored in hardware security modules (HSM).

Firewall

Web Application Firewall | Application Gateway – Web Application Firewall

A firewall that protects web applications from common web exploits.

Web Application Firewall | Azure Firewall

Provides inbound protection for non-HTTP/S protocols, outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.

Security

Inspector | Security Center

An automated security assessment service that improves the security and compliance of applications. Automatically assess applications for vulnerabilities or deviations from best practices.

Certificate Manager | App Service Certificates available on the Portal

Service that allows customers to create, manage, and consume certificates seamlessly in the cloud.

GuardDuty | Azure Advanced Threat Protection

Detect and investigate advanced attacks on-premises and in the cloud.

AWS Artifact | Service Trust Portal

Provides access to audit reports, compliance guides, and trust documents from across cloud services.

AWS Shield | Azure DDoS Protection Service

Provides cloud services with protection from distributed denial of service (DDoS) attacks.

Storage

Object storage

Simple Storage Services (S3) | Azure Blob storage

Object storage service, for use cases including cloud applications, content distribution, backup, archiving, disaster recovery, and big data analytics.

Virtual server disks

Elastic Block Store (EBS) | Azure managed disks

SSD storage optimized for I/O intensive read/write operations. For use as high-performance Azure virtual machine storage.

Shared files

Elastic File System | Azure Files

Provides a simple interface to create and configure file systems quickly, and share common files. Can be used with traditional protocols that access files over a network.

Archiving and backup

S3 Infrequent Access (IA) | Azure Storage cool tier

Cool storage is a lower-cost tier for storing data that is infrequently accessed and long-lived.

S3 Glacier | Azure Storage archive access tier

Archive storage has the lowest storage cost and higher data retrieval costs compared to hot and cool storage.

AWS Backup | Azure Backup

Back up and recover files and folders from the cloud, and provide offsite protection against data loss.

Hybrid storage

Storage Gateway | StorSimple

Integrates on-premises IT environments with cloud storage. Automates data management and storage, plus supports disaster recovery.

Bulk data transfer

AWS Import/Export Disk | Import/Export

A data transport solution that uses secure disks and appliances to transfer large amounts of data. Also offers data protection during transit.

AWS Import/Export Snowball, Snowball Edge, Snowmobile | Azure Data Box

Petabyte- to exabyte-scale data transport solution that uses secure data storage devices to transfer large amounts of data to and from Azure.

Web applications

Elastic Beanstalk | App Service

Managed hosting platform providing easy to use services for deploying and scaling web applications and services.

API Gateway | API Management

A turnkey solution for publishing APIs to external and internal consumers.

CloudFront | Azure Content Delivery Network

A global content delivery network that delivers audio, video, applications, images, and other files.

Global Accelerator | Azure Front Door

Easily join your distributed microservice architectures into a single global application using HTTP load balancing and path-based routing rules. Automate turning up new regions and scaling out with API-driven global actions, and add independent fault tolerance for your back-end microservices in Azure or anywhere.

Miscellaneous

Backend process logic

AWS Step Functions | Logic Apps

Cloud technology to build distributed applications using out-of-the-box connectors to reduce integration challenges. Connect apps, data and devices on-premises or in the cloud.

Enterprise application services

Amazon WorkMail, Amazon WorkDocs | Office 365

Fully integrated Cloud service providing communications, email, document management in the cloud and available on a wide variety of devices.

Gaming

GameLift, GameSparks | PlayFab

Managed services for hosting dedicated game servers.

Media transcoding

Elastic Transcoder | Media Services

Services that offer broadcast-quality video streaming services, including various transcoding technologies.

Workflow

Simple Workflow Service (SWF) | Logic Apps

Serverless technology for connecting apps, data and devices anywhere, whether on-premises or in the cloud for large ecosystems of SaaS and cloud-based connectors.

Hybrid

Outposts | Azure Stack

Azure Stack is a hybrid cloud platform that enables you to run Azure services in your company’s or service provider’s datacenter. As a developer, you can build apps on Azure Stack. You can then deploy them to either Azure Stack or Azure, or you can build truly hybrid apps that take advantage of connectivity between an Azure Stack cloud and Azure.

How does a business decide between Microsoft Azure or AWS?

Basically, it all comes down to what your organizational needs are and whether there is a particular area that is especially important to your business (for example, serverless capabilities, or integration with Microsoft applications).

The main considerations are compute options, pricing, and purchasing options.

Here’s a brief comparison of the compute option features across cloud providers:

Here’s an example of a few instances’ costs (all are Linux OS):

Each provider offers a variety of options to lower costs from the listed On-Demand prices. These fall under reservations, spot and preemptible instances, and contracts.

Both AWS and Azure offer a way for customers to purchase compute capacity in advance in exchange for a discount: AWS Reserved Instances and Azure Reserved Virtual Machine Instances. There are a few interesting variations between the instances across the cloud providers which could affect which is more appealing to a business.

Another discounting mechanism is the idea of spot instances in AWS and low-priority VMs in Azure. These options allow users to purchase unused capacity for a steep discount.

With AWS and Azure, enterprise contracts are available. These are typically aimed at enterprise customers, and encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – for example, AWS EDPs and Azure Enterprise Agreements.

You can read more about the differences between AWS and Azure, to help decide which your business should use, in this blog post.

Source: AWS to Azure services comparison – Azure Architecture

AWS Certification Exams Prep: Serverless Facts and Summaries and Questions and Answers

AWS Serverless


AWS Serverless – Facts and summaries, Top 20 AWS Serverless Questions and Answers Dump

Definition 1: Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It can be a form of utility computing.
Definition 2: AWS Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.

AWS Serverless Facts and summaries

AWS Serverless explained graphically


  1. The AWS Serverless Application Model (AWS SAM) is a model to define serverless applications. AWS SAM is natively supported by AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.
  2. You can use AWS CodePipeline with the AWS Serverless Application Model to automate building, testing, and deploying serverless applications. AWS CodeBuild integrates with CodePipeline to provide automated builds. You can use AWS CodeDeploy to gradually roll out and test new Lambda function versions.
  3. You can monitor and troubleshoot the performance of your serverless applications and AWS Lambda functions with AWS services and third-party tools. Amazon CloudWatch helps you see real-time reporting metrics and logs for your serverless applications. You can use AWS X-Ray to debug and trace your serverless applications and AWS Lambda.
  4. The AWS Serverless Application Repository is a managed repository for serverless applications. It enables teams, organizations, and individual developers to store and share reusable applications, and easily assemble and deploy serverless architectures in powerful new ways. Using the Serverless Application Repository, you don’t need to clone, build, package, or publish source code to AWS before deploying it. Instead, you can use pre-built applications from the Serverless Application Repository in your serverless architectures, helping you and your teams reduce duplicated work, ensure organizational best practices, and get to market faster.
  5. Anyone with an AWS account can publish a serverless application to the Serverless Application Repository. Applications can be privately shared with specific AWS accounts. Applications that are shared publicly include a link to the application’s source code so others can view what the application does and how it works.
  6. What kinds of applications are available in the AWS Serverless Application Repository? The AWS Serverless Application Repository includes applications for Alexa Skills, chatbots, data processing, IoT, real time stream processing, web and mobile back-ends, social media trend analysis, image resizing, and more from publishers on AWS.
  7. The AWS Serverless Application Repository enables developers to publish serverless applications developed in a GitHub repository. Using AWS CodePipeline to link a GitHub source with the AWS Serverless Application Repository can make the publishing process even easier, and the process can be set up in minutes.
  8. What two arguments does a Python Lambda handler function require?
    Event and Context (a minimal handler is sketched after this list).
  9. A Lambda deployment package contains the function code and any libraries not included within the runtime environment.
  10. When referencing the remaining time left for a Lambda function to run within the function’s code, you would use the context object.
  11. Long-running, memory-intensive workloads are the LEAST suited to AWS Lambda.
  12. The maximum execution duration of a Lambda function is fifteen minutes.
  13. Logs for Lambda functions are stored in AWS CloudWatch.
  14. Docker container images are constructed using instructions in a file called a Dockerfile.
  15. The ECS Task Agent is responsible for starting and stopping tasks. It runs inside the EC2 instance and reports information like running tasks and resource utilization.
  16. AWS ECR stores container images.
  17. Elastic Beanstalk is used to deploy and scale web applications and services developed with a supported platform.
  18. When deploying a simple Python web application with Elastic Beanstalk, which AWS resources will be created and managed for you by Elastic Beanstalk?
    An Elastic Load Balancer, an S3 bucket, and an EC2 instance.
  19. When using Elastic Beanstalk you can deploy your web applications by:

    • Configuring a git repository with Elastic Beanstalk so that changes will be detected and your application will be updated.
    • Uploading code files to the Elastic Beanstalk service
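
To make facts 8, 10 and 13 concrete, here is a minimal Python handler sketch; the print output lands in the function's CloudWatch Logs group:

    def lambda_handler(event, context):
        # Fact 8: 'event' carries the trigger payload, 'context' the runtime metadata
        print('Request ID:', context.aws_request_id)
        # Fact 10: the context object reports the remaining execution time
        print('Time left (ms):', context.get_remaining_time_in_millis())
        return {'statusCode': 200}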

Reference: AWS Serverless


AWS LAMBDA EXPLAINED GRAPHICALLY

AWS Serverless: Top 20 Questions and Answers Dump

Q00: You have created a serverless application which converts text into speech using a combination of S3, API Gateway, Lambda, Polly, DynamoDB and SNS. Your users complain that while short pieces of text are converted, longer passages of text are not. What could be the cause of this problem?

  • A. Polly has built in censorship, so if you try and send it text that is deemed offensive, it will not generate an MP3.
  • B. You’ve placed your DynamoDB table in a single availability zone, which is currently down, causing an outage.
  • C. Your Lambda function needs a longer execution time. You should check how long is needed in the fringe cases and increase the function timeout to slightly longer than that.
  • D. AWS X-ray service is interfering with the application and should be disabled.

Answer: C

Reference: AWS Lambda limits
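
If you manage the function programmatically, raising the timeout is a single configuration call. A hedged boto3 sketch, with a placeholder function name and timeout value:

    import boto3

    lambda_client = boto3.client('lambda')

    # Raise the function timeout; the hard maximum is 900 seconds (15 minutes)
    lambda_client.update_function_configuration(
        FunctionName='text-to-speech-converter',  # placeholder name
        Timeout=120,
    )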


Q1: How does API Gateway deal with legacy SOAP applications?

  • A. Converts the response from the application to REST
  • B. Converts the response from the application to HTML
  • C. Provides webservice passthrough for SOAP applications
  • D. Converts the response from the application to XML

Answer: C
SOAP Applications send their responses in XML format. API Gateway supports SOAP applications, but only provides passthrough. API Gateway does not transform or convert the responses.
Reference: How to configure Amazon API Gateway as a SOAP webservice passthrough in minutes

Djamgatech: Build the skills that’ll drive your career into six figures: Get Djamgatech.


Q3: You have launched a new web application on AWS using API Gateway, Lambda and S3. Someone posts a thread about your application to Reddit and it starts to go viral.
You start receiving 100,000 requests every second and you notice that most requests are similar.
Your web application begins to struggle. What can you do to optimize the performance of your application?

  • A. Enable API Gateway Accelerator
  • B. Enable API Gateway caching to cache frequent requests.
  • C. Change your Route 53 alias record to point to AWS Neptune and then configure Neptune to filter your API requests to genuine requests only.
  • D. Migrate your API Gateway to a Network Load Balancer and enable session stickiness for all sessions.


Answer: B.

Reference: Amazon API Gateway FAQs
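
Caching is enabled per stage. One way to switch it on programmatically is an update_stage call like the sketch below; the API ID, stage name, and cache size are placeholders:

    import boto3

    apigateway = boto3.client('apigateway')

    apigateway.update_stage(
        restApiId='a1b2c3',  # placeholder REST API id
        stageName='prod',
        patchOperations=[
            {'op': 'replace', 'path': '/cacheClusterEnabled', 'value': 'true'},
            {'op': 'replace', 'path': '/cacheClusterSize', 'value': '0.5'},  # GB
        ],
    )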


Q4: Which of the following services does X-ray integrate with? (Choose 3)

  • A. Elastic Load Balancer
  • B. Lambda
  • C. S3
  • D. API Gateway

Answer: A. B. and D.
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.
You can use X-Ray with applications running on EC2, ECS, Lambda, and Elastic Beanstalk. The X-Ray SDK automatically captures metadata for API calls made to AWS services using the AWS SDK, and it also provides add-ons for MySQL and PostgreSQL drivers.

Reference: AWS X-Ray
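
In Python, instrumenting an application for X-Ray is typically a couple of lines with the aws-xray-sdk package. A minimal sketch, assuming the package is installed and tracing is enabled for the function; the subsegment name is arbitrary:

    from aws_xray_sdk.core import xray_recorder, patch_all

    # Patch supported libraries (boto3, requests, ...) so downstream calls are traced
    patch_all()

    def lambda_handler(event, context):
        # Record a custom subsegment around a piece of application logic
        with xray_recorder.in_subsegment('business-logic'):
            return {'statusCode': 200}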


Q5: You are a developer for a busy real estate company and you want to give other real estate agents the ability to show properties on your books, skinned so that the listings look like part of their own websites. You decide the most efficient way to do this is to expose your API to the public. The project works well; however, one of your competitors starts abusing this, sending your API tens of thousands of requests per second, which generates HTTP 429 errors. Each agent connects to your API using an individual API key. What action can you take to stop this behavior?

  • A. Use AWS Shield Advanced API protection to block the requests.
  • B. Deploy multiple API Gateways and give the agent access to another API Gateway.
  • C. Place an AWS Web Application Firewall in front of API gateway and filter requests.
  • D. Throttle the agent's API access using their individual API key

Answer: D.
Throttling ensures that API traffic is controlled to help your backend services maintain performance and availability.
How can I protect my backend systems and applications from traffic spikes?
Amazon API Gateway provides throttling at multiple levels including global and by service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.

Reference: Amazon API Gateway FAQs
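
Per-key throttling is typically expressed through usage plans. A hedged boto3 sketch; the plan name, API ID, stage, and key ID are placeholders:

    import boto3

    apigateway = boto3.client('apigateway')

    # Create a usage plan with rate and burst limits for a deployed stage
    plan = apigateway.create_usage_plan(
        name='per-agent-plan',  # placeholder plan name
        throttle={'rateLimit': 1000.0, 'burstLimit': 2000},
        apiStages=[{'apiId': 'a1b2c3', 'stage': 'prod'}],  # placeholder stage
    )

    # Attach an agent's existing API key to the plan
    apigateway.create_usage_plan_key(
        usagePlanId=plan['id'],
        keyId='agent-key-id',  # placeholder API key id
        keyType='API_KEY',
    )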



Q6: You are developing a new application using serverless infrastructure and are using services such as S3, DynamoDB, Lambda, API Gateway, CloudFront, CloudFormation and Polly.
You deploy your application to production and your end users begin complaining about receiving an HTTP 429 error. What could be the cause of the error?

  • A. You enabled API throttling with a rate limit of 1000 requests per second while in development, and now that you have deployed to production your API Gateway is being throttled.
  • B. Your CloudFormation stack is not valid and is failing to deploy properly, which is causing an HTTP 429 error.
  • C. Your Lambda function does not have sufficient permissions to read from DynamoDB, and this is generating an HTTP 429 error.
  • D. You have an S3 bucket policy which is preventing Lambda from being able to write to your bucket, generating an HTTP 429 error.

Answer: A.
Amazon API Gateway provides throttling at multiple levels including global and by service call. Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any requests over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.

Reference: Amazon API Gateway FAQs


Q7: What is the format of structured notification messages sent by Amazon SNS?

  • A. An XML object containing MessageId, UnsubscribeURL, Subject, Message and other values
  • B. A JSON object containing MessageId, DuplicateFlag, Message and other values
  • C. An XML object containing MessageId, DuplicateFlag, Message and other values
  • D. A JSON object containing MessageId, UnsubscribeURL, Subject, Message and other values


Answer: D.

The notification message sent by Amazon SNS for deliveries over HTTP, HTTPS, Email-JSON and SQS transport protocols will consist of a simple JSON object, which will include the following information: MessageId, a Universally Unique Identifier, unique for each notification published.

Reference: Format of structured notification messages sent by Amazon SNS
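
For example, an HTTP/S endpoint receiving such a delivery could parse the JSON body along these lines; this is a sketch of the field layout only, not a full receiver that verifies the message signature:

    import json

    def handle_sns_delivery(request_body: str):
        notification = json.loads(request_body)
        if notification.get('Type') == 'Notification':
            print('MessageId:', notification['MessageId'])
            print('Subject:', notification.get('Subject'))
            print('UnsubscribeURL:', notification.get('UnsubscribeURL'))
            return notification['Message']  # the published payload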



Other AWS Facts and Summaries and Questions/Answers Dump

AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers

AWS Certification Exam Preparation


AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers

AWS Developer – Deployment Theory Facts and summaries, Top 80 AWS Developer  DVA-C02 Theory Questions and Answers Dump

Definition 1: The AWS Developer is responsible for designing, deploying, and developing cloud applications on AWS platform

Definition 2: The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software.

The AWS Certified Developer Associate certification is a widely recognized certification that validates a candidate’s expertise in developing and maintaining applications on the Amazon Web Services (AWS) platform.

The certification is about to undergo a major change with the introduction of the new exam version DVA-C02, replacing the current DVA-C01. In this article, we will discuss the differences between the two exams and what candidates should consider in terms of preparation for the new DVA-C02 exam.

Quick facts


  • What’s happening? The DVA-C01 exam is being replaced by the DVA-C02 exam.
  • When is this taking place? The last day to take the current exam is February 27th, 2023, and the first day to take the new exam is February 28th, 2023.
  • What’s the difference? The new exam features some new AWS services and features.

Main differences between DVA-C01 and DVA-C02

The table below details the differences between the DVA-C01 and DVA-C02 exam domains and weightings:

In terms of the exam content weightings, the DVA-C02 exam places a greater emphasis on deployment and management, with a slightly reduced emphasis on development and refactoring. This shift reflects the increased importance of operations and management in cloud computing, as well as the need for developers to have a strong understanding of how to deploy and maintain applications on the AWS platform.



One major difference between the two exams is the focus on the latest AWS services and features. The DVA-C02 exam covers around 57 services vs only 33 services in the DVA-C01. This reflects the rapidly evolving AWS ecosystem and the need for developers to be up-to-date with the latest services and features in order to effectively build and maintain applications on the platform.


Training resources for the AWS Developer Associate


In terms of preparation for the DVA-C02 exam, we strongly recommend enrolling in our on-demand training courses for the AWS Developer Associate certification. It is important for candidates to familiarize themselves with the latest AWS services and features, as well as the updated exam content weightings. Practical experience working with AWS services and hands-on experimentation with new services and features will be key to success on the exam. Candidates should also focus on their understanding of security best practices, access control, and compliance, as these topics will carry a greater weight in the new exam.

Frequently asked questions – FAQs:

In conclusion, the change from the DVA-C01 to the DVA-C02 exam represents a major shift in the focus and content of the AWS Certified Developer Associate certification. Candidates preparing for the new exam should focus on familiarizing themselves with the latest AWS services and features, as well as the updated exam content weightings, and placing a strong emphasis on security, governance, and compliance.

With the right preparation and focus, candidates can successfully navigate the changes in the DVA-C02 exam and maintain their status as a certified AWS Developer Associate.


AWS Developer and Deployment Theory Facts and summaries


    1. Continuous Integration is about integrating or merging the code changes frequently, at least once per day. It enables multiple devs to work on the same application.
    2. Continuous delivery is all about automating the build, test, and deployment functions.
    3. Continuous Deployment fully automates the entire release process, code is deployed into Production as soon as it has successfully passed through the release pipeline.
    4. AWS CodePipeline is a continuous integration/Continuous delivery service:
      • It automates your end-to-end software release process based on a user-defined workflow
      • It can be configured to automatically trigger your pipeline as soon as a change is detected in your source code repository
      • It integrates with other services from AWS like CodeBuild and CodeDeploy, as well as third-party custom plug-ins.
    5. AWS CodeBuild is a fully managed build service. It can build source code, run tests and produce software packages based on commands that you define yourself.
    6. By default, the buildspec.yml file defines the build commands and settings used by CodeBuild to run your build.
    7. AWS CodeDeploy is a fully managed automated deployment service and can be used as part of a Continuous Delivery or Continuous Deployment process.
    8. There are 2 types of deployment approach:
      • In-place or rolling update: you stop the application on each host and deploy the latest code. EC2 and on-premises systems only. To roll back, you must re-deploy the previous version of the application.
      • Blue/green: new instances are provisioned and the new application is deployed to these new instances. Traffic is routed to the new instances according to your own schedule. Supported for EC2, on-premises systems and Lambda functions. Rollback is easy: just route the traffic back to the original instances. Blue is the active deployment, green is the new release.
    9. Docker allows you to package your software into Containers which you can run in Elastic Container Service (ECS)
    10.  A docker Container includes everything the software needs to run including code, libraries, runtime and environment variables etc..
    11.  A special file called Dockerfile is used to specify the instructions needed to assemble your Docker image.
    12.  Once built, Docker images can be stored in Elastic Container Registry (ECR) and ECS can then use the image to launch Docker Containers.
    13. AWS CodeCommit is based on Git. It provides centralized repositories for all your code, binaries, images, and libraries.
    14. CodeCommit tracks and manages code changes. It maintains version history.
    15. CodeCommit manages updates from multiple sources and enables collaboration.
    16. To support CORS, API resource needs to implement an OPTIONS method that can respond to the OPTIONS preflight request with following headers:

      • Access-Control-Allow-Headers
      • Access-Control-Allow-Origin
      • Access-Control-Allow-Methods

    17. You have a legacy application that works via XML messages. You need to place the application behind the API gateway in order for customers to make API calls. Which of the following would you need to configure?
      You will need to work with the Request and Response Data mapping.
    18. Your application currently points to several Lambda functions in AWS. A change is being made to one of the Lambda functions. You need to ensure that application traffic is shifted slowly from one Lambda function to the other. Which of the following steps would you carry out?
      • Create an ALIAS with the --routing-config parameter
      • Update the ALIAS with the --routing-config parameter

      By default, an alias points to a single Lambda function version. When the alias is updated to point to a different function version, incoming request traffic in turn instantly points to the updated version. This exposes that alias to any potential instabilities introduced by the new version. To minimize this impact, you can use the routing-config parameter of the Lambda alias, which allows you to point to two different versions of the Lambda function and dictate what percentage of incoming traffic is sent to each version (see the boto3 sketch after this list).

    19. AWS CodeDeploy: The AppSpec file defines all the parameters needed for the deployment e.g. location of application files and pre/post deployment validation tests to run.
    20. For EC2/on-premises systems, the appspec.yml file must be placed in the root directory of your revision (the same folder that contains your application code). Written in YAML.
    21. For Lambda and ECS deployment, the AppSpec file can be YAML or JSON
    22. Visual workflows are automatically created when working with Step Functions.
    23. API Gateway stages store configuration for deployment. An API Gateway stage refers to a snapshot of your API.
    24. AWS SWF guarantees the delivery order of messages/tasks.
    25. Blue/green deployments with CodeDeploy on AWS Lambda can happen in multiple ways: linear, all at once, or canary.
    26. X-Ray filter expressions allow you to search through request information using characteristics like URL paths, trace ID, and annotations.
    27. S3 now provides strong read-after-write consistency for overwrite PUTS and DELETES (older exam materials describe this as eventual consistency).
    28. What can you do to ensure the most recent version of your Lambda functions is in CodeDeploy?
      Specify the version to be deployed in AppSpec file.

      https://docs.aws.amazon.com/codedeploy/latest/userguide/application-specification-files.html

      AppSpec Files on an Amazon ECS Compute Platform

      If your application uses the Amazon ECS compute platform, the AppSpec file can be formatted with either YAML or JSON. It can also be typed directly into an editor in the console. The AppSpec file is used to specify:

      The name of the Amazon ECS service and the container name and port used to direct traffic to the new task set. The functions to be used as validation tests. You can run validation Lambda functions after deployment lifecycle events. For more information, see AppSpec ‘hooks’ Section for an Amazon ECS Deployment, AppSpec File Structure for Amazon ECS Deployments, and AppSpec File Example for an Amazon ECS Deployment.
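
    As referenced in item 18 above, here is a hedged boto3 sketch of weighted alias routing; the function name, versions, and traffic weight are placeholders:

      import boto3

      lambda_client = boto3.client('lambda')

      # Keep 90% of traffic on version 1 and shift 10% to version 2
      lambda_client.update_alias(
          FunctionName='my-function',  # placeholder
          Name='live',
          FunctionVersion='1',
          RoutingConfig={'AdditionalVersionWeights': {'2': 0.10}},
      )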


    Reference: AWS Developer Tools




    AWS Developer and Deployment Theory: Top 80 Questions and Answers Dump

    Q0: Which AWS service can be used to compile source code, run tests and package code?

    • A. CodePipeline
    • B. CodeCommit
    • C. CodeBuild
    • D. CodeDeploy


    Answer: C.

    Reference: AWS CodeBuild



    Q1: How can you prevent CloudFormation from deleting your entire stack on failure? (Choose 2)

    • A. Set the Rollback on failure radio button to No in the CloudFormation console
    • B. Set Termination Protection to Enabled in the CloudFormation console
    • C. Use the --disable-rollback flag with the AWS CLI
    • D. Use the --enable-termination-protection flag with the AWS CLI


    Answer: A. and C.

    Reference: Protecting a Stack From Being Deleted


    Q2: Which of the following practices allows multiple developers working on the same application to merge code changes frequently, without impacting each other and enables the identification of bugs early on in the release process?

    • A. Continuous Integration
    • B. Continuous Deployment
    • C. Continuous Delivery
    • D. Continuous Development


    Answer: A

    Reference: What is Continuous Integration?


    Q3: When deploying application code to EC2, the AppSpec file can be written in which language?

    • A. JSON
    • B. JSON or YAML
    • C. XML
    • D. YAML

    Answer: D.
    For EC2/on-premises deployments, the AppSpec file must be written in YAML (see fact 20 in the summary above).


    Q4: Part of your CloudFormation deployment fails due to a misconfiguration. By default, what will happen?

    • A. CloudFormation will rollback only the failed components
    • B. CloudFormation will rollback the entire stack
    • C. Failed component will remain available for debugging purposes
    • D. CloudFormation will ask you if you want to continue with the deployment

    Answer: B.
    By default, CloudFormation rolls back the entire stack on failure (see Q1 above for how to change this behavior).


    Q5: You want to receive an email whenever a user pushes code to a CodeCommit repository. How can you configure this?

    • A. Create a new SNS topic and configure it to poll for CodeCommit events. Ask all users to subscribe to the topic to receive notifications
    • B. Configure a CloudWatch Events rule to send a message to SES which will trigger an email to be sent whenever a user pushes code to the repository.
    • C. Configure notifications in the console; this will create a CloudWatch Events rule to send a notification to an SNS topic which will trigger an email to be sent to the user.
    • D. Configure a CloudWatch Events rule to send a message to SQS which will trigger an email to be sent whenever a user pushes code to the repository.


    Answer: C

    Reference: Getting Started with Amazon SNS


    Q6: Which AWS service can be used to centrally store and version control your application source code, binaries and libraries

    • A. CodeCommit
    • B. CodeBuild
    • C. CodePipeline
    • D. ElasticFileSystem


    Answer: A

    Reference: AWS CodeCommit


    Q7: You are using CloudFormation to create a new S3 bucket. Which of the following sections would you use to define the properties of your bucket?

    • A. Conditions
    • B. Parameters
    • C. Outputs
    • D. Resources


    Answer: D

    Reference: Resources


    Q8: You are deploying a number of EC2 and RDS instances using CloudFormation. Which section of the CloudFormation template
    would you use to define these?

    • A. Transforms
    • B. Outputs
    • C. Resources
    • D. Instances


    Answer: C.
    The Resources section defines the resources you are provisioning. Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack. Transforms is used to reference code located in S3.

    Reference: Resources


    Q9: Which AWS service can be used to fully automate your entire release process?

    • A. CodeDeploy
    • B. CodePipeline
    • C. CodeCommit
    • D. CodeBuild


    Answer: B.
    AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates

    Reference: AWS CodePipeline


    Q10: You want to use the output of your CloudFormation stack as input to another CloudFormation stack. Which sections of the CloudFormation template would you use to help you configure this?

    • A. Outputs
    • B. Transforms
    • C. Resources
    • D. Exports


    Answer: A.
    Outputs is used to output user-defined data relating to the resources you have built, and can also be used as input to another CloudFormation stack.

    Reference: CloudFormation Outputs


    Q11: You have some code located in an S3 bucket that you want to reference in your CloudFormation template. Which section of the template can you use to define this?

    • A. Inputs
    • B. Resources
    • C. Transforms
    • D. Files


    Answer: C.
    Transforms is used to reference code located in S3 and also to specify the use of the Serverless Application Model (SAM) for Lambda deployments. For example:

    Transform:
      Name: 'AWS::Include'
      Parameters:
        Location: 's3://MyAmazonS3BucketName/MyFileName.yaml'

    Reference: Transforms


    Q12: You are deploying an application to a number of EC2 instances using CodeDeploy. What is the name of the file
    used to specify source files and lifecycle hooks?

    • A. buildspec.yml
    • B. appspec.json
    • C. appspec.yml
    • D. buildspec.json


    Answer: C.

    Reference: CodeDeploy AppSpec File Reference


    Q13: Which of the following approaches allows you to re-use pieces of CloudFormation code in multiple templates, for common use cases like provisioning a load balancer or web server?

    • A. Share the code using an EBS volume
    • B. Copy and paste the code into the template each time you need to use it
    • C. Use a cloudformation nested stack
    • D. Store the code you want to re-use in an AMI and reference the AMI from within your CloudFormation template.


    Answer: C.

    Reference: Working with Nested Stacks


    Q14: In the CodeDeploy AppSpec file, what are hooks used for?

    • A. To reference AWS resources that will be used during the deployment
    • B. Hooks are reserved for future use
    • C. To specify files you want to copy during the deployment.
    • D. To specify, scripts or function that you want to run at set points in the deployment lifecycle


    Answer: D.
    The ‘hooks’ section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts.

    Reference: AppSpec ‘hooks’ Section


    Q15: You need to set up a RESTful API service in AWS that would be serviced via the following URL: https://democompany.com/customers. Which of the following combinations of services can be used for development and hosting of the RESTful service? Choose 2 answers from the options below

    • A. AWS Lambda and AWS API gateway
    • B. AWS S3 and Cloudfront
    • C. AWS EC2 and AWS Elastic Load Balancer
    • D. AWS SQS and Cloudfront

    Answer: A and C
    AWS Lambda can be used to host the code and API Gateway can be used to expose the APIs that point to AWS Lambda. Alternatively, you can create your own API service, host it on an EC2 instance, and then use the AWS Application Load Balancer to do path-based routing.
    Reference: Build a Serverless Web Application with AWS Lambda, Amazon API Gateway, Amazon S3, Amazon DynamoDB, and Amazon Cognito


    Q16: As a developer, you have created a Lambda function that is used to work with a bucket in Amazon S3. The Lambda function is not working as expected. You need to debug and understand the underlying issue. How can you accomplish this in an easily understandable way?

    • A. Use AWS Cloudwatch metrics
    • B. Put logging statements in your code
    • C. Set the Lambda function debugging level to verbose
    • D. Use AWS Cloudtrail logs

    Answer: B
    You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with Amazon CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function (/aws/lambda/).
    Reference: Using Amazon CloudWatch
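
    A typical pattern uses the standard logging module, which Lambda wires to CloudWatch Logs automatically. A minimal sketch; the event field used is hypothetical:

      import logging

      logger = logging.getLogger()
      logger.setLevel(logging.INFO)

      def lambda_handler(event, context):
          # These statements appear in the function's CloudWatch Logs group
          logger.info('Received event: %s', event)
          logger.info('Working with bucket: %s', event.get('bucket', 'unknown'))  # hypothetical field
          return {'statusCode': 200}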


    Q17: You have a Lambda function that is invoked asynchronously. You need a way to check and debug issues if the function fails. How could you accomplish this?

    • A. Use AWS Cloudwatch metrics
    • B. Assign a dead letter queue
    • C. Configure SNS notifications
    • D. Use AWS Cloudtrail logs

    Answer: B
    Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you’re unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure.
    Reference: AWS Lambda Function Dead Letter Queues

    Top

    Q18: You are developing an application that is going to make use of Amazon Kinesis. Due to the high throughput, you decide to have multiple shards for the stream. Which of the following is TRUE when it comes to processing data across multiple shards?

    • A. You cannot guarantee the order of data across multiple shards. It’s possible only within a shard
    • B. Order of data is possible across all shards in a stream
    • C. Order of data is not possible at all in Kinesis streams
    • D. You need to use Kinesis firehose to guarantee the order of data

    Answer: A
    Kinesis Data Streams lets you order records and read and replay records in the same order to many Kinesis Data Streams applications. To enable write ordering, Kinesis Data Streams expects you to call the PutRecord API to write serially to a shard while using the sequenceNumberForOrdering parameter. Setting this parameter guarantees strictly increasing sequence numbers for puts from the same client and to the same partition key.
    Option A is correct as it cannot guarantee the ordering of records across multiple shards.
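    A sketch of that write pattern with boto3, assuming a placeholder stream and partition key; chaining SequenceNumberForOrdering to the previous response keeps this client’s puts strictly ordered within the shard:

```python
import boto3

kinesis = boto3.client("kinesis")

sequence_number = None
for payload in [b"event-1", b"event-2", b"event-3"]:
    kwargs = {
        "StreamName": "my-stream",    # placeholder
        "Data": payload,
        "PartitionKey": "device-42",  # same key -> same shard
    }
    # Chain each put to the previous record's sequence number.
    if sequence_number:
        kwargs["SequenceNumberForOrdering"] = sequence_number
    resp = kinesis.put_record(**kwargs)
    sequence_number = resp["SequenceNumber"]
```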
    Reference: How to perform ordered data replication between applications by using Amazon DynamoDB Streams

    Top

    Q19: You’ve developed a Lambda function and are now in the process of debugging it. You add the necessary print statements in the code to assist in the debugging. You go to CloudWatch Logs, but you see no logs for the Lambda function. Which of the following could be the underlying issue for this?

    • A. You’ve not enabled versioning for the Lambda function
    • B. The IAM Role assigned to the Lambda function does not have the necessary permission to create Logs
    • C. There is not enough memory assigned to the function
    • D. There is not enough time assigned to the function


    Answer: B
    “If your Lambda function code is executing, but you don’t see any log data being generated after several minutes, this could mean your execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs. For information about how to make sure that you have set up the execution role correctly to grant these permissions, see Manage Permissions: Using an IAM Role (Execution Role)”.

    Reference: Using Amazon CloudWatch

    Top

    Q20: Your application is developed to pick up metrics from several servers and push them off to CloudWatch. At times, the application gets HTTP 429 (throttling) errors. Which of the following can be done from the programming side to resolve such errors?

    • A. Use the AWS CLI instead of the SDK to push the metrics
    • B. Ensure that all metrics have a timestamp before sending them across
    • C. Use exponential backoff in your request
    • D. Enable encryption for the requests

    Answer: C.
    The main reason for such errors is that throttling is occurring when many requests are sent via API calls. The best way to mitigate this is to stagger the rate at which you make the API calls.
    In addition to simple retries, each AWS SDK implements exponential backoff algorithm for better flow control. The idea behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. You should implement a maximum delay interval, as well as a maximum number of retries. The maximum delay interval and maximum number of retries are not necessarily fixed values and should be set based on the operation being performed, as well as other local factors, such as network latency.
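    A minimal sketch of exponential backoff with jitter around a PutMetricData call, assuming boto3 and a placeholder namespace; the retry cap and delays are illustrative, and the AWS SDKs already do a version of this for you:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

cloudwatch = boto3.client("cloudwatch")

def put_metrics_with_backoff(metric_data, max_retries=5, base_delay=0.1, max_delay=20.0):
    for attempt in range(max_retries):
        try:
            return cloudwatch.put_metric_data(
                Namespace="MyApp",  # placeholder namespace
                MetricData=metric_data,
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "Throttling":
                raise
            # Progressively longer waits, capped, with random jitter.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
    raise RuntimeError("PutMetricData still throttled after retries")
```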
    Reference: Error Retries and Exponential Backoff in AWS

    Q21: You have been instructed to use the CodePipeline service for the CI/CD automation in your company. For security reasons, the resources that would be part of the deployment are placed in another account. Which of the following steps need to be carried out to accomplish this deployment? Choose 2 answers from the options given below

    • A. Define a customer master key in KMS
    • B. Create a reference Code Pipeline instance in the other account
    • C. Add a cross account role
    • D. Embed the access keys in the codepipeline process

    Answer: A. and C.
    You might want to create a pipeline that uses resources created or managed by another AWS account. For example, you might want to use one account for your pipeline and another for your AWS CodeDeploy resources. To do so, you must create an AWS Key Management Service (AWS KMS) key to use, add the key to the pipeline, and set up account policies and roles to enable cross-account access.
    Reference: Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account

    Top

    Q22: You are planning on deploying an application to the worker role in Elastic Beanstalk. Moreover, this worker application is going to run the periodic tasks. Which of the following is a must have as part of the deployment?

    • A. An appspec.yaml file
    • B. A cron.yaml file
    • C. A cron.config file
    • D. An appspec.json file


    Answer: B.
    Create an Application Source Bundle
    When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you’ll need to upload a source bundle. Your source bundle must meet the following requirements:
    Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
    Not exceed 512 MB
    Not include a parent folder or top-level directory (subdirectories are fine)
    If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file. For more information, see Periodic Tasks.

    Reference: Create an Application Source Bundle

    Top

    Q23: An application needs to make use of an SQS queue for working with messages. An SQS queue has been created with the default settings. The application needs 60 seconds to process each message. Which of the following steps needs to be carried out by the application?

    • A. Change the VisibilityTimeout for each message and then delete the message after processing is completed
    • B. Delete the message and change the visibility timeout.
    • C. Process the message , change the visibility timeout. Delete the message
    • D. Process the message and delete the message

    Answer: A
    If the SQS queue is created with the default settings, then the default visibility timeout is 30 seconds. Since the application needs more time for processing, you first need to change the timeout and delete the message after it is processed.
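    A boto3 sketch of that sequence, with a placeholder queue URL and processing function:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def process(body):
    """Placeholder for the ~60 seconds of real work per message."""
    ...

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    # Extend the default 30-second visibility timeout so the message is not
    # redelivered to another consumer while this one is still working.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=60,
    )
    process(msg["Body"])
    # Only delete once processing has completed successfully.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```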
    Reference: Amazon SQS Visibility Timeout

    Top

    Q24: An AWS CodeDeploy deployment fails to start and generates the error code “HEALTH_CONSTRAINTS_INVALID”. Which of the following can be used to eliminate this error?

    • A. Make sure the minimum number of healthy instances is equal to the total number of instances in the deployment group.
    • B. Increase the number of healthy instances required during deployment
    • C. Reduce number of healthy instances required during deployment
    • D. Make sure the number of healthy instances is equal to the specified minimum number of healthy instances.

    Answer: C
    AWS CodeDeploy generates the “HEALTH_CONSTRAINTS_INVALID” error when the minimum number of healthy instances defined in the deployment group is not available during deployment. To mitigate this error, reduce the required number of healthy instances, or make sure enough healthy instances are available during deployments.
    Reference: Error Codes for AWS CodeDeploy

    Top

    Q25: How are the state machines in AWS Step Functions defined?

    • A. SAML
    • B. XML
    • C. YAML
    • D. JSON

    Answer: D. JSON
    AWS Step Functions state machines are defined in JSON, using the Amazon States Language.
    Reference: What Is AWS Step Functions?

    Top

    Q26: How can API Gateway methods be configured to respond to requests?

    • A. Forwarded to method handlers
    • B. AWS Lambda
    • C. Integrated with other AWS Services
    • D. Existing HTTP endpoints

    Answer: B. C. D.

    Reference: Set up REST API Methods in API Gateway

    Top

    Q27: Which of the following could be an example of an API Gateway Resource URL for a trucks resource?

    • A. https://1a2sb3c4.execute-api.us-east-1.awsapigateway.com/trucks
    • B. https://trucks.1a2sb3c4.execute-api.us-east-1.amazonaws.com
    • C. https://1a2sb3c4.execute-api.amazonaws.com/trucks
    • D. https://1a2sb3c4.execute-api.us-east-1.amazonaws.com/cars

    Answer: C

    Reference: Amazon API Gateway Concepts

    Top

    Q28: API Gateway Deployments are:

    • A. A specific snapshot of your API’s methods
    • B. A specific snapshot of all of your API’s settings, resources, and methods
    • C. A specific snapshot of your API’s resources
    • D. A specific snapshot of your API’s resources and methods

    Answer: D.
    AWS API Gateway Deployments are a snapshot of all the resources and methods of your API and their configuration.
    Reference: Deploying a REST API in Amazon API Gateway

    Top

    Q29: An SWF workflow task or task execution can live up to how long?

    • A. 1 Year
    • B. 14 days
    • C. 24 hours
    • D. 3 days

    Answer: A. 1 Year
    Each workflow execution can run for a maximum of 1 year. Each workflow execution history can grow up to 25,000 events. If your use case requires you to go beyond these limits, you can use features Amazon SWF provides to continue executions and structure your applications using child workflow executions.
    Reference: Amazon SWF FAQs

    Top

    Q30: With AWS Step Functions, all the work in your state machine is done by tasks. These tasks perform work by using what types of things? (Choose the best 3 answers)

    • A. An AWS Lambda Function Integration
    • B. Passing parameters to API actions of other services
    • C. Activities
    • D. An EC2 Integration

    Answer: A. B. C.

    Reference:

    Top

    Q31: How does SWF make decisions?

    • A. A decider program that is written in the language of the developer’s choice
    • B. A visual workflow created in the SWF visual workflow editor
    • C. A JSON-defined state machine that contains states within it to select the next step to take
    • D. SWF outsources all decisions to human deciders through the AWS Mechanical Turk service.

    Answer: A.
    SWF allows the developer to write their own application logic to make decisions and determine how to evaluate incoming data.
    Q: What programming conveniences does Amazon SWF provide to write applications?
    Like other AWS services, Amazon SWF provides a core SDK for the web service APIs. Additionally, Amazon SWF offers an SDK called the AWS Flow Framework that enables you to develop Amazon SWF-based applications quickly and easily. AWS Flow Framework abstracts the details of task-level coordination with familiar programming constructs. While running your program, the framework makes calls to Amazon SWF, tracks your program’s execution state using the execution history kept by Amazon SWF, and invokes the relevant portions of your code at the right times. By offering an intuitive programming framework to access Amazon SWF, AWS Flow Framework enables developers to write entire applications as asynchronous interactions structured in a workflow. For more details, please see What is the AWS Flow Framework?
    Reference:

    Top

    Q32: In order to effectively build and test your code, AWS CodeBuild allows you to:

    • A. Select and use some 3rd party providers to run tests against your code
    • B. Select a pre-configured environment
    • C. Provide your own custom AMI
    • D. Provide your own custom container image

    Answer: A. B. and D.

    Reference: AWS CodeBuild FAQs

    Top

    Q33: X-Ray Filter Expressions allow you to search through request information using characteristics like:

    • A. URL Paths
    • B. Metadata
    • C. Trace ID
    • D. Annotations

    Answer: A. C. and D.
    X-Ray filter expressions can search request characteristics such as URL paths, trace IDs, and annotations. Metadata is not searchable: it stores key-value pairs of any type, but they are not indexed for search (see Q74 below).

    Top

    Q34: CodePipeline pipelines are workflows that deal with stages, actions, transitions, and artifacts. Which of the following statements is true about these concepts?

    • A. Stages contain at least two actions
    • B. Artifacts are never modified or iterated on when used inside of CodePipeline
    • C. Stages contain at least one action
    • D. Actions will have a deployment artifact as either an input, an output, or both

    Answer: B. C. D.

    Reference:

    Top

    Q35: When deploying a simple Python web application with Elastic Beanstalk, which of the following AWS resources will be created and managed for you by Elastic Beanstalk?

    • A. An Elastic Load Balancer
    • B. An S3 Bucket
    • C. A Lambda Function
    • D. An EC2 instance

    Answer: A. B. and D.
    AWS Elastic Beanstalk uses proven AWS features and services, such as Amazon EC2, Amazon RDS, Elastic Load Balancing, Auto Scaling, Amazon S3, and Amazon SNS, to create an environment that runs your application. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI.
    Reference: AWS Elastic Beanstalk FAQs

    Top

    Q36: Elastic Beanstalk is used to:

    • A. Deploy and scale web applications and services developed with a supported platform
    • B. Deploy and scale serverless applications
    • C. Deploy and scale applications based purely on EC2 instances
    • D. Manage the deployment of all AWS infrastructure resources of your AWS applications

    Answer: A.
    Who should use AWS Elastic Beanstalk?
    Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don’t need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications.
    Reference:

    Top

    Q35: How can AWS X-Ray determine what data to collect?

    • A. X-Ray applies a sampling algorithm by default
    • B. X-Ray collects data on all requests by default
    • C. You can implement your own sampling frequencies for data collection
    • D. X-Ray collects data on all requests for services enabled with it

    Answer: A. and C.

    Reference: AWS X-Ray FAQs

    Top

    Q37: Which API call is used to list all resources that belong to a CloudFormation Stack?

    • A. DescribeStacks
    • B. GetTemplate
    • C. DescribeStackResources
    • D. ListStackResources


    Answer: D.

    Reference: ListStackResources

    Top

    Q38: What is the default behaviour of a CloudFormation stack if the creation of one resource fails?

    • A. Rollback
    • B. The stack continues creating and the failed resource is ignored
    • C. Delete
    • D. Undo


    Answer: A. Rollback

    Reference: AWS CloudFormation FAQs

    Top

    Q39: Which AWS CLI command lists all current stacks in your CloudFormation service?

    • A. aws cloudformation describe-stacks
    • B. aws cloudformation list-stacks
    • C. aws cloudformation create-stack
    • D. aws cloudformation describe-stack-resources


    Answer: A. and B.

    Reference: list-stacks

    Top

    Q40:
    Which API call is used to list all resources that belong to a CloudFormation Stack?

    • A. DescribeStacks
    • B. GetTemplate
    • C. ListStackResources
    • D. DescribeStackResources


    Answer: C.

    Reference: list-stack-resources

    Top

    Q41: How does using ElastiCache help to improve database performance?

    • A. It can store petabytes of data
    • B. It provides faster internet speeds
    • C. It can store the results of frequent or highly-taxing queries
    • D. It uses read replicas

    Answer: C.
    With ElastiCache, customers get all of the benefits of a high-performance, in-memory cache with less of the administrative burden involved in launching and managing a distributed cache. The service makes setup, scaling, and cluster failure handling much simpler than in a self-managed cache deployment.
    Reference: Amazon ElastiCache

    Top

    Q42: Which of the following best describes the Lazy Loading caching strategy?

    • A. Every time the underlying database is written to or updated the cache is updated with the new information.
    • B. Every miss to the cache is counted and when a specific number is reached a full copy of the database is migrated to the cache
    • C. A specific amount of time is set before the data in the cache is marked as expired. After expiration, a request for expired data will be made through to the backing database.
    • D. Data is added to the cache when a cache miss occurs (when there is no data in the cache and the request must go to the database for that data)

    Answer: D.
    Amazon ElastiCache is an in-memory key/value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested.
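    A minimal lazy-loading sketch in Python, assuming the redis-py client pointed at a hypothetical ElastiCache Redis endpoint and a hypothetical db.fetch_customer call:

```python
import json

import redis  # assumes the redis-py client is installed

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)  # placeholder endpoint

def get_customer(customer_id, db, ttl_seconds=300):
    """Lazy loading: read the cache first, fall back to the database on a miss."""
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit
    record = db.fetch_customer(customer_id)  # hypothetical database call
    # Populate the cache only on a miss, with a TTL so stale data expires.
    cache.setex(key, ttl_seconds, json.dumps(record))
    return record
```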
    Reference: Lazy Loading

    Top

    Q43: What are two benefits of using RDS read replicas?

    • A. You can add/remove read replicas based on demand, so it creates elasticity for RDS.
    • B. Improves performance of the primary database by taking workload from it
    • C. Automatic failover in the case of Availability Zone service failures
    • D. Allows both reads and writes

    Answer: A. and B.

    Reference: Amazon RDS Read Replicas

    Top

    Q44: What is the simplest way to enable an S3 bucket to be able to send messages to your SNS topic?

    • A. Attach an IAM role to the S3 bucket to send messages to SNS.
    • B. Activate the S3 pipeline feature to send notifications to another AWS service – in this case select SNS.
    • C. Add a resource-based access control policy on the SNS topic.
    • D. Use AWS Lambda to receive events from the S3 bucket and then use the Publish API action to send them to the SNS topic.

    Answer: C.
    Amazon SNS topics support resource-based access policies. Adding a statement to the topic’s policy that allows the S3 service principal to publish to it is the simplest way to let S3 event notifications reach the topic.

    Top

    Q45: You have just set up a push notification service to send a message to an app installed on a device with the Apple Push Notification Service. It seems to work fine. You now want to send a message to an app installed on devices for multiple platforms, those being the Apple Push Notification Service(APNS) and Google Cloud Messaging for Android (GCM). What do you need to do first for this to be successful?

    • A. Request Credentials from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher
    • B. Create a Platform Application Object which will connect all of the mobile devices with your app to the correct SNS topic.
    • C. Request a Token from Mobile Platforms, so that each device has the correct access control policies to access the SNS publisher.
    • D. Get a set of credentials in order to be able to connect to the push notification service you are trying to setup.

    Answer: D.
    To use Amazon SNS mobile push notifications, you need to establish a connection with a supported push notification service. This connection is established using a set of credentials.
    Reference: Add Device Tokens or Registration IDs

    Top

    Q46: SNS message can be sent to different kinds of endpoints. Which of these is NOT currently a supported endpoint?

    • A. Slack Messages
    • B. SMS (text message)
    • C. HTTP/HTTPS
    • D. AWS Lambda

    Answer: A.
    Slack messages are not directly integrated with SNS, though theoretically you could write a service to push messages to Slack from SNS.
    Reference:

    Top

    Q47: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

    • A. Set the imaging queue VisibilityTimeout attribute to 20 seconds
    • B. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds
    • C. Set the imaging queue ReceiveMessageWaitTimeSeconds Attribute to 20 seconds
    • D. Set the DelaySeconds parameter of a message to 20 seconds

    Answer: C.
    Enabling long polling reduces the number of empty responses from the SQS service. It also reduces the number of calls that need to be made to a queue by staying connected to the queue until all messages have been received or until timeout. In order to enable long polling, the ReceiveMessageWaitTimeSeconds attribute needs to be set to a number greater than 0. If it is set to 0, then short polling is enabled.
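    Both ways of enabling long polling, sketched with boto3 and a placeholder queue URL:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue"  # placeholder

# Enable long polling for every consumer of the queue (20 seconds is the maximum).
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Or enable it per request; the call blocks until a message arrives or 20s pass.
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
```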
    Reference: Amazon SQS Long Polling

    Top

    Q48: Which of the following statements about SQS standard queues are true?

    • A. Message order can be indeterminate – you’re not guaranteed to get messages in the same order they were sent in
    • B. Messages will be delivered exactly once and messages will be delivered in First in, First out order
    • C. Messages will be delivered exactly once and message delivery order is indeterminate
    • D. Messages can be delivered one or more times

    Answer: A. and D.
    A standard queue makes a best effort to preserve the order of messages, but more than one copy of a message might be delivered out of order. If your system requires that order be preserved, we recommend using a FIFO (First-In-First-Out) queue or adding sequencing information in each message so you can reorder the messages when they’re received.
    Reference: Amazon SQS Standard Queues

    Top

    Q49: Which of the following is true if long polling is enabled?

    • A. If long polling is enabled, then each poll only polls a subset of SQS servers; in order for all messages to be received, polling must continuously occur
    • B. The reader will listen to the queue until timeout
    • C. Increases costs because each request lasts longer
    • D. The reader will listen to the queue until a message is available or until timeout

    Answer: D.

    Reference: Amazon SQS Long Polling

    Top

    Q50: When dealing with session state in EC2-based applications using Elastic load balancers which option is generally thought of as the best practice for managing user sessions?

    • A. Having the ELB distribute traffic to all EC2 instances and then having the instance check a caching solution like ElastiCache running Redis or Memcached for session information
    • B. Permanently assigning users to specific instances and always routing their traffic to those instances
    • C. Using Application-generated cookies to tie a user session to a particular instance for the cookie duration
    • D. Using Elastic Load Balancer generated cookies to tie a user session to a particular instance

    Answer: A.

    Reference: Distributed Session Management

    Top

    Q51: When requested through an STS API call, credentials are returned with what three components?

    • A. Security Token, Access Key ID, Signed URL
    • B. Security Token, Access Key ID, Secret Access Key
    • C. Signed URL, Security Token, Username
    • D. Security Token, Secret Access Key, Personal Pin Code

    Answer: B.
    Security Token, Access Key ID, Secret Access Key
    Reference:

    Top

    Q52: Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week.
    How can you securely provide credentials that allow your application to write to the queue?

    • A. Have the application fetch an access key from an Amazon S3 bucket at run time.
    • B. Launch the application’s Amazon EC2 instance with an IAM role.
    • C. Encrypt an access key in the application source code.
    • D. Enroll the instance in an Active Directory domain and use AD authentication.

    Answer: B.
    IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources.
    Reference: AWS IAM FAQs

    Top

    Q53: Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn’t overwrite a simultaneous change from another process.
    How can you ensure concurrency?

    • A. Implement optimistic concurrency by using a conditional write.
    • B. Implement pessimistic concurrency by using a conditional write.
    • C. Implement optimistic concurrency by locking the item upon read.
    • D. Implement pessimistic concurrency by locking the item upon read.

    Answer: A.
    Optimistic concurrency depends on checking a value upon save to ensure that it has not changed. Pessimistic concurrency prevents a value from changing by locking the item or row in the database. DynamoDB does not support item locking, and conditional writes are perfect for implementing optimistic concurrency.
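    A conditional-write sketch with boto3, assuming a hypothetical Items table whose items carry a numeric version attribute:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Items")  # hypothetical table name

def save_item(item_id, new_payload, expected_version):
    """Optimistic concurrency: the write succeeds only if the version is unchanged."""
    try:
        table.update_item(
            Key={"id": item_id},
            UpdateExpression="SET #p = :p, #v = :new_version",
            # The conditional write fails if another process bumped the
            # version between our read and this write.
            ConditionExpression="#v = :expected_version",
            ExpressionAttributeNames={"#p": "payload", "#v": "version"},
            ExpressionAttributeValues={
                ":p": new_payload,
                ":new_version": expected_version + 1,
                ":expected_version": expected_version,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # lost the race: re-read the item and retry
        raise
```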
    Reference: Optimistic Locking With Version Number

    Top

    Q54: Which statements about DynamoDB are true? Choose 2 answers

    • A. DynamoDB uses optimistic concurrency control
    • B. DynamoDB restricts item access during writes
    • C. DynamoDB uses a pessimistic locking model
    • D. DynamoDB restricts item access during reads
    • E. DynamoDB uses conditional writes for consistency

    Answer: A. and E.
    As the Q53 explanation notes, DynamoDB does not lock items during reads or writes; it supports optimistic concurrency control, implemented with conditional writes.


    Top

    Q55: Your CloudFormation template has the following Mappings section:

    Which JSON snippet will result in the value “ami-6411e20d” when a stack is launched in us-east-1?

    • A. { "Fn::FindInMap" : [ "Mappings", { "RegionMap" : ["us-east-1", "us-west-1"] }, "32"]}
    • B. { "Fn::FindInMap" : [ "Mappings", { "Ref" : "AWS::Region" }, "32"]}
    • C. { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "32"]}
    • D. { "Fn::FindInMap" : [ "RegionMap", { "RegionMap" : "AWS::Region" }, "32"]}

    Answer: C.
    The intrinsic function Fn::FindInMap returns the value corresponding to keys in a two-level map that is declared in the Mappings section.
    You can use the Fn::FindInMap function to return a named value based on a specified key. For example, the documentation’s sample template contains an Amazon EC2 resource whose ImageId property is assigned by the FindInMap function; it specifies the key as the region where the stack is created (using the AWS::Region pseudo parameter) and HVM64 as the name of the value to map to.
    Reference:

    Top

    Q56: Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.

    What is an appropriate way to code this?

    • A. Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.
    • B. Send the event as an Amazon SNS message. Instruct your partners to create an HTTP endpoint, and subscribe their HTTP endpoint to the Amazon SNS topic.
    • C. Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.
    • D. Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.

    Answer: D.
    There are two challenges here: the command must be “fanned out” to a variable pool of partners, and your app must be decoupled from the partners because they are not highly available.
    Sending the command as an SNS message achieves the fan-out via its publish/subscribe model, and using an SQS queue for each partner decouples your app from the partners. Writing the message to each queue directly would cause more latency for your app and would require your app to monitor which partners were active. It would be difficult to write an Amazon SWF workflow for a rapidly changing set of partners.
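    A fan-out sketch with boto3; the topic, queue names, and partner list are placeholders, and the SQS queue policies that allow SNS to deliver to each queue are omitted for brevity:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="partner-events")["TopicArn"]

# One queue per partner, each subscribed to the same topic, so a slow or
# offline partner never delays delivery to the others.
for partner in ["acme", "globex"]:  # hypothetical partners
    queue_url = sqs.create_queue(QueueName=f"{partner}-events")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```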

    Reference: AWS SNS Faqs

    Top

    Q57: You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets.
    When you call the database tier from your app tier instances, you receive a timeout error. What could be causing this?

    • A. The IAM role associated with the app tier instances does not have rights to the MySQL database.
    • B. The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app
      instances.
    • C. The Amazon RDS database instance does not have a public IP address.
    • D. There is no route defined between the app tier and the database tier in the Amazon VPC.

    Answer: B.
    Security groups block all network traffic by default, so if a group is not correctly configured, it can lead to a timeout error. Access to the MySQL database itself is controlled by MySQL credentials, not IAM. All subnets in an Amazon VPC have routes to all other subnets. Internal traffic within an Amazon VPC does not require public IP addresses.

    Reference: Security Groups for Your VPC

    Top

    Q58: What type of block cipher does Amazon S3 offer for server side encryption?

    • A. RC5
    • B. Blowfish
    • C. Triple DES
    • D. Advanced Encryption Standard

    Answer: D
    Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

    Reference: Protecting Data Using Server-Side Encryption

    Top

    Q59: You have written an application that uses the Elastic Load Balancing service to spread traffic to several web servers. Your users complain that they are sometimes forced to log in again in the middle of using your application, after they have already logged in. This is not behaviour you have designed. What is a possible solution to prevent this happening?

    • A. Use instance memory to save session state.
    • B. Use instance storage to save session state.
    • C. Use EBS to save session state
    • D. Use ElastiCache to save session state.
    • E. Use Glacier to save session state.

    Answer: D.
    You can cache a variety of objects using the service, from the content in persistent data stores (such as Amazon RDS, DynamoDB, or self-managed databases hosted on EC2) to dynamically generated web pages (with Nginx for example), or transient session data that may not require a persistent backing store. You can also use it to implement high-frequency counters to deploy admission control in high volume web applications.

    Reference: Amazon ElastiCache FAQs

    Top

    Q60: You are writing to a DynamoDB table and receive the following exception: “ProvisionedThroughputExceededException”. However, according to your CloudWatch metrics for the table, you are not exceeding your provisioned throughput. What could be an explanation for this?

    • A. You haven’t provisioned enough DynamoDB storage instances
    • B. You’re exceeding your capacity on a particular Range Key
    • C. You’re exceeding your capacity on a particular Hash Key
    • D. You’re exceeding your capacity on a particular Sort Key
    • E. You haven’t configured DynamoDB Auto Scaling triggers

    Answer: C.
    The primary key that uniquely identifies each item in a DynamoDB table can be simple (a partition key only) or composite (a partition key combined with a sort key).
    Generally speaking, you should design your application for uniform activity across all logical partition keys in the table and its secondary indexes.
    You can determine the access patterns that your application requires, and estimate the total read capacity units and write capacity units that each table and secondary index requires.

    As traffic starts to flow, DynamoDB automatically supports your access patterns using the throughput you have provisioned, as long as the traffic against a given partition key does not exceed 3000 read capacity units or 1000 write capacity units.

    Reference: Best Practices for Designing and Using Partition Keys Effectively

    Top

    Q61: Which DynamoDB limits can be raised by contacting AWS support?

    • A. The number of hash keys per account
    • B. The maximum storage used per account
    • C. The number of tables per account
    • D. The number of local secondary indexes per account
    • E. The number of provisioned throughput units per account


    Answer: C. and E.

    For any AWS account, there is an initial limit of 256 tables per region.
    AWS places some default limits on the throughput you can provision.
    These are the limits unless you request a higher amount.
    To request a service limit increase, see https://aws.amazon.com/support.

    Reference: Limits in DynamoDB


    Top

    Q62: AWS CodeBuild allows you to compile your source code, run unit tests, and produce deployment artifacts by:

    • A. Allowing you to provide an Amazon Machine Image to take these actions within
    • B. Allowing you to select an Amazon Machine Image and provide a User Data bootstrapping script to prepare an instance to take these actions within
    • C. Allowing you to provide a container image to take these actions within
    • D. Allowing you to select from pre-configured environments to take these actions within

    Answer: C. and D.
    You can provide your own custom container image to build your deployment artifacts.
    You never actually pass a specific AMI to CodeBuild. Though you can provide a custom docker image which you could basically ‘bootstrap’ for the purposes of your build.
    Reference: AWS CodeBuild Faqs

    Top

    Q63: Which of the following will not cause a CloudFormation stack deployment to rollback?

    • A. The template contains invalid JSON syntax
    • B. An AMI specified in the template exists in a different region than the one in which the stack is being deployed.
    • C. A subnet specified in the template does not exist
    • D. The template specifies an instance-store backed AMI and an incompatible EC2 instance type.

    Answer: A.
    Invalid JSON syntax will cause an error message during template validation. Until the syntax is fixed, the template cannot deploy resources, so there will be no need for, or opportunity to, roll back.
    Reference: AWS CloudFormation FAQs

    Top

    Q64: Your team is using CodeDeploy to deploy an application which uses secure parameters that are stored in the AWS Systems Manager Parameter Store. Which two options below must be completed so CodeDeploy can deploy the application?

    • A. Use ssm get-parameters with the --with-decryption option
    • B. Add permissions using AWS access keys
    • C. Add permissions using an AWS IAM role
    • D. Use ssm get-parameters with the --with-no-decryption option

    Answer: A. and C.
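    A boto3 sketch of option A, with a placeholder parameter name; the role from option C needs ssm:GetParameters plus kms:Decrypt on the key protecting the parameter:

```python
import boto3

ssm = boto3.client("ssm")

# Decrypt SecureString parameters at read time.
resp = ssm.get_parameters(
    Names=["/myapp/db-password"],  # placeholder parameter name
    WithDecryption=True,
)
password = resp["Parameters"][0]["Value"]
```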

    Reference: Add permission using IAM role


    Top

    Q65: A corporate web application is deployed within an Amazon VPC, and is connected to the corporate data center via IPSec VPN. The application must authenticate against the on-premise LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to the user. Which of the solutions below meet these requirements? (Choose 2)

    • A. The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the S3 keyspace.
    • B. Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 keyspace
    • C. Develop an identity broker which authenticates against IAM Security Token Service to assume an IAM Role to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the app
    • D. The application authenticates against LDAP. The application then calls the IAM Security Service to login to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.

    Answer: A. and B.
    The question clearly says “authenticate against LDAP”. Temporary credentials come from STS. Federated user credentials come from the identity broker.
    Reference: IAM faqs

    Top


    Q67: When users are signing in to your application using Cognito, what do you need to do to make sure if the user has compromised credentials, they must enter a new password?

    • A. Create a user pool in Cognito
    • B. Block use for “Compromised credential” in the Basic security section
    • C. Block use for “Compromised credential” in the Advanced security section
    • D. Use secure remote password

    Answer: A. and C.
    Amazon Cognito can detect if a user’s credentials (user name and password) have been compromised elsewhere. This can happen when users reuse credentials at more than one site, or when they use passwords that are easy to guess.

    From the Advanced security page in the Amazon Cognito console, you can choose whether to allow or block the user if compromised credentials are detected. Blocking requires users to choose another password. Choosing Allow publishes all attempted uses of compromised credentials to Amazon CloudWatch. For more information, see Viewing Advanced Security Metrics.

    You can also choose whether Amazon Cognito checks for compromised credentials during sign-in, sign-up, and password changes.

    Note: Currently, Amazon Cognito doesn’t check for compromised credentials for sign-in operations with Secure Remote Password (SRP) flow, which doesn’t send the password during sign-in. Sign-ins that use the AdminInitiateAuth API with ADMIN_NO_SRP_AUTH flow and the InitiateAuth API with USER_PASSWORD_AUTH flow are checked for compromised credentials.

    Reference: AWS Cognito

    Top

    Q68: You work in a large enterprise that is currently evaluating options to migrate your 27 GB Subversion code base. Which of the following options is the best choice for your organization?

    • A. AWS CodeHost
    • B. AWS CodeCommit
    • C. AWS CodeStart
    • D. None of these

    Answer: D.
    None of these. While CodeCommit is a good option for Git repositories, it is not able to host Subversion source control.

    Reference: Migration to CodeCommit

    Top

    Q69: You are on a development team and you need to migrate your Spring Application over to AWS. Your team is looking to build, modify, and test new versions of the application. What AWS services could help you migrate your app?

    • A. Elastic Beanstalk
    • B. SQS
    • C. EC2
    • D. AWS CodeDeploy

    Answer: A. C. and D.
    Amazon EC2 can be used to deploy various applications to your AWS Infrastructure.
    AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions.

    Reference: AWS Deployment Faqs

    Top

    Q70:
    You are a developer responsible for managing a high-volume API running in your company’s datacenter. You have been asked to implement a similar API, but one that has potentially higher volume, and you must do it in the most cost-effective way, using as few services and components as possible. The API stores and fetches data from a key-value store. Which services could you utilize in AWS?

    • A. DynamoDB
    • B. Lambda
    • C. API Gateway
    • D. EC2

    Answer: A. and C.
    NoSQL databases like DynamoDB are designed for key-value usage. DynamoDB can also handle incredible volumes and is cost-effective. AWS API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure APIs.

    Reference: API Gateway Faqs

    Top

    Q71: By default, what event occurs if your CloudFormation stack receives an error during creation?

    • A. DELETE_IN_PROGRESS
    • B. CREATION_IN_PROGRESS
    • C. DELETE_COMPLETE
    • D. ROLLBACK_IN_PROGRESS

    Answer: D.

    Reference: Check Status Code


    Top

    Q72:
    AWS X-Ray was recently implemented inside of a service that you work on. Several weeks later, after a new marketing push, the service started seeing a large spike in traffic, and you’ve been tasked with investigating a few issues that have come up. When you review the X-Ray data, you can’t find enough information to draw conclusions, so you decide to:

    • A. Start passing in the X-Amzn-Trace-Id: True HTTP header from your upstream requests
    • B. Refactor the service to include additional calls to the X-Ray API using an AWS SDK
    • C. Update the sampling algorithm to increase the sample rate and instrument X-Ray to collect more pertinent information
    • D. Update your application to use the custom API Gateway TRACE method to send in data

    Answer: C.
    This is a good way to solve the problem – by customizing the sampling so that you can get more relevant information.

    Reference: AWS X-Ray facts

    Top

    Q74: X-Ray metadata:

    • A. Associates request data with a particular Trace-ID
    • B. Stores key-value pairs of any type that are not searchable
    • C. Collects at the service layer to provide information on the overall health of the system
    • D. Stores key-value pairs of searchable information

    Answer: B.
    X-Ray metadata stores key-value pairs of any type that are not searchable.
    Reference: AWS X-Rays faqs

    Top

    Q75: Which of the following is the right sequence of lifecycle hooks that gets called in CodeDeploy in an EC2/On-Premises deployment?

    • A. BeforeInstall - AfterInstall - ValidateService - ApplicationStart
    • B. BeforeInstall - AfterInstall - ApplicationStop - ApplicationStart
    • C. BeforeInstall - ApplicationStop - ValidateService - ApplicationStart
    • D. ApplicationStop - BeforeInstall - AfterInstall - ApplicationStart

    Answer: D.
    In an in-place deployment, including the rollback of an in-place deployment, event hooks are run in the following order: ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, ValidateService. The Start, DownloadBundle, Install, and End events in the deployment cannot be scripted, which is why they appear in gray in the documentation’s diagram; however, you can edit the ‘files’ section of the AppSpec file to specify what’s installed during the Install event. The six hooks related to blocking and allowing traffic apply only if you specify a Classic Load Balancer, Application Load Balancer, or Network Load Balancer from Elastic Load Balancing in the deployment group.

    Note: An AWS Lambda hook is one Lambda function specified with a string on a new line after the name of the lifecycle event. Each hook is executed once per deployment. The following are the lifecycle events where you can run a hook during an Amazon ECS deployment, in the order they run:

    • BeforeInstall – Use to run tasks before the replacement task set is created. One target group is associated with the original task set. If an optional test listener is specified, it is associated with the original task set. A rollback is not possible at this point.
    • AfterInstall – Use to run tasks after the replacement task set is created and one of the target groups is associated with it. If an optional test listener is specified, it is associated with the original task set. The results of a hook function at this lifecycle event can trigger a rollback.
    • AfterAllowTestTraffic – Use to run tasks after the test listener serves traffic to the replacement task set. The results of a hook function at this point can trigger a rollback.
    • BeforeAllowTraffic – Use to run tasks after the second target group is associated with the replacement task set, but before traffic is shifted to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.
    • AfterAllowTraffic – Use to run tasks after the second target group serves traffic to the replacement task set. The results of a hook function at this lifecycle event can trigger a rollback.

    Reference: Appspec.yml specs

    Top

    Q76:
    Describe the process of registering a mobile device with SNS push notification service using GCM.

    • A. Receive Registration ID and token for each mobile device. Then, register the mobile application with Amazon SNS, and pass the GCM token credentials to Amazon SNS
    • B. Pass device token to SNS to create mobile subscription endpoint for each mobile device, then request the device token from each mobile device. SNS then communicates on your behalf to the GCM service
    • C. None of these are correct
    • D. Submit GCM notification credentials to Amazon SNS, then receive the Registration ID for each mobile device. After that, pass the device token to SNS, and SNS then creates a mobile subscription endpoint for each device and communicates with the GCM service on your behalf

    Answer: D.
    When you first register an app and mobile device with a notification service, such as Apple Push Notification Service (APNS) and Google Cloud Messaging for Android (GCM), device tokens or registration IDs are returned from the notification service. When you add the device tokens or registration IDs to Amazon SNS, they are used with the PlatformApplicationArn API to create an endpoint for the app and device. When Amazon SNS creates the endpoint, an EndpointArn is returned. The EndpointArn is how Amazon SNS knows which app and mobile device to send the notification message to.
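    A boto3 sketch of the endpoint-creation step, with a placeholder application ARN and registration ID:

```python
import boto3

sns = boto3.client("sns")

# Platform application created beforehand with the GCM server credentials.
platform_app_arn = "arn:aws:sns:us-east-1:123456789012:app/GCM/my-android-app"  # placeholder
registration_id = "example-device-registration-id"  # returned by GCM to the device

# SNS creates a mobile endpoint for this device; the returned EndpointArn is
# what you publish to, and SNS talks to GCM on your behalf.
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn=platform_app_arn,
    Token=registration_id,
)
print(endpoint["EndpointArn"])
```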

    Reference: AWS Mobile Push Send device token

    Top

    Q77:
    You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?

    • A. Store photos on an EBS volume of the web server.
    • B. Block the IPs of the offending websites in Security Groups.
    • C. Remove public read access and use signed URLs with expiry dates.
    • D. Use CloudFront distributions for static content.

    Answer: C.
    This solves the issue, but does require you to modify your website. Your website already uses S3, so it doesn’t require a lot of changes. See the docs for details: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

    Reference: AWS S3 shared objects presigned urls

    CloudFront on its own doesn’t prevent unauthorized access and requires you to add a whole new layer to your stack (which may make sense anyway). You can serve private content, but you’d have to use signed URLs or similar mechanism. Here are the docs: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
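    A presigned-URL sketch with boto3; the bucket and key are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# With public read access removed, only URLs signed with your credentials
# work, and each one stops working after ExpiresIn seconds.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-photo-bucket", "Key": "photos/cat.jpg"},  # placeholders
    ExpiresIn=3600,  # one hour
)
print(url)
```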

    Top

    Q78: How can you control access to the API Gateway in your environment?

    • A. Cognito User Pools
    • B. Lambda Authorizers
    • C. API Methods
    • D. API Stages

    Answer: A. and B.
    Access to a REST API Using Amazon Cognito User Pools as Authorizer
    As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway.

    To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request’s Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid, otherwise, the client isn’t authorized to make the call because the client did not have credentials that could be authorized.

    The identity token is used to authorize API calls based on identity claims of the signed-in user. The access token is used to authorize API calls based on the custom scopes of specified access-protected resources. For more information, see Using Tokens with User Pools and Resource Server and Custom Scopes.

    Reference: AWS API Gateway integrate with Cognito

    Top

    Q79: What kind of message does SNS send to endpoints?

    • A. An XML document with parameters like Message, Source, Destination, Type
    • B. A JSON document with parameters like Message, Signature, Subject, Type.
    • C. An XML document with parameters like Message, Signature, Subject, Type
    • D. A JSON document with parameters like Message, Source, Destination, Type

    Answer: B.
    Amazon SNS notifications are delivered as JSON documents and do not include source/destination parameters.

    Reference: AWS SNS Faqs

    Top

    Q80: Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

    • A. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds.
    • B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds.
    • C. Set the imaging queue VisibilityTimeout attribute to 20 seconds.
    • D. Set the DelaySeconds parameter of a message to 20 seconds.

    Answer: B.
    ReceiveMessageWaitTimeSeconds, when set to greater than zero, enables long polling. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response. Short polling continuously polls a queue and can have false positives. Enabling long polling reduces the number of poll requests, false positives, and empty responses.
    Reference: AWS SQS Long Polling

    Top

    Q81: You’re using CloudFormation templates to build out staging environments. What section of the CloudFormation template would you edit in order to allow the user to specify the PEM key name at start time?

    • A. Resources Section
    • B. Parameters Section
    • C. Mappings Section
    • D. Declaration Section


    Answer: B.

    The Parameters section in CloudFormation allows you to accept user input when starting the CloudFormation template, and to reference that input as a variable throughout the template. Other examples might include asking the user starting the template to provide domain admin passwords, instance size, PEM key, region, and other dynamic options.
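    A sketch of supplying the key name at start time with boto3, assuming a hypothetical template file that declares a KeyName parameter:

```python
import boto3

cf = boto3.client("cloudformation")

# The template's Parameters section declares KeyName; the value is supplied
# here at stack-start time instead of being hard-coded in the template.
cf.create_stack(
    StackName="staging-env",                   # placeholder
    TemplateBody=open("staging.yaml").read(),  # hypothetical template declaring KeyName
    Parameters=[
        {"ParameterKey": "KeyName", "ParameterValue": "my-staging-pem-key"},
    ],
)
```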

    Reference: AWS CloudFormation Parameters


    Top

    Q82: You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template?

    • A. You can use intrinsic functions in any part of a template, except AWSTemplateFormatVersion and Description
    • B. You can use intrinsic functions in any part of a template.
    • C. You can use intrinsic functions only in the resource properties part of a template.
    • D. You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes.


    Answer: D.

    You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources.

Reference: AWS Intrinsic Functions

Top




Other AWS Facts and Summaries and Questions/Answers Dump

AWS Certification Exam Prep: S3 Facts, Summaries, Questions and Answers

AWS S3 Facts and summaries, AWS S3 Top 10 Questions and Answers Dump

Definition 1: Amazon S3 or Amazon Simple Storage Service is a “simple storage service” offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.

Definition 2: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.


AWS S3 Facts and summaries

  1. S3 is a universal namespace, meaning each S3 bucket you create must have a unique name that is not being used by anyone else in the world.
  2. S3 is object based: i.e. it allows you to upload files.
  3. Files can be from 0 Bytes to 5 TB
  5. S3 has unlimited storage.
  6. Files are stored in Buckets.
  7. Read after write consistency for PUTS of new Objects
  8. Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)
  9. S3 Storage Classes/Tiers:
    • S3 Standard (durable, immediately available, frequently accesses)
    • Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering): It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
    • S3 Standard-Infrequent Access – S3 Standard-IA (durable, immediately available, infrequently accessed)
    • S3 – One Zone-Infrequent Access – S3 One Zone IA: Same as IA, but data is stored in a single Availability Zone only
    • S3 – Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails, etc.)
    • Glacier – Archived data, where retrievals can take 3-5 hours

    You can have a bucket that has different objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA.

  10. The default URL for S3 hosted websites lists the bucket name first followed by s3-website-region.amazonaws.com . Example: enoumen.com.s3-website-us-east-1.amazonaws.com
  11. Core fundamentals of an S3 object
    • Key (name)
    • Value (data)
    • Version (ID)
    • Metadata
    • Sub-resources (used to manage bucket-specific configuration)
      • Bucket Policies, ACLs,
      • CORS
      • Transfer Acceleration
  12. Object-based storage only for files
  13. Not suitable to install OS on.
  14. Successful uploads will generate an HTTP 200 status code.
  15. S3 Security – Summary
    • By default, all newly created buckets are PRIVATE.
    • You can set up access control to your buckets using:
      • Bucket Policies – Applied at the bucket level
      • Access Control Lists – Applied at an object level.
    • S3 buckets can be configured to create access logs, which log all requests made to the S3 bucket. These logs can be written to another bucket.
  16. S3 Encryption
    • Encryption In-Transit (SSL/TLS)
    • Encryption At Rest:
      • Server side Encryption (SSE-S3, SSE-KMS, SSE-C)
      • Client Side Encryption
    • Remember that we can use a Bucket policy to prevent unencrypted files from being uploaded, by creating a policy which only allows requests that include the x-amz-server-side-encryption parameter in the request header (see the policy sketch at the end of this list).
  17. S3 CORS (Cross Origin Resource Sharing):
    CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

    • Used to enable cross origin access for your AWS resources, e.g. S3 hosted website accessing javascript or image files located in another bucket. By default, resources in one bucket cannot access resources located in another. To allow this we need to configure CORS on the bucket being accessed and enable access for the origin (bucket) attempting to access.
    • Always use the S3 website URL, not the regular bucket URL: e.g. http://acloudguru.s3-website-eu-west-2.amazonaws.com rather than https://s3-eu-west-2.amazonaws.com/acloudguru
  18. S3 CloudFront:
    • Edge locations are not just READ only – you can WRITE to them too (i.e. PUT an object onto them).
    • Objects are cached for the life of the TTL (Time to Live)
    • You can clear cached objects, but you will be charged. (Invalidation)
  19. S3 Performance optimization – 2 main approaches to Performance Optimization for S3:
    • GET-Intensive Workloads – Use CloudFront
    • Mixed Workloads – Avoid sequential key names for your S3 objects. Instead, add a random prefix like a hex hash to the key name to prevent multiple objects from being stored on the same partition.
      • mybucket/7eh4-2019-03-04-15-00-00/cust1234234/photo1.jpg
      • mybucket/h35d-2019-03-04-15-00-00/cust1234234/photo2.jpg
      • mybucket/o3n6-2019-03-04-15-00-00/cust1234234/photo3.jpg
  20. The best way to handle large objects uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts.
  21. You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
  22. Bucket names cannot start with a . or – character. S3 bucket names can contain both the . and – characters, but there can only be one . or one – between labels. E.g. mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not valid bucket names.
  23. What is the maximum number of S3 buckets allowed per AWS account (by default)? 100
  24. You successfully upload an item to the us-east-1 region. You then immediately make another API call and attempt to read the object. What will happen?
    All AWS regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model (where you will sometimes get stale results if you have recently made changes)
  25. S3 bucket policies require a Principal be defined. Review the access policy elements here
  26. What checksums does Amazon S3 employ to detect data corruption?

    Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
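Here is the policy sketch referenced in item 16 above, expressed with boto3 (the bucket name is a placeholder assumption): a Deny statement rejects any PutObject request that does not carry the x-amz-server-side-encryption header.

import json
import boto3

bucket = "my-secure-bucket"  # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        # "Null": "true" matches requests where the encryption header is absent
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))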

Top
Reference: AWS S3

AWS S3 Top 10 Questions and Answers Dump

Q0: You’ve written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 – 500 MB. You’ve seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider?

  • A. Create multiple threads and upload the objects in the multiple threads
  • B. Write the items in batches for better performance
  • C. Use the Multipart upload API
  • D. Enable versioning on the Bucket


Answer: C. The other options are invalid, since the best way to handle large object uploads to the S3 service is to use the Multipart upload API. The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
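A minimal boto3 sketch of those three steps (the bucket name, key, and 8 MB part size are illustrative assumptions; every part except the last must be at least 5 MB):

import boto3

s3 = boto3.client("s3")
bucket, key = "my-upload-bucket", "videos/clip.mp4"  # placeholders

def read_chunks(path, size=8 * 1024 * 1024):  # 8 MB parts; S3's minimum is 5 MB
    with open(path, "rb") as f:
        while chunk := f.read(size):
            yield chunk

# Step 1: initiate the upload
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

# Step 2: upload the object parts
parts = []
for part_number, chunk in enumerate(read_chunks("clip.mp4"), start=1):
    resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=part_number,
                          UploadId=upload_id, Body=chunk)
    parts.append({"ETag": resp["ETag"], "PartNumber": part_number})

# Step 3: complete the upload; S3 assembles the object from the parts
s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                             MultipartUpload={"Parts": parts})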

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html


Top


Q2: You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed an application from Amazon S3 buckets?

  • A. AWS::Serverless::Api
  • B. AWS::Serverless::Application
  • C. AWS::Serverless::Layerversion
  • D. AWS::Serverless::Function


Answer: B.
The AWS::Serverless::Application resource in an AWS SAM template is used to embed an application from Amazon S3 buckets.
Reference: Declaring Serverless Resources

Top

Q3: A static web site has been hosted on a bucket and is now being accessed by users. One of the web pages javascript section has been changed to access data which is hosted in another S3 bucket. Now that same web page is no longer loading in the browser. Which of the following can help alleviate the error?

  • A. Enable versioning for the underlying S3 bucket.
  • B. Enable Replication so that the objects get replicated to the other bucket
  • C. Enable CORS for the bucket
  • D. Change the Bucket policy for the bucket to allow access from the other bucket


Answer – C

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

Cross-Origin Resource Sharing: Use-case Scenarios The following are example scenarios for using CORS:

Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
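As a rough sketch of what the Scenario 1 fix might look like with boto3 (the bucket name and origin come from the scenario; the allowed headers and max-age values are illustrative assumptions):

import boto3

boto3.client("s3").put_bucket_cors(
    Bucket="website",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["http://website.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET", "PUT"],  # the authenticated requests the page makes
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)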

Reference: Cross-Origin Resource Sharing (CORS)


Top

Q4: Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below

  • A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content.
  • B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code.
  • C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects.
  • D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user’s objects to that bucket.


Answer: C.
The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3.
You can then provide access to the objects based on key values generated from the user ID.
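One possible shape of that flow with boto3 (the bucket name, user ID, and key layout are assumptions for illustration): the application authenticates the user itself, then asks STS for temporary credentials whose inline policy is scoped to that user's prefix.

import json
import boto3

user_id = "1234"  # your application's user id (assumption)
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::photo-share-bucket/users/{user_id}/*",  # placeholder bucket
    }],
}

creds = boto3.client("sts").get_federation_token(
    Name=f"app-user-{user_id}",
    Policy=json.dumps(scoped_policy),
    DurationSeconds=3600,
)["Credentials"]

# Hand these temporary credentials to the client; they expire automatically.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.get_object(Bucket="photo-share-bucket", Key=f"users/{user_id}/photo1.jpg")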

Reference: The AWS Security Token Service (STS)


Top

Q5: Both ACLs and Bucket Policies can be used to grant access to S3 buckets. Which of the following statements is true about ACLs and Bucket policies?

  • A. Bucket Policies are written in JSON and ACLs are written in XML
  • B. ACLs can be attached to S3 objects or S3 Buckets
  • C. Bucket Policies and ACLs are written in JSON
  • D. Bucket policies are only attached to s3 buckets, ACLs are only attached to s3 objects

Answer: A and B.
Only Bucket Policies are written in JSON; ACLs are written in XML.
While Bucket Policies are indeed only attached to S3 buckets, ACLs can be attached to S3 buckets OR S3 objects.
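A small boto3 illustration of answer B (the bucket and key names are made up): the same canned-ACL call can target either a bucket or a single object, whereas a JSON bucket policy can only be attached to the bucket itself.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_acl(Bucket="my-demo-bucket", ACL="private")                    # ACL at the bucket level
s3.put_object_acl(Bucket="my-demo-bucket", Key="report.pdf", ACL="private")  # ACL at the object level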
Reference:

Top


Q6: What are good options to improve S3 performance when you have significantly high numbers of GET requests?

  • A. Introduce random prefixes to S3 objects
  • B. Introduce random suffixes to S3 objects
  • C. Setup CloudFront for S3 objects
  • D. Migrate commonly used objects to Amazon Glacier

Answer: C
CloudFront caching is an excellent way to avoid putting extra strain on the S3 service and to improve the response times of requests by caching data closer to users at CloudFront locations.
S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
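If Transfer Acceleration turns out to be the better fit for your workload, enabling it could look like this with boto3 (bucket and file names are placeholder assumptions):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-demo-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route uploads through the accelerate endpoint
fast_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
fast_s3.upload_file("big-dataset.bin", "my-demo-bucket", "big-dataset.bin")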
Reference: Amazon S3 Transfer Acceleration

Top

Q7: If an application is storing hourly log files from thousands of instances from a high-traffic website, which naming scheme would give optimal performance on S3?

  • A. Sequential
  • B. HH-DD-MM-YYYY-log_instanceID
  • C. YYYY-MM-DD-HH-log_instanceID
  • D. instanceID_log-HH-DD-MM-YYYY
  • E. instanceID_log-YYYY-MM-DD-HH


Answer: A, B, C, D, and E (all of the above).
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications.

Reference: Amazon S3 Announces Increased Request Rate Performance


Top

Q8: You are working with the S3 API and receive an error message: 409 Conflict. What is the possible cause of this error?

  • A. You’re attempting to remove a bucket without emptying the contents of the bucket first.
  • B. You’re attempting to upload an object to the bucket that is greater than 5TB in size.
  • C. Your request does not contain the proper metadata.
  • D. Amazon S3 is having internal issues.

Answer: A.

Reference: S3 Error codes

Top

Q9: You created three S3 buckets – “mywebsite.com”, “downloads.mywebsite.com”, and “www.mywebsite.com”. You uploaded your files and enabled static website hosting. You specified both of the default documents under the “enable static website hosting” header. You also set the “Make Public” permission for the objects in each of the three buckets. You create the Route 53 Aliases for the three buckets. You are going to have your end users test your websites by browsing to http://mywebsite.com/error.html, http://downloads.mywebsite.com/index.html, and http://www.mywebsite.com. What problems will your testers encounter?

  • A. http://mywebsite.com/error.html will not work because you did not set a value for the error.html file
  • B. There will be no problems, all three sites should work.
  • C. http://www.mywebsite.com will not work because the URL does not include a file name at the end of it.
  • D. http://downloads.mywebsite.com/index.html will not work because the “downloads” prefix is not a supported prefix for S3 websites using Route 53 aliases

Answer: B.
It used to be that the only allowed domain prefix when creating Route 53 Aliases for S3 static websites was the “www” prefix. However, this is no longer the case. You can now use other subdomains.

Reference: Hosting a Static Website on Amazon S3

Top

Q10: Which of the following is NOT a common S3 API call?

  • A. UploadPart
  • B. ReadObject
  • C. PutObject
  • D. DownloadBucket

Answer: B and D. Neither ReadObject nor DownloadBucket exists in the S3 API; the call to read an object is GetObject, and a bucket is downloaded object by object (e.g. ListObjects followed by GetObject).

Top

Other AWS Facts and Summaries

Latest DevOps and SysAdmin Feed


DevOps is a set of practices and tools that organizations use to accelerate software development and improve the quality of their software products. It aims to bring development and operations teams together, so they can work more collaboratively and efficiently to deliver software faster and with fewer errors.

The goal of DevOps is to automate as much of the software delivery process as possible, using tools such as continuous integration, continuous delivery, and infrastructure as code. This allows teams to move faster and release new features and bug fixes more frequently, while also reducing the risk of errors and downtime.

DevOps also emphasizes the importance of monitoring, logging, and testing to ensure that software is performing well in production. By continuously monitoring and analyzing performance data, teams can quickly identify and resolve any issues that arise.

In summary, DevOps is a combination of people, processes, and technology that organizations use to improve their software delivery capabilities, increase efficiency, and reduce risk.

What is DevOps in Simple English?

What is a System Administrator?


DevOps: In the IT world, DevOps means Development Operations. DevOps is the bridge between the developers, the servers, and the infrastructure, and its main role is to automate the process of delivering code to operations.
DevOps on Wikipedia: a software development process that emphasizes communication and collaboration between product management, software development, and operations professionals. DevOps also automates the process of software integration, testing, deployment and infrastructure changes.[1][2] It aims to establish a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.

DevOps Latest Feeds


DevOps Resources

  1. What is DevOps? Tackling some frequently asked questions
  2. Find Remote DevOps Jobs here.