DevOps Interviews Question and Answers and Scripts

Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google, and Microsoft).

Credit: Steve Nouri


What is a Canary Deployment?

A canary deployment, or canary release, allows you to roll out your features to only a subset of users as an initial test to make sure nothing in your system broke.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage – say, 10% – of your user base to the new version while maintaining 90% of users on the old version. If that 10% reports no errors, you can roll it out to gradually more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will have never even seen the problem.
Canary deployment benefits include zero downtime, easy rollout and quick rollback – plus the added safety from the gradual rollout process. It also has some drawbacks – the expense of maintaining multiple server instances, the difficult clone-or-don’t-clone database decision.
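The 90/10 split described above can be sketched with weighted load-balancer upstreams. A hypothetical nginx upstream block (the hostnames, ports, and file path are made up for illustration):

```shell
# Write a hypothetical nginx upstream config that sends ~90% of
# traffic to the stable version and ~10% to the canary.
cat > /tmp/canary-upstream.conf <<'EOF'
upstream app {
    server app-stable.internal:8080 weight=9;   # ~90% of requests
    server app-canary.internal:8080 weight=1;   # ~10% of requests
}
EOF
cat /tmp/canary-upstream.conf
```

Rolling out to gradually more users then amounts to raising the canary weight and reloading the balancer; rolling back is lowering it to zero.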

Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.


What is a Blue Green Deployment?

Reference: Blue Green Deployment

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your model, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the model in Green, you switch the router, so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.

How do you manage a software release?

There are some steps to follow.
• Create a check list
• Create a release branch
• Bump the version
• Merge the release branch into master and tag it
• Use a pull request to merge the release branch
• Deploy master to the Prod environment
• Merge back into develop and delete the release branch
• Generate the change log
• Communicate with stakeholders
• Groom the issue tracker
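The branch-and-tag steps above can be sketched with plain git commands. This is a minimal sketch; the branch names, version number, and paths are illustrative, and "main" plays the role of master:

```shell
# Sketch of a release: branch, bump version, merge, tag, clean up.
set -e
rm -rf /tmp/release-demo && mkdir -p /tmp/release-demo
cd /tmp/release-demo
git init -q
git checkout -qb main
git config user.email dev@example.com
git config user.name Dev
echo "1.0.0" > VERSION
git add . && git commit -qm "initial commit"
git checkout -qb release/1.1.0                  # create a release branch
echo "1.1.0" > VERSION
git commit -qam "bump version to 1.1.0"         # bump the version
git checkout -q main
git merge -q --no-ff -m "release 1.1.0" release/1.1.0   # merge it back
git tag v1.1.0                                  # tag the release
git branch -d release/1.1.0                     # delete the release branch
git tag --list                                  # prints v1.1.0
```

In a real project the same merge would also go back into the develop branch, typically via a pull request rather than a direct merge.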

How to automate the whole build and release process?

• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD
The above process can be automated with Jenkins by creating jobs for each of these steps.

Have you ever participated in production deployments? If yes, what is the procedure?

• Preparation & planning: what kind of system/technology was supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes were going to talk to each other in a foolproof manner
• The production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, & installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system

DevOps Tools and Concepts

What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?

By the name DevOps, it's very clear that it's a collaboration of Development and Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools that helps automate the whole infrastructure.
DevOps is basically an implementation of Agile methodology on the development side as well as the operations side.

We need DevOps to fulfil the need to deliver more, faster, and better applications that meet the ever-growing demands of users. DevOps helps deployments happen really fast compared to any traditional approach.

The key aspects or principles behind DevOps are:

  • Infrastructure as a Code
  • Continuous Integration
  • Continuous Deployment
  • Automation
  • Continuous Monitoring
  • Security

Popular tools for DevOps are:

  • Git
  • AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
  • Jenkins
  • Ansible
  • Puppet
  • Nagios
  • Docker
  • ELK (Elasticsearch, Logstash, Kibana)

Can we consider DevOps as Agile methodology?

Of course, we can! The only difference between Agile methodology and DevOps is that Agile methodology is implemented only for the development section, while DevOps implements agility on both the development and the operations sections.

What is the job Of HTTP REST API in DevOps?

DevOps centers on automating your infrastructure and promoting changes through a pipeline of stages: every CI/CD pipeline has stages such as build, test, sanity test, UAT, and deployment to the production environment. Each stage uses different tools and a different technology stack, so there must be a way for the tools in the chain to integrate with one another to complete the toolchain. That is where HTTP REST APIs come in: each tool communicates with the others through its API, and users can also use SDKs, such as Boto3 for Python, to call AWS APIs and automate work based on events. These days pipelines are mostly event-driven rather than batch processing.

What is Scrum?

Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration is typically two weeks long. Scrum consists of three roles: product owner, scrum master, and team.

What are microservices, and how do they enable efficient DevOps practices?

In a conventional architecture, every application is a monolith: it is developed by a single group of developers and deployed as a single application on many machines, exposed to the outside world using load balancers. Microservices means splitting your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. By splitting it, developers can also be organized into small groups, and each piece of the application may follow different rules for an efficient development process, in the spirit of agile development. Each service uses REST APIs (or message queues) to communicate with the other services.
The build and release of one non-robust service therefore does not affect the whole architecture; instead, only some functionality is lost. That assurance enables efficient and faster CI/CD pipelines and DevOps practices.

What is Continuous Delivery?

Continuous Delivery is an extension of Continuous Integration that primarily aims to get the features developers are continuously building out to end users as soon as possible.
During this process, the code passes through several stages, such as QA and staging, before delivery to the PRODUCTION system.

Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.

DevOps: Continuous Integration vs Continuous Delivery

Why Automate?

Developers and administrators have traditionally provisioned their infrastructure manually. Rather than relying on manual steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.


What is Puppet?

Puppet is a configuration management tool that is used to automate administration tasks.

What is Configuration Management?

Configuration management is a systems engineering process. Applied over the life cycle of a system, configuration management provides visibility and control of the system's performance and its functional and physical attributes, recording their status in support of change management.

Software Configuration Management Features are:

• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes

What are some of the most popular DevOps tools?

• Selenium
• Puppet
• Chef
• Git
• Jenkins
• Ansible

What is Vagrant and what are its uses?

Vagrant is a tool for creating and managing virtual environments for testing and developing software.
It originally used VirtualBox as the hypervisor for its virtual environments, and it currently also supports KVM (Kernel-based Virtual Machine).

What’s a PTR in DNS?

A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups, mapping an IP address back to a hostname.

What testing is necessary to ensure a new service is ready for production?

Continuous testing

What is Continuous Testing?

Continuous testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain immediate feedback on the business risks associated with the latest build.

What are the key elements of continuous testing?

Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.

How does HTTP work?

The HTTP protocol works in a client-server model, like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.

What is IaC? How you will achieve this?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning as the DevOps team uses for source code. This can be achieved by using tools such as Chef, Puppet, Ansible, CloudFormation, etc.

Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.

What are patterns and anti-patterns of software delivery and deployment?


What are Microservices?

Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps.

  • Small
  • Decoupled
  • Owned by self-contained teams

Version Control

What is a version control system?

A Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously
• Does not allow developers to overwrite each other's changes
• Maintains the history of every version
There are two types of Version Control Systems:
1. Centralized Version Control Systems, e.g. SVN
2. Distributed/Decentralized Version Control Systems, e.g. Git, Mercurial

What is Source Control?

An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.

Source control basics Whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code change, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can

• Collaborate on code with your team.

• Isolate your work until it is ready.

• Quickly troubleshoot issues by identifying who made changes and what the changes were.

Source code management systems help streamline the development process and provide a centralized source for all your code.

What is Git and explain the difference between Git and SVN?

Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used with remote servers such as GitHub, where our repositories are stored.

• Git is a decentralized version control tool; SVN is a centralized version control tool.
• Git keeps a local repo with the full history of the whole project on every developer's hard drive, so if there is a server outage you can easily recover from a teammate's local Git repo; SVN relies only on the central server to store all versions of the project files.
• Git push and pull operations are fast; in SVN they are slower.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• In Git, client nodes can share the entire repository on their local systems; in SVN, version history is stored only in the server-side repository.
• Git commits can be done offline; SVN commits can be done only online.
• In SVN, work is shared automatically with every commit; in Git, nothing is shared until you push.

Describe branching strategies?

Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.

Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.

Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have
progressed since the release was initiated earlier.

What are Pull requests?

Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements.
Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.


What are the default file permissions for a file, and how can I modify them?

The default file permissions are: rw-r--r-- (644).
They come from the umask: a new file starts from mode 666 and the umask bits are removed, so the common default umask of 022 yields 644. If I want to change the default file permissions, I need to use the umask command, e.g. umask 027 (new files then get 640).
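A quick sketch of how the umask shapes a newly created file's permissions (the file path is arbitrary; stat -c is the GNU coreutils form):

```shell
# New files start from mode 666 and the umask bits are cleared,
# so umask 022 produces mode 644 (rw-r--r--).
f=/tmp/umask-demo-file
rm -f "$f"
umask 022            # the common default
touch "$f"
stat -c '%a' "$f"    # prints 644
```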

What is a  kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.

What is difference between grep -i and grep -v?


-i ignores case (alphabet differences); -v inverts the match, printing only the lines that do not match.
Example:  ls | grep -i docker
ls | grep -v docker
With -v you can't see anything with the name docker.tar.gz in the output.

How can you allocate a particular amount of space to a file?

This is generally used to create swap space on a server. Let's say on the machine below I have to create a swap space of 1 GB; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
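The dd invocation can be sketched at a smaller scale so it runs quickly; the follow-up swap commands are shown as comments since they need root (the file paths are illustrative):

```shell
# Create a fixed-size file with dd (1 MiB here so the demo is quick;
# the answer above uses bs=1G count=1 for a 1 GiB swap file).
f=/tmp/swapfile-demo
dd if=/dev/zero of="$f" bs=1M count=1 2>/dev/null
wc -c < "$f"          # prints 1048576 (1 MiB)
# To actually enable it as swap (requires root):
#   chmod 600 /swapfile1 && mkswap /swapfile1 && swapon /swapfile1
```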

What is concept of sudo in linux?

Sudo(superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.

What are the checks to be done when a Linux build server become suddenly slow?

Perform a check on the following items:
1. Application Level Troubleshooting: check the application server log files, WebLogic logs, web server logs, and HTTP logs to find whether there are any issues in server receive or response time; check for any memory leaks in applications.
2. System Level Troubleshooting: perform a check on disk space, RAM, and I/O read-write issues.
3. Dependent Services Troubleshooting: check whether there are any issues with the network, antivirus, firewall, or SMTP server response time.


What is Jenkins?

Jenkins is an open-source continuous integration tool written in Java. It keeps track of a version control system, initiating and monitoring the build system if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.

What is the difference between Maven, Ant and Jenkins?

Maven and Ant are Build Technologies whereas Jenkins is a continuous integration(CI/CD) tool

What is continuous integration?

When multiple developers or teams are working on different segments of the same web application, we need to perform integration testing by integrating all the modules. To do that, an automated process for each piece of code is performed on a daily basis so that all of the code gets tested. This whole process is termed continuous integration.

Devops: Continuous Integration

Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

The microservices architecture is a design approach to build a single application as a set of small services.

What are the advantages of Jenkins?

• Bug tracking is easy at an early stage in the development environment.
• Provides a very large number of plugins.
• Iterative improvement to the code; code is basically divided into small sprints.
• Build failures are caught at the integration stage.
• For each code commit, an automatic build report notification is generated.
• To notify developers about build report success or failure, it can be integrated with an LDAP mail server.
• Achieves continuous integration, agile development, and a test-driven development environment.
• With simple steps, a Maven release project can also be automated.

Which SCM tools does Jenkins supports?

Source code management tools supported by Jenkins are below:
• AccuRev
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase


I have 50 jobs on the Jenkins dashboard, and I want to build all the jobs at one time

In Jenkins there is a plugin called "Build after other projects are built". We can provide the job names there, and if one parent job runs, it will automatically run all the other jobs. Alternatively, we can use pipeline jobs.

How can I integrate all the tools with Jenkins?

Navigate to Manage Jenkins and then Global Tool Configuration; there you have to provide all the details, such as the Git URL, Java version, Maven version, path, etc.

How to install Jenkins via Docker?

The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image and run it as a container in Docker using the following docker run command:

docker run \
  -u root \
  --rm \
  -d \
  -p 8080:8080 \
  -p 50000:50000 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkinsci/blueocean

• Proceed to the post-installation setup wizard.
• Access the Jenkins/Blue Ocean Docker container:

docker exec -it jenkins-blueocean bash

• Access the Jenkins console log through Docker logs:

docker logs <docker-container-name>

• Access the Jenkins home directory:

docker exec -it <docker-container-name> bash

Bash – Shell scripting

Write a shell script to add two numbers

echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=$(expr $a + $b)
echo "$a + $b = $c"

How to get a file that consists of last 10 lines of the some other file?

tail -10 filename > newfilename
(Use a different output file: redirecting to the same file would truncate it before tail reads it.)

How to check the exit status of the commands?

echo $?
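A short sketch of how $? behaves after successful and failing commands:

```shell
# $? holds the exit status of the most recently executed command:
# 0 means success, any non-zero value means failure.
true
echo "after true: $?"      # prints: after true: 0
false
echo "after false: $?"     # prints: after false: 1
```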

How to get the information from file which consists of the word “GangBoard”?

grep “GangBoard” filename

How to search the files with the name of “GangBoard”?

find / -type f -name “*GangBoard*”

Write a shell script to print only prime numbers?

DevOps script to print prime numbers
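A minimal sketch of such a script, using trial division (the limit of 20 is arbitrary):

```shell
# Print the prime numbers from 2 up to a given limit.
primes_upto() {
    limit=$1
    for n in $(seq 2 "$limit"); do
        is_prime=1
        # Trial division: n is composite if any d in 2..n-1 divides it.
        for d in $(seq 2 $((n - 1))); do
            if [ $((n % d)) -eq 0 ]; then
                is_prime=0
                break
            fi
        done
        if [ "$is_prime" -eq 1 ]; then
            echo "$n"
        fi
    done
}
primes_upto 20    # prints 2 3 5 7 11 13 17 19, one per line
```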

How do you pass parameters to a script, and how can you read them inside it? Pass them on the command line: ./script.sh parameter1 parameter2
Inside the script, $1 and $2 hold the individual parameters, $* (or $@) holds all of them, and $# holds their count.
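A small sketch showing the positional parameters in action (the script path is arbitrary):

```shell
# Write a script that echoes its positional parameters, then run it.
script=/tmp/params-demo.sh
cat > "$script" <<'EOF'
#!/bin/sh
echo "first:  $1"
echo "second: $2"
echo "all:    $*"
echo "count:  $#"
EOF
chmod +x "$script"
"$script" hello world
```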

Monitoring – Refactoring

My application is not coming up for some reason. How can you bring it up?

We need to follow these steps:
• Check the network connection
• Check whether the web server is receiving users' requests
• Check the logs
• Check the process IDs to see whether the services are running or not
• Check whether the application server is receiving users' requests (check the application server logs and processes)
• Check whether a network-level 'connection reset' is happening somewhere

What is multifactor authentication? What is the use of it?

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.

• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform

I want to copy the artifacts from one location to another location in cloud. How?

Create two S3 buckets, one to use as the source and the other to use as the destination, and then create the replication policies.

How to delete log files older than 10 days?

find /var/log -mtime +10 -name "*.log" -exec rm -f {} \; 2>/dev/null
(Give find a starting path such as /var/log; without one, it searches the current directory.)


What are the Advantages of Ansible?

• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural

What’s the use of Ansible?

Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let's say we want to deploy an application to hundreds of nodes by executing just one command; that is where Ansible comes into the picture, though one should have some knowledge of Ansible scripts to understand or execute them.

What are the Pros and Cons of Ansible?

Pros:
1. Open source
2. Agentless
3. Improved efficiency, reduced cost
4. Less maintenance
5. Easy-to-understand YAML files
Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management

What is the difference among Chef, Puppet, and Ansible?

• Interoperability: Ansible supports Windows nodes, but the control server must be on Linux/Unix; Chef and Puppet work only on Linux/Unix.
• Configuration language: Ansible uses YAML (and is written in Python); Chef uses Ruby; Puppet uses the Puppet DSL.
• Availability: Ansible has a single active node; Chef has a primary server and a backup server; Puppet has a multi-master architecture.

How to access variable names in Ansible?

Using the hostvars method we can access the variables like below:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}


What is Docker?

Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.

What is Docker image?

A Docker image is the source of a Docker container; in other words, Docker images are used to create containers.

What is a Docker Container?

Docker Container is the running instance of Docker Image

How to stop and restart the Docker container?

To stop a container: docker stop <container-id>
To restart the container: docker restart <container-id>

What platforms does Docker run on?

Docker runs on Linux distributions and cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+

• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace

Note that Docker containers require a Linux kernel. Docker Desktop lets you run Docker on Windows and macOS by way of a lightweight VM, and Windows Server also supports Windows containers, but these are mainly used for development and testing; production workloads most commonly run on Linux.

What are the tools used for docker networking?

For Docker networking and multi-host orchestration we generally use Kubernetes and Docker Swarm.

What is docker compose?

Let's say you want to run multiple Docker containers; in that case you create a Docker Compose file (docker-compose.yml) and type the command docker-compose up. It will run all the containers mentioned in the compose file.
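A minimal docker-compose.yml sketch; the service names, images, ports, and environment values are illustrative, not a fixed convention:

```yaml
# docker-compose.yml — hypothetical two-service stack
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # host:container
    depends_on:
      - app
  app:
    image: my-app:1.0    # assumed application image
    environment:
      - APP_ENV=production
```

Running docker-compose up -d starts both containers in the background; docker-compose down stops and removes them.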

How to deploy docker container to aws?

Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services with it, we can launch our applications.

What is the fundamental disadvantage of Docker containers?

The lifetime of data inside a container is tied to the container: once a container is destroyed you cannot recover any data inside it; the data is lost forever. Persistent storage for data inside containers must therefore be achieved using volumes mounted from an external source, such as the host machine or NFS drivers.

What are Docker Engine and Docker Compose?

The Docker Engine client contacts the Docker daemon on the machine, which creates the runtime environment and processes for any container. Docker Compose links several containers together to form a stack, and is used for building application stacks such as LAMP, WAMP, and XAMPP.

In which different modes can a container be run?

A Docker container can be run in two modes:
Attached: the container runs in the foreground of the system you are running it on; it gives you a terminal inside the container when the -t option is used with it, and every log is redirected to the stdout screen.
Detached: this mode is usually used in production, where the container runs as a background process and every output of the container is redirected to log files
under /var/lib/docker/containers/<container-id>/<container-id>-json.log, which can be viewed with the docker logs command.

What will the output of the docker inspect command be?

docker inspect <container-id> will give its output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of host- or container-specific information, such as the underlying storage driver and log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID…]
Options:
• --format, -f  Format the output using the given Go template
• --size, -s    Display total file sizes if the type is container
• --type        Return JSON for a specified type

What is Docker Swarm?

A group of virtual machines running Docker Engine can be clustered and maintained as a single system, with the resources also being shared by the containers; the Docker Swarm manager schedules Docker containers onto any of the machines in the cluster according to resource availability.
docker swarm init can be used to initialize a Docker Swarm cluster, and docker swarm join run with the manager's IP from a client joins that node into the swarm cluster.

What are Docker volumes, and what sort of volume should be used to achieve persistent storage?

Docker volumes are filesystem mount points created by the user for a container, and a volume can be used by multiple containers. There are different sorts of volume mounts available: empty dir, host (bind) mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud volumes, or even NFS and CIFS filesystems. A volume should be mounted to one of these external stores to achieve persistent storage, because files inside a container live only as long as the container is present; if the container is deleted, the data is lost.

How do you version control Docker images?

Docker images can be version controlled using tags, where you can assign a tag to any image using the docker tag <image-id> command. Furthermore, if you push to a Docker Hub registry without tagging, the default tag latest is assigned; even if an image tagged latest is already present, the push reassigns latest to the most recently pushed image.

What is difference between docker image and docker container?

A Docker image is a read-only template that contains the instructions for starting a container.
Docker container is a runnable instance of a docker image.

What is Application Containerization?

It is an OS-level virtualization technique used to deploy applications without launching an entire VM for each application; multiple isolated applications or services can access the same host and run on the same OS.

What is the syntax for building a Docker image?

docker build -f <Dockerfile> -t imagename:version .
(The trailing dot is the build context.)
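For context, a minimal Dockerfile sketch that such a build command might consume; the base image and file names are assumptions, not a requirement:

```dockerfile
# Minimal illustrative Dockerfile
# Assumed base image
FROM python:3.12-slim
WORKDIR /app
# Assumed application entry point
COPY app.py .
CMD ["python", "app.py"]
```

It would then be built with, for example, docker build -f Dockerfile -t my-app:1.0 . and run with docker run my-app:1.0.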

How do you run a Docker image?

docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v
<hostvolume>:<containervolume> imagename:version

How to log into a container?

docker exec -it <container-id> /bin/bash


What does the commit object contain?

The commit object contains the following components:
• A set of files, representing the state of the project at a given point in time
• References to parent commit objects
• A SHA-1 name: a 40-character string that uniquely identifies the commit object (also called the hash)

Explain the difference between git pull and git fetch?

Git pull command basically pulls any new changes or commits from a branch from your central repository and updates your target branch in your local repository.
Git fetch is also used for the same purpose, but it's slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in your target branch, git fetch must be followed by a git merge. Our target branch will only be updated after merging the target branch and fetched branch. Just to make it easy for us, remember the equation below:
Git pull = git fetch + git merge

How do we know in Git if a branch has already been merged into master?

git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.

What is ‘Staging Area’ or ‘Index’ in GIT?

Before committing, a file must be staged and reviewed in an intermediate area known as the ‘Staging Area’ or ‘Index’. Files are placed there with: git add <file>

What is Git Stash?

Let’s say you’ve been working on part of your project, things are in a messy state and you want to switch branches for some time to work on something else. The problem is, you don’t want to do a commit of your half-done work just, so you can get back to this point later. The answer to this issue is Git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.

What is Git stash drop?

The git ‘stash drop’ command is used to remove a stashed item. By default it removes the most recently added stash, but it can also remove a specific item if you include it as an argument (for example, git stash drop stash@{1}).
To find the particular stash item you want to remove from the list of stashed items, you can use the command below:
git stash list: It will display the list of stashed items as follows:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log

What is the function of ‘git config’?

Git uses our username to associate commits with an identity. The git config command can be used to change our Git configuration, including your username.
Suppose you want to give a username and email id to associate commit with an identity so that you can know who has made a commit. For that I will use:
git config --global user.name "Your Name": This command will add your username.
git config --global user.email "Your E-mail Address": This command will add your email id.

How can you create a repository in Git?

To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.

What language is used in Git?

Git is written in the C language, and since it's written in C it is very fast and avoids the overhead of language runtimes.

What is SubGit?

SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository and uses both Subversion and Git if you like.

How can you clone a Git repository via Jenkins?

First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.

What are the advantages of using Git?

1. Data redundancy and replication
2. High availability
3. Only one. git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used with any sort of project.

What is git add?

It adds the file changes to the staging area

What is git commit? 

Records the staged changes (the index) as a new commit on the current branch (HEAD)

What is git push?

Sends the changes to the remote repository

What is git checkout?

Switch branch or restore working files

What is git branch?

Creates a branch

What is git fetch?

Fetch the latest history from the remote server and updates the local repo

What is git merge?

Joins two or more branches together

What is git pull?

Fetch from and integrate with another repository or a local branch (git fetch + git merge)

What is git rebase?

Process of moving or combining a sequence of commits to a new base commit

What is git revert?

To revert a commit that has already been published and made public

What is git clone?

Clones the git repository and creates a working copy in the local machine

How can I modify the commit message in git?

Use the following command and enter the required message:
git commit --amend

How do you handle merge conflicts in git?

Follow the steps
1. Create Pull request
2. Modify according to the requirement by sitting with developers
3. Commit the correct file to the branch
4. Merge the current branch with master branch.

What is the Git command to send the modifications to the master branch of your remote repository?

Use the command “git push origin master”


What are the benefits of NoSQL database on RDBMS?

1. ETL overhead is very low
2. Support for semi-structured and unstructured data
3. Schema changes over time are handled easily
4. Key-value oriented functionality
5. The ability to scale horizontally
6. Many data structures (document, key-value, column, graph) are provided
7. A choice of vendors is available


What is Maven?

Maven is a DevOps tool used for building Java applications which helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.


What is Numpy

There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing and provides a powerful n-dimensional array object, along with tools to integrate C, C++ and other languages. NumPy adds support for large, multi-dimensional arrays and matrices, together with a large collection of high-level mathematical functions: linear algebra, statistics, polynomials, financial functions, sorting and searching, and so on. In simple terms, NumPy arrays are an optimized alternative to Python lists.

Why is python numpy better than lists?

Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.

Describe the map function in Python?

The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.
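A minimal sketch of map in action (the values are illustrative):

```python
# map applies the function (first argument) to every element
# of the iterable (second argument) and returns a lazy iterator
nums = [1, 2, 3, 4]
squares = list(map(lambda x: x ** 2, nums))
print(squares)  # [1, 4, 9, 16]
```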

How to generate an array of ‘100’ random numbers sampled from a standard normal distribution using Numpy


>>> import numpy as np
>>> samples = np.random.randn(100)

np.random.randn(100) will create 100 random numbers sampled from a standard normal distribution with mean 0 and standard deviation 1.

How to count the occurrence of each value in a numpy array?

Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
The argument to bincount() must be a one-dimensional array of non-negative integers (booleans also work). Negative integers are invalid.

Output: [4 1 1 1 2 3 0 0 0 2]

Does Numpy Support Nan?

nan, short for “not a number”, is a special floating-point value defined by the IEEE-754 specification. NumPy supports nan, but the definition of nan is system dependent and some older systems (such as Cray and VAX) do not have all-round support for it.

What does ravel() function in numpy do? 

It returns a flattened, one-dimensional view (or copy, if needed) of a multi-dimensional array

How to remove from one array those items that exist in another? 

>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From 'a' remove all of 'b'
>>> np.setdiff1d(a,b)
# Output (sorted unique values of 'a' not in 'b'):
>>> array([2, 3, 5])

How to reverse a numpy array in the most efficient way?

>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]

How to calculate percentiles when using numpy?

>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)

Output:  37.400000000000006

What Is The Difference Between Numpy And Scipy?

NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.

What Is The Preferred Way To Check For An Empty (zero Element) Array?

For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size

What Is The Difference Between Matrices And Arrays?

Matrices can only be two-dimensional, whereas arrays can have any number of dimensions.

How can you find the indices of an array where a condition is true?

Given an array arr, the condition arr > 3 returns a boolean array; np.nonzero (or np.where) then returns the indices where that condition is True, since False is interpreted as 0 in Python and NumPy.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
array([[ True, True, True], [ True, True, True], [False, False, False]])
>>> np.nonzero(arr > 3)
(array([0, 0, 0, 1, 1, 1]), array([0, 1, 2, 0, 1, 2]))

How to find the maximum and minimum value of a given flattened array?

>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)

Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis. 

>>> import numpy as np
>>> arr = np.arange(28).reshape((4, 7))
>>> res = np.ptp(arr, 1)

Find median of a numpy flattened array

>>> import numpy as np
>>> arr = np.arange(20).reshape((4, 5))
>>> res = np.median(arr)

Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis

>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)

Calculate covariance matrix between two numpy arrays

>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)

Compute  product-moment correlation coefficients of two given numpy arrays

>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)

Develop a numpy program to compute the histogram of nums against the bins

>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)

Get the powers of array values element-wise

>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)

Write a NumPy program to get true division of the element-wise array inputs

>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)


What is a series in pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.

What features make Pandas such a reliable option to store tabular data?

Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.

What is re-indexing in pandas?

Reindexing is used to conform a DataFrame to a new index with optional filling logic. It places NA/NaN in locations where values are not present in the previous index. It returns a new object unless the new index is equivalent to the current one and copy=False. It is used to change the index of the rows and columns of the DataFrame.

How will you create a series from dict in Pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. It can be created directly from a dict, whose keys become the index:

import pandas as pd
info = {'x': 0., 'y': 1., 'z': 2.}
a = pd.Series(info)

How can we create a copy of the series in Pandas?

Use the pandas.Series.copy method:
import pandas as pd
s = pd.Series([1, 2, 3])
s_copy = s.copy()  # deep copy by default; pass deep=False for a shallow copy


What is groupby in Pandas?

GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters, and makes the task of splitting the data quick and easy.

What is vectorization in Pandas?

Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.

Different types of Data Structures in Pandas

Pandas provide two data structures, which are supported by the pandas library, Series,
and DataFrames. Both of these data structures are built on top of the NumPy.

What Is Time Series In pandas

A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.

How to convert pandas dataframe to numpy array?

The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array and the copy ensures the
returned value is not a view on another array.

Write a Pandas program to get the first 5 rows of a given DataFrame

>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data , index=labels)
>>> df.iloc[:5]

Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data. 

>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])

Write a Python program to convert a Pandas Series to a Python list and check its type. 

>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())

Develop a Pandas program to add, subtract, multiple and divide two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2

Develop a Pandas program to compare the elements of the two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2

Develop a Pandas program to change the data type of given a column or a Series.

>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2

Write a Pandas program to convert Series of lists to one Series

>>> import pandas as pd
>>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)

Write a Pandas program to create a subset of a given series based on value and condition

>>> import pandas as pd
>>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s

Develop a Pandas code to alter the order of index in a given series

>>> import pandas as pd
>>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index = ['B', 'A', 'C', 'D', 'E'])

Write a Pandas code to get the items of a given series not present in another given series.

>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result

What is the difference between the two data series df[‘Name’] and df.loc[:, ‘Name’]?

First one is a view of the original dataframe and second one is a copy of the original dataframe.

Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.

>>> import pandas as pd
>>> import numpy as np
>>> np.random.RandomState(100)
>>> num_series = pd.Series(np.random.randint(1, 5, [15]))
>>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'

Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.

>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5==0)

How will you add a column to a pandas DataFrame?

# importing the pandas library
>>> import pandas as pd
>>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20, 40, 60], index=['a', 'b', 'c'])

How to iterate over a Pandas DataFrame?

You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.
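A small sketch, assuming pandas is available; the column names and values are made up for illustration:

```python
import pandas as pd

# iterrows() yields (index, row) pairs, where each row is a Series
df = pd.DataFrame({"name": ["Ann", "Bob"], "score": [90, 85]})
for index, row in df.iterrows():
    print(index, row["name"], row["score"])
```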


What type of language is python? Programming or scripting?

Python is capable of scripting, but in general sense, it is considered as a general-purpose
programming language.

Is python case sensitive?

Yes, python is a case-sensitive language.

What is a lambda function in python?

An anonymous function is known as a lambda function. This function can have any
number of parameters but contains just one expression.
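For example, a minimal sketch:

```python
# A lambda can take any number of arguments but holds one expression
add = lambda x, y: x + y
print(add(2, 3))  # 5
```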

What is the difference between range and xrange in python?

range and xrange are the exact same in terms of functionality. The only difference is that
range returns a Python list object and xrange returns an xrange object. (This applies to Python 2; in Python 3, range behaves like xrange and xrange no longer exists.)

What are docstrings in python?

Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.
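A minimal illustration (the function name and text are made up):

```python
def greet(name):
    """Return a greeting for the given name."""
    return "Hello, " + name

# Docstrings are kept at runtime and power help()
print(greet.__doc__)  # Return a greeting for the given name.
```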

Whenever Python exits, why isn’t all the memory deallocated?

Whenever Python exits, some objects, especially those in Python modules that have circular references to other objects, or objects referenced from the global namespaces, are not always de-allocated or freed. It is also impossible to de-allocate those portions of memory that are reserved by the C library. On exit, Python does try to de-allocate every other object using its own efficient clean-up mechanism.

What does this mean: *args, **kwargs? And why would we use it?

We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
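A quick sketch (the function is hypothetical):

```python
def summary(*args, **kwargs):
    # args arrives as a tuple of positional arguments,
    # kwargs as a dict of keyword arguments
    return len(args), sorted(kwargs)

print(summary(1, 2, 3, a=10, b=20))  # (3, ['a', 'b'])
```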

What is the difference between deep and shallow copy?

A shallow copy creates a new object but inserts references to the objects found in the original, so nested objects are shared between the copy and the original.
A deep copy creates a new object and recursively copies the nested objects as well, so the copy and the original do not share any objects.
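The difference can be sketched with the standard copy module:

```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)      # new outer list, shared inner lists
deep = copy.deepcopy(original)     # inner lists are copied too

original[0][0] = 99
print(shallow[0][0])  # 99 -- the shallow copy sees the change
print(deep[0][0])     # 1  -- the deep copy is unaffected
```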

Define encapsulation in Python?

Encapsulation means binding the code and the data together. A Python class is an
example of encapsulation.

Does python make use of access specifiers?

Python does not deprive access to an instance variable or function. Python lays down the concept of prefixing the name of the variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.

What are the generators in Python?

Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.

Write a Python script to Python to find palindrome of a sequence

a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")

How will you remove the duplicate elements from the given list?

The set is another type available in Python. It doesn't allow duplicates and provides some
good functions to perform set operations like union, difference etc.
>>> list(set(a))  # where 'a' is the given list; note that order is not preserved

Does Python allow arguments Pass by Value or Pass by Reference?

Neither the arguments are Pass by Value nor does Python supports Pass by reference.
Instead, they are Pass by assignment. The parameter which you pass is originally a reference to the object not the reference to a fixed memory location. But the reference is
passed by value. Additionally, some data types like strings and tuples are immutable whereas others are mutable.

What is slicing in Python?

Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.
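A few illustrative slices on a string:

```python
s = "interview"
print(s[0:5])   # 'inter'  (items 0 through 4)
print(s[-4:])   # 'view'   (last four items)
print(s[::2])   # 'itriw'  (every second item)
```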

Why is the “pass” keyword used in Python?

The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.

What are decorators in Python?

Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion
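A minimal sketch of a decorator (the names are made up):

```python
def shout(func):
    # wrapper adds behaviour without changing func itself
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello " + name

print(greet("dev"))  # HELLO DEV
```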

What is the key difference between lists and tuples in python?

The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.
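A quick demonstration of the mutability difference:

```python
lst = [1, 2, 3]
lst[0] = 99            # fine: lists are mutable

tup = (1, 2, 3)
try:
    tup[0] = 99        # tuples are immutable
except TypeError as err:
    print("cannot modify a tuple:", err)
```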

What is self in Python?

self is not actually a keyword in Python but a strong convention: it is the name given to the first parameter of an instance method and refers to the instance (object) on which the method is called. Unlike in Java, where the instance reference is implicit, in Python it must be listed explicitly as the first parameter. It helps in distinguishing the methods and attributes of a class from its local variables.
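A small sketch with a hypothetical class:

```python
class Counter:
    def __init__(self):
        self.count = 0     # attribute lives on the instance, via self

    def increment(self):
        self.count += 1    # self refers to whichever instance calls this

c = Counter()
c.increment()
c.increment()
print(c.count)  # 2
```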

What is PYTHONPATH in Python?

PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.

What is the difference between .py and .pyc files?

.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.

What is namespace in Python?

In Python, every name introduced has a place where it lives and can be hooked for. This is known as namespace. It is like a box where a variable name is mapped to the object placed. Whenever the variable is searched out, this box will be searched, to get the corresponding object.

What is pickling and unpickling?

Pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function, this process is called pickling. While the process of retrieving original Python objects from the stored string representation is called unpickling.
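A minimal round-trip sketch, using the in-memory dumps/loads variants of dump/load:

```python
import pickle

data = {"name": "devops", "score": 95}
blob = pickle.dumps(data)        # pickling: object -> bytes
restored = pickle.loads(blob)    # unpickling: bytes -> object
print(restored == data)  # True
```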

How is Python interpreted?

Python language is an interpreted language. The Python program runs directly from the source code. It converts the source code that is written by the programmer into an intermediate language, which is again translated into machine language that has to be executed.

Jupyter Notebook

What is the main use of a Jupyter notebook?

Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.

How do I increase the cell width of the Jupyter/ipython notebook in my browser?

>>> from IPython.core.display import display, HTML
>>> display(HTML(“<style>.container { width:100% !important; }</style>”))

How do I convert an IPython Notebook into a Python file via command line?

>>> jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb

How to measure execution time in a jupyter notebook?

%%time is an inbuilt magic command that reports the execution time of the cell it heads

How to run a jupyter notebook from the command line?

>>> jupyter nbconvert --to notebook --execute nb.ipynb

How to make inline plots larger in jupyter notebooks?

Use figure size.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')

How to display multiple images in a jupyter notebook?

>>> from IPython.display import Image, display
>>> for ima in images:
...     display(Image(filename=ima))

Why is the Jupyter notebook interactive code and data exploration friendly?

The ipywidgets package provides many common user interface controls for exploring code and data interactively.

What is the default formatting option in jupyter notebook?

Default formatting option is markdown

What are kernel wrappers in jupyter?

Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.

What are the advantages of custom magic commands?

Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.

Is the jupyter architecture language dependent?

No. It is language independent

Which tools allow jupyter notebooks to easily convert to pdf and html?

Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.

What is a major disadvantage of a Jupyter notebook?

It is very hard to run long asynchronous tasks. Less Secure.

In which domain is the jupyter notebook widely used?

It is mainly used for data analysis and machine learning related tasks.

What are alternatives to jupyter notebook?

PyCharm interact, VS Code Python Interactive etc.

Where can you make configuration changes to the jupyter notebook?

In the config file located at ~/.ipython/profile_default/

Which magic command is used to run python code from jupyter notebook?

%run can execute python code from .py files

How to pass variables across the notebooks in Jupyter?

The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to different notebook'
>>> %store data
# Stored ‘data’ (str)
# In new notebook
>>> %store -r data
>>> print(data)

Export the contents of a cell/Show the contents of an external script

Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.

What inbuilt tool we use for debugging python code in a jupyter notebook?

Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.

How to make high resolution plots in a jupyter notebook?

>>> %config InlineBackend.figure_format = 'retina'

How can one use latex in a jupyter notebook?

When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.

What is a jupyter lab?

It is a next generation user interface for conventional jupyter notebooks. Users can drag and drop cells, arrange code workspace and live previews. It’s still in the early stage of development.

What is the biggest limitation for a Jupyter notebook?

Code versioning, management and debugging is not scalable in current jupyter notebook

Cloud Computing


Which are the different layers that define cloud architecture?

Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus

Explain Cloud Service Models?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (DaaS)

What are Hybrid clouds?

Hybrid clouds are made up of both public clouds and private clouds. It is preferred over both the clouds because it applies the most robust approach to implement cloud architecture.
The hybrid cloud has the features and performance of both private and public clouds. It has an important feature whereby the cloud can be created by one organization and the control of it can be given to some other organization.

Explain Platform as a Service (Paas)?

It is also a layer in cloud architecture. Platform as a Service is responsible to provide complete virtualization of the infrastructure layer, make it look like a single server and invisible for the outside world.

What is the difference in cloud computing and Mobile Cloud computing?

Mobile cloud computing uses the same concept as cloud computing; the difference is that the cloud is accessed from a mobile device. Most tasks can be performed from the mobile device, with the applications running on a remote cloud server that gives the user the ability to access and manage storage.

What are the security aspects provided with the cloud?

There are 3 types of Cloud Computing Security:
● Identity Management: It authorizes the application services.
● Access Control: The user needs permission so that they can control the access of
another user who is entering into the cloud environment.
● Authentication and Authorization: Allows only authenticated and authorized users
to access the data and applications

What are system integrators in cloud computing?

System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly.
A person or a company which specializes in system integration is called a system integrator.

What is the usage of utility computing?

Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges them for specific usage rather than a flat rate
Utility computing is a plug-in managed by an organization which decides what type of services has to be deployed from the cloud. It facilitates users to pay only for what they use.

What are some large cloud providers and databases?

Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL

Explain the difference between cloud and traditional data centers.

In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software issues. So, not only is the initial cost higher, but the maintenance cost is also a problem.
The cloud, on the other hand, can be scaled when there is an increase in demand. Most of the expenditure goes to the maintenance of the data centers by the provider, so these issues are not faced in cloud computing.

What is hypervisor in Cloud Computing?

It is a virtual machine monitor (VMM) that can logically manage resources for virtual machines: it allocates, partitions, isolates or changes them according to the virtualization hypervisor program in use.
A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.

Define what MultiCloud is?

Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.

What is a multi-cloud strategy?

The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.

What is meant by Edge Computing, and how is it related to the cloud?

Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.

What are the disadvantages of the SaaS cloud computing layer?

1) Security
Data is stored in the cloud, so security may be an issue for some users; the cloud is not necessarily more secure than an in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility that there may be greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications whose demand response time is in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the other SaaS product.

What is IaaS in Cloud Computing?

IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this model, providers offer IT infrastructure such as servers, processing, storage, virtual machines and other resources. Customers can access these resources easily over the internet using an on-demand, pay-as-you-go model.

Explain what is the use of “EUCALYPTUS” in cloud computing?

EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform. With the help of EUCALYPTUS, public, private, and hybrid clouds can be built. It can produce its own data centers and make its functionality available to many other organizations.
When you add a software stack, such as an operating system and applications, to the service, the model shifts to the Software as a Service (SaaS) model. Microsoft's Windows Azure Platform, for example, is best represented as presently using a SaaS model.

Name the most refined and restrictive service model.

The most refined and restrictive service model is PaaS. Once the service requires the consumer to use an entire hardware/software/application stack, it is using the most refined and restrictive service model.

Name the kinds of virtualization that are also characteristics of cloud computing.

Storage, Application, and CPU virtualization. To support these characteristics, resources should be highly configurable and flexible.

What Are Main Features Of Cloud Services?

Some important features of the cloud service are given as follows:
• Accessing and managing the commercial software.
• Centralizing the activities of management of software in the Web environment.
• Developing applications that are capable of managing several clients.
• Centralizing the updating feature of software, which eliminates the need to download upgrades.

What Are The Advantages Of Cloud Services?

Some of the advantages of cloud service are given as follows:
• Helps in the utilization of investment in the corporate sector and is therefore cost saving.
• Helps in developing scalable and robust applications. Previously, scaling took months, but now it takes far less time.
• Helps in saving time in terms of deployment and maintenance.

Mention The Basic Components Of A Server Computer In Cloud Computing?

The components used in less expensive client computers match the hardware components of a server computer in cloud computing, although server computers are usually built from higher-grade components than client computers. Basic components include the motherboard, memory, processor, network connection, hard drives, video, power supply, etc.

What are the advantages of auto-scaling?

Following are the advantages of auto-scaling:
● Offers fault tolerance
● Better availability
● Better cost management
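The scale-out/scale-in decision behind these benefits can be sketched in a few lines. This is an illustrative sketch only (the function name and thresholds are made up); real services such as AWS Auto Scaling or Azure virtual machine scale sets implement this logic for you:

```python
# Minimal sketch of threshold-based auto-scaling logic (illustrative only).
def desired_capacity(current, cpu_percent,
                     scale_out_at=70.0, scale_in_at=30.0,
                     minimum=2, maximum=10):
    """Return the new instance count for a given average CPU load."""
    if cpu_percent > scale_out_at:
        current += 1      # scale out to restore headroom (availability)
    elif cpu_percent < scale_in_at:
        current -= 1      # scale in to save cost
    return max(minimum, min(maximum, current))  # clamp for fault tolerance
```

Keeping the count clamped between a minimum and maximum is what delivers the fault tolerance and cost management listed above: the group never shrinks below a safe floor and never grows beyond a budgeted ceiling.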


Azure Cloud

Azure Administrator AZ104 Certification Exam Prep

Which Services Are Provided By Window Azure Operating System?

Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management

Which service in Azure is used to manage resources in Azure?

Azure Resource Manager is used to “manage” infrastructures that involve a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON script.
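As a rough illustration of what such a JSON script looks like, the following builds a minimal ARM-style deployment template in Python. The resource name, location, and API version are placeholders, not a real deployment:

```python
import json

# Skeleton of an Azure Resource Manager (ARM) deployment template.
# The storage account name, location, and apiVersion are illustrative only.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "examplestorageacct",
            "location": "eastus",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

print(json.dumps(template, indent=2))
```

Because the whole deployment is described declaratively in one document, Resource Manager can create, update, or delete everything in the `resources` list as a single group.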

Which web applications can be deployed with Azure?

Microsoft has also released SDKs for both Java and Ruby to allow applications written in those languages to place calls to the Azure Service Platform API and the AppFabric Service.

What are Roles in Azure and why do we use them?

Roles are nothing but servers, in layman's terms. These servers are managed, load-balanced Platform as a Service virtual machines that work together to achieve a common goal.
There are 3 types of roles in Microsoft Azure:
● Web Role
● Worker Role
● VM Role
Let’s discuss each of these roles in detail:
Web Role – A web role is basically used to deploy a website, using languages supported by the IIS platform such as PHP and .NET. It is configured and customized to run web applications.
Worker Role – A worker role is more of a helper to the web role; it is used to execute background processes, unlike the web role, which is used to deploy the website.
VM Role – The VM role is used by a user to schedule tasks and other Windows services.
This role can be used to customize the machines on which the web and worker roles are running.

What is Azure as PaaS?

PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.

What are Break-fix issues in Microsoft Azure?

In Microsoft Azure, all technical problems are called break-fix issues. This term is used when “work is involved” in supporting a technology when it fails in the normal course of its function.

Explain Diagnostics in Windows Azure

Windows Azure Diagnostics offers the facility to store diagnostic data. In Azure, some diagnostic data is stored in tables, while some is stored in blobs. The diagnostic monitor runs in Windows Azure as well as in the compute emulator to collect data for a role instance.

State the difference between verbose and minimal monitoring.

Verbose monitoring collects metrics based on performance. It allows a close analysis of data fed during the process of application.
On the other hand, minimal monitoring is the default configuration method. It makes use of performance counters gathered from the operating system of the host.

What is the main difference between the repository and the powerhouse server?

The main difference between them is that repository servers maintain the integrity, consistency, and uniformity of the data, while the powerhouse server governs the integration of different aspects of the database repository.

Explain command task in Microsoft Azure

A command task is an operational window that sets off the flow of either a single command or multiple common tasks while the system is running.

What is the difference between Azure Service Bus Queues and Storage Queues?

Two types of queue mechanisms are supported by Azure: Storage queues and Service Bus queues.
Storage queues: These are part of the Azure storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface, providing persistent and reliable messaging within and between services.
Service Bus queues: These are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.

Explain Azure Service Fabric.

Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment and management of highly scalable and customizable applications.
The applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.

Define the Azure Redis Cache.

Azure Redis Cache is an open-source and in-memory Redis cache that helps web applications to fetch data from a backend data source into cache and server web pages from the cache to enhance the application performance. It provides a powerful and secure way to cache the application’s data in the Azure cloud.

How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?

Two. If we do so, the role will have external connectivity at least 99.95% of the time.
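The following back-of-the-envelope calculation (using a made-up 99% per-instance availability and an independence assumption) illustrates why running two instances raises availability; the 99.95% figure itself is a contractual SLA, not derived this way:

```python
# If a single instance were available 99% of the time and failures were
# independent, two instances in parallel fail only when both fail:
# combined availability = 1 - (1 - a)^2.
single = 0.99
combined = 1 - (1 - single) ** 2
print(round(combined, 4))  # two instances are far more available than one
```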

What are the options to manage session state in Windows Azure?

● Windows Azure Caching
● SQL Azure
● Azure Table

What is cspack?

It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.

What is csrun?

It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.

How to design applications to handle connection failure in Windows Azure?

The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
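These three strategies can be sketched as follows. This is a minimal illustration (the function name is hypothetical); real retry policies such as those in the Transient Fault Handling Application Block also add maximum-delay caps and retry-count limits:

```python
import random

def retry_delays(strategy, retries=4, base=1.0):
    """Return the wait times (in seconds) between retry attempts."""
    if strategy == "fixed":
        return [base] * retries                          # fixed interval
    if strategy == "incremental":
        return [base * (i + 1) for i in range(retries)]  # grows by a fixed amount
    if strategy == "exponential":
        # interval doubles each attempt, with some random variation (jitter)
        return [base * 2 ** i + random.uniform(0, 0.1) for i in range(retries)]
    raise ValueError(f"unknown strategy: {strategy}")

print(retry_delays("fixed"))        # [1.0, 1.0, 1.0, 1.0]
print(retry_delays("incremental"))  # [1.0, 2.0, 3.0, 4.0]
```

Exponential back-off with jitter is usually preferred for cloud services because it spreads out retries from many clients instead of having them all hammer a recovering service at the same moment.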

What is Windows Azure Diagnostics?

Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.

What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?

Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports dead-letter queuing as well as publish/subscribe, Web service remoting, and integration patterns.

What is the use of Azure Active Directory?

Azure Active Directory is an identity and access management system. It is very similar to on-premises Active Directory. It allows you to grant your employees access to specific products and services within the network.

Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?

This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.

What are virtual machine scale sets in Azure?

Virtual machine scale sets are an Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required. This makes it easier to build large-scale services that target big compute, big data, and containerized workloads.

Are data disks supported within scale sets?

Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include:
● Azure files (SMB shared drives)
● OS drive
● Temp drive (local, not backed by Azure Storage)
● Azure data service (for example, Azure tables, Azure blobs)
● External data service (for example, remote database)

What is the difference between the Windows Azure Platform and Windows Azure?

The former is Microsoft’s PaaS offering including Windows Azure, SQL Azure, and AppFabric; while the latter is part of the offering and Microsoft’s cloud OS.

What are the three main components of the Windows Azure Platform?

Compute, Storage and AppFabric.

Can you move a resource from one group to another?

Yes, you can. A resource can be moved among resource groups.

How many resource groups a subscription can have?

A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.

Explain the fault domain.

A fault domain is a logical working domain in which the underlying hardware shares a common power source and network switch. This means that when VMs are created, Azure distributes the VMs across fault domains, which limits the potential impact of hardware failure, power interruption, or network outages.

Differentiate between the repository and the powerhouse server?

Repository servers maintain the integrity, consistency, and uniformity of the data, whereas the powerhouse server governs the integration of different aspects of the database repository.

Azure Fundamentals AZ900 Certification Exam Prep

AWS Cloud

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Explain what S3 is?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is “pay as you go.”

What is AMI?

AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.

Mention what the relationship between an instance and AMI is?

From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.

How many buckets can you create in AWS by default?

By default, you can create up to 100 buckets in each of your AWS accounts.

Explain can you vertically scale an Amazon instance? How?

Yes, you can vertically scale an Amazon instance. To do so:
● Spin up a new, larger instance than the one you are currently running
● Pause that instance and detach the root EBS volume from the server and discard it
● Then stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● And start it again

Explain what T2 instances are.

T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.

In VPC with private and public subnets, database servers should ideally be launched into which subnet?

With private and public subnets in a VPC, database servers should ideally be launched into private subnets.

Mention what the security best practices for Amazon EC2 are?

For secure Amazon EC2 best practices, follow the following steps
● Use AWS identity and access management to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up permissions that you require
● Disable password-based logins, for example, for instances launched from your AMI

Is the property of broadcast or multicast supported by Amazon VPC?

No, currently Amazon VPC does not provide support for broadcast or multicast.

How many Elastic IPs does AWS allow you to create?

5 VPC Elastic IP addresses are allowed for each AWS account.

Explain default storage class in S3

The default storage class is S3 Standard, for frequently accessed data.

What are the Roles in AWS?

Roles are used to provide permissions to entities that you can trust within your AWS account.
Roles are very similar to users. However, with roles, you do not need to create a username and password to work with the resources.

What are the edge locations?

An edge location is the area where content is cached. So, when a user tries to access any content, the content is automatically searched for in the edge location.

Explain snowball?

Snowball is a data transport option. It uses secure appliances to move large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another. It helps you to reduce networking costs.

What is a redshift?

Redshift is a big data warehouse product. It is a fast, powerful, fully managed data warehouse service in the cloud.

What is meant by subnet?

A large block of IP addresses divided into smaller chunks is known as a subnet.
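Python's standard `ipaddress` module can illustrate the idea (the 10.0.0.0/16 range is just an example):

```python
import ipaddress

# Carve one large /16 block into smaller /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))              # number of /24 subnets in a /16
print(subnets[0])                # first subnet: 10.0.0.0/24
print(subnets[0].num_addresses)  # addresses available in each /24
```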

Can you establish a Peering connection to a VPC in a different region?

Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.

What is SQS?

Simple Queue Service, also known as SQS, is a distributed queuing service that acts as a mediator between two controllers.

How many subnets can you have per VPC?

You can have 200 subnets per VPC.

What is Amazon EMR?

EMR (Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks such as Apache Hadoop and Apache Spark on Amazon Web Services to analyze large amounts of data. You can prepare data for analytics goals and marketing intelligence workloads using Apache Hive and other relevant open-source tools.

What is boot time taken for the instance stored backed AMI?

The boot time for an Amazon instance store-backed AMI is less than 5 minutes.

Do you need an internet gateway to use peering connections?

No, an internet gateway is not needed to use VPC (virtual private cloud) peering connections; peering traffic stays within the AWS network.

How to connect an EBS volume to multiple instances?

We cannot attach an EBS volume to multiple instances. However, you can attach multiple EBS volumes to a single instance.

What are the different types of Load Balancer in AWS services?

Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer

In which situation you will select provisioned IOPS over standard RDS storage?

You should select provisioned IOPS storage over standard RDS storage if you have I/O-intensive workloads, such as batch-oriented workloads that need consistent performance.

What are the important features of Amazon cloud search?

Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Entire text search
● Autocomplete suggestions

What is AWS CDK?

AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
• Create and provision AWS infrastructure deployments predictably and repeatedly.
• Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling.
• Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
• Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.
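A minimal CloudFormation template (which the CDK synthesizes for you) might look like the sketch below; the bucket name and description are illustrative placeholders, not a real deployment:

```yaml
# Illustrative CloudFormation template: deploying this stack creates the
# bucket, and deleting the stack deletes it, as a single unit.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example stack containing a single S3 bucket
Resources:
  ExampleBucket:                 # logical ID used within the stack
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-devops-prep-bucket   # hypothetical bucket name
```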

What are best practices for controlling access to AWS CodeCommit?

– Create your own policy
– Provide temporary access credentials to access your repo
* Typically done via a separate AWS account for IAM and separate accounts for dev/staging/prod
* Federated access
* Multi-factor authentication

What is AWS CodeBuild?

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.

How does AWS CodeBuild work?

1- Provide AWS CodeBuild with a build project. A build project file contains information about where to get the source code, the build environment, and how to build the code. The most important component is the BuildSpec file.
2- AWS CodeBuild creates the build environment. A build environment is a combination of OS, programming language runtime, and other tools needed to build.
3- AWS CodeBuild downloads the source code into the build environment and uses the BuildSpec file to run a build. This code can be from any source provider; for example, GitHub repository, Amazon S3 input bucket, Bitbucket repository, or AWS CodeCommit repository.
4- Build artifacts produced are uploaded into an Amazon S3 bucket.
5- The build environment sends a notification about the build status.
6- While the build is running, the build environment sends information to Amazon CloudWatch Logs.
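The BuildSpec file mentioned in step 1 is a YAML file at the root of the repository. A minimal, illustrative buildspec.yml (the runtime, commands, and artifact paths are example choices, not prescribed values):

```yaml
# Illustrative buildspec.yml for AWS CodeBuild.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.11          # example runtime
  build:
    commands:
      - pip install -r requirements.txt
      - pytest              # run the test suite as part of the build
artifacts:
  files:
    - '**/*'                # upload everything as the build artifact
```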

What is AWS CodeDeploy?

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.

With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.

You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.

You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.

AWS CodeDeploy deploys updates in revisions. So if there is an issue during deployment, you can easily roll back and deploy a previous revision.
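The AppSpec file mentioned above is a YAML file named appspec.yml at the root of the revision. A minimal, illustrative sketch for an EC2/on-premises deployment (the paths and script names are hypothetical):

```yaml
# Illustrative appspec.yml for an EC2/on-premises CodeDeploy deployment.
version: 0.0
os: linux
files:
  - source: /app                    # files in the revision bundle
    destination: /var/www/app       # where CodeDeploy copies them
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
```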

What is AWS CodeCommit?

AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.

What is AWS Opswork?

AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.

Chef and Puppet enable you to use code to automate your configurations.

AWS OpsWorks for Puppet Enterprise AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet, for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up your server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure and gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.

AWS OpsWorks for Chef Automate Offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes.

AWS OpsWorks Stacks: With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages and languages or frameworks, and configuring software

AWS Developer Associate DVA-C01 Exam Prep

Google Cloud Platform

GCP Associate Cloud Engineer Exam Prep

What are the main advantages of using Google Cloud Platform?

Google Cloud Platform is a medium that provides its users access to the best cloud services and features. It is gaining popularity among cloud professionals as well as users for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others –
● GCP offers much better pricing deals as compared to the other cloud service providers
● Google Cloud servers allow you to work from anywhere and have access to your information and data
● Considering hosting cloud services, GCP has overall increased performance and service
● Google Cloud is very fast in providing updates about servers and security in a better and more efficient manner
● The security level of Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with various security measures
If you are going for a Google Cloud interview, you should prepare yourself with enough knowledge of the Google Cloud Platform.

Why should you opt for Google Cloud Hosting?

The reason for opting for Google Cloud Hosting is the advantages it offers. Here are the advantages of choosing Google Cloud Hosting:
● Availability of better pricing plans
● Benefits of live migration of virtual machines
● Enhanced performance and execution
● Commitment to constant development and expansion
● The private network provides efficiency and maximum uptime
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability

What are the libraries and tools for cloud storage on GCP?

At the core level, an XML API and a JSON API are available for cloud storage on the Google Cloud Platform. Along with these, Google provides the following options to interact with cloud storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage Client Libraries, which provide programming support for various languages including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface for cloud storage

There are also many third-party libraries and tools, such as the Boto library.

What do you know about Google Compute Engine?

Google Compute Engine is a basic component of the Google Cloud Platform.
Google Compute Engine is an IaaS product that offers self-managed and flexible virtual machines hosted on Google's infrastructure. It includes Windows and Linux based virtual machines that run on KVM, with local and durable storage options.
It also includes a REST-based API for control and configuration purposes. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery in order to extend its computational ability and thus create more sophisticated and complex applications.

How are the Google Compute Engine and Google App Engine related?

Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is the IaaS product, whereas Google App Engine is the PaaS product of Google.
Google App Engine is generally used to run web-based applications, mobile backends, and line-of-business applications. If you want to keep the underlying infrastructure more under your control, then Compute Engine is a perfect choice. For instance, you can use Compute Engine for the implementation of customized business logic, or in case you need to run your own storage system.

How does the pricing model work in GCP cloud?

While working on the Google Cloud Platform, the user is charged on the basis of compute instances, network use, and storage by Google Compute Engine. Google Cloud charges virtual machines on a per-second basis, with a minimum of 1 minute. The cost of storage is then charged based on the amount of data that you store.
The cost of the network is calculated based on the amount of data transferred between virtual machine instances communicating with each other over the network.
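As a worked example of per-second billing with a one-minute minimum (the $0.10/hour rate is made up for illustration, not a real GCP price):

```python
# Per-second billing with a one-minute minimum charge.
hourly_rate = 0.10                # hypothetical $/hour for a VM
per_second = hourly_rate / 3600   # rate per second

def vm_cost(seconds_used):
    billable = max(seconds_used, 60)   # anything under a minute bills as 60 s
    return billable * per_second

print(round(vm_cost(30), 6))    # a 30-second run still bills one full minute
print(round(vm_cost(5400), 4))  # 1.5 hours at the hypothetical hourly rate
```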

What are the different methods for the authentication of Google Compute Engine API?

This is one of the popular Google Cloud architect interview questions which can be answered as follows. There are different methods for the authentication of Google Compute Engine API:
– Using OAuth 2.0
– Through client library
– Directly with an access token

List some Database services by GCP.

There are many Google Cloud database services that help enterprises manage their data.
● Bare Metal Solution is a relational database offering that allows you to migrate or lift-and-shift specialized workloads to Google Cloud.
● Cloud SQL is a fully managed, reliable, and integrated relational database service for MySQL, MS SQL Server, and PostgreSQL (known as Postgres). It reduces maintenance costs and ensures business continuity.
● Cloud Spanner
● Cloud Bigtable
● Firestore
● Firebase Realtime Database
● Memorystore
● Google Cloud Partner Services
● For more database products you can refer Google Cloud Databases
● For more data base solutions you can refer Google cloud Database solutions

What are the different Network services by GCP?

Google Cloud provides many networking services and technologies that make it easy to scale and manage your network.
● Hybrid connectivity helps to connect your infrastructure to Google Cloud
● Virtual Private Cloud (VPC) manage networking for your resources
● Cloud DNS is a highly available global domain naming system (DNS) network.
● Service Directory provides a service-centric network solution.
● Cloud Load Balancing
● Cloud CDN
● Cloud Armor
● Cloud NAT
● Network Telemetry
● VPC Service Controls
● Network Intelligence Center
● Network Service Tiers
● For more networking products, see Google Cloud Networking

List some Data Analytics service by GCP.

Google Cloud offers various data analytics services.
● BigQuery is a highly scalable, serverless, and cost-effective multi-cloud data warehouse designed for business agility.
● Looker
● Dataproc is a service for running Apache Spark and Apache Hadoop clusters. It makes open-source data and analytics processing easy, fast, and more secure in the cloud.
● Dataflow
● Pub/Sub
● Cloud Data Fusion
● Data Catalog
● Cloud Composer
● Google Data Studio
● Dataprep
● Cloud Life Sciences enables the life sciences community to manage, process, and transform biomedical data at scale.
● Google Marketing Platform combines your advertising and analytics to help you get better marketing results, deeper insights, and quality customer connections. It is not an official Google Cloud product and is covered by separate terms of service.
● For more Google Cloud analytics services, visit Data Analytics

Explain Google BigQuery in Google Cloud Platform

Google BigQuery serves as a replacement for a traditional data warehouse, eliminating the need to set up and maintain your own hardware. BigQuery organizes table data into units called datasets; each table belongs to a dataset, and each dataset belongs to a project.
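Because every table belongs to a dataset and every dataset belongs to a project, BigQuery tables are referenced as project.dataset.table. A sketch with hypothetical names:

```sql
-- Tables are addressed as `project.dataset.table`
SELECT name, score
FROM `my-project.my_dataset.my_table`
LIMIT 10;
```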

Explain Auto-scaling in Google cloud computing

Auto-scaling lets Google Cloud automatically provision and start new instances without human intervention.
Auto-scaling is triggered depending on various metrics and load.
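The core of any autoscaler is a feedback rule: compare a measured metric to its target and resize the group proportionally. A simplified, self-contained sketch of that decision (not actual Google Cloud autoscaler code; the target and limits here are illustrative):

```python
import math

def autoscale(current_vms: int, cpu_utilization: float,
              target: float = 0.5, min_vms: int = 1, max_vms: int = 10) -> int:
    """Return a new VM count that moves average CPU toward the target."""
    desired = math.ceil(current_vms * cpu_utilization / target)
    return max(min_vms, min(max_vms, desired))

print(autoscale(4, 0.90))  # 8: utilization above target, scale out
print(autoscale(4, 0.30))  # 3: utilization below target, scale in
```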

Describe Hypervisor in Google Cloud Platform

A hypervisor, also called a Virtual Machine Monitor (VMM), is computer software or hardware that creates and runs virtual machines (each virtual machine is called a guest machine). The hypervisor itself runs on a host machine.

Define VPC in the Google cloud platform

A VPC in the Google Cloud Platform provides connectivity from your premises to any region without traversing the public internet. VPC connectivity serves Compute Engine virtual machine instances, Kubernetes Engine clusters, App Engine flexible environment instances, and other resources, depending on the project. Multiple VPCs can also be used across numerous projects.

GCP Associate Cloud Engineer Exam Prep

What are some tips for reducing latency in distributed systems programming?

Don’t do a connection setup per RPC.

Cache things wherever possible.
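As a minimal illustration of in-process caching, Python's standard library provides a memoizing decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_lookup(key: str) -> str:
    # Stands in for a slow RPC or database read.
    return key.upper()

expensive_lookup("user-42")                # computed on the first call
expensive_lookup("user-42")                # served from the in-process cache
print(expensive_lookup.cache_info().hits)  # 1
```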

Write asynchronous code wherever possible.
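Asynchronous code lets independent I/O operations overlap instead of running back to back. A minimal asyncio sketch, where the sleeps stand in for RPCs:

```python
import asyncio
import time

async def fake_rpc(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for waiting on the network
    return name

async def main() -> None:
    start = time.monotonic()
    # Three 0.1-second "RPCs" issued concurrently finish in ~0.1 s, not 0.3 s.
    results = await asyncio.gather(
        fake_rpc("a", 0.1), fake_rpc("b", 0.1), fake_rpc("c", 0.1))
    print(results, f"elapsed ~{time.monotonic() - start:.1f}s")

asyncio.run(main())
```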

Exploit eventual consistency wherever possible. In other words: coordination is expensive, so don't do it unless you have to.

Route your requests sensibly.

Locate processing wherever it will result in the best latency. That might mean you need more resources.

Use LIFO queues; they have better tail statistics than FIFO. Queue before load balancing, not after: that way a small fraction of slow requests is much less likely to stall all the processors. Source: Andrew McGregor
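The LIFO advantage shows up under overload: most requests are served while still fresh, and only a few old ones languish, whereas FIFO makes every request wait longer and longer. A small deterministic simulation (illustrative parameters, not production code):

```python
def simulate(discipline: str, n: int = 10, service: int = 2) -> list:
    """One server, one request arriving per second, each taking `service`
    seconds (overloaded). Returns the sorted time-in-system per request."""
    queue, sojourn, t, served = [], {}, 0, 0
    arrivals = list(range(n))              # request i arrives at time i
    while served < n:
        while arrivals and arrivals[0] <= t:
            queue.append(arrivals.pop(0))  # enqueue everything arrived so far
        if not queue:                      # server idle: advance the clock
            t += 1
            continue
        req = queue.pop() if discipline == "LIFO" else queue.pop(0)
        t += service
        sojourn[req] = t - req             # completion time minus arrival
        served += 1
    return sorted(sojourn.values())

fifo, lifo = simulate("FIFO"), simulate("LIFO")
print("FIFO median:", fifo[len(fifo) // 2])  # 7: everyone waits longer and longer
print("LIFO median:", lifo[len(lifo) // 2])  # 3: most requests served while fresh
```

The trade-off: LIFO's worst-case stragglers are worse, but when stalled requests would time out anyway, serving the fresh ones first gives better tail behavior.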

What operating system do most servers use in 2022?

Of the 1,500 *NIX servers under my control (at a very large Fortune 500 company), 90% are Linux. We have a small number of HP-UX and AIX systems left over running legacy applications, but they are being phased out. Most of the applications we used to run on HP-UX and AIX (SAP, Oracle, you name it) now run on Linux. And it's not just my company; it's everywhere.

In 2022, the most widely used server operating system is Linux. Source: Bill Thompson

How do you load multiple files in parallel from an Amazon S3 bucket?

By specifying a common prefix of the file names in the COPY command, or by specifying the list of files to load in a manifest file.
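The manifest is a JSON file stored in Amazon S3 that explicitly lists the objects to load. A minimal sketch with hypothetical bucket and file names:

```json
{
  "entries": [
    {"url": "s3://my-bucket/2024/part-0000.gz", "mandatory": true},
    {"url": "s3://my-bucket/2024/part-0001.gz", "mandatory": true}
  ]
}
```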

How can you manage the amount of provisioned throughput that is used when copying from an Amazon DynamoDB table?

Set the READRATIO parameter in the COPY command to a percentage of unused throughput.
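In an Amazon Redshift COPY from DynamoDB, READRATIO caps how much of the table's read throughput the load may consume. A sketch with hypothetical table and role names:

```sql
COPY favorite_movies
FROM 'dynamodb://my-movies-table'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
READRATIO 50;
```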

What must you do to use client-side encryption with your own encryption keys when using COPY to load data files that were uploaded to Amazon S3?

You must add the master key value to the credentials string with the ENCRYPTED parameter in the COPY command.
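The master symmetric key goes into the credentials string alongside the ENCRYPTED keyword. A sketch with placeholder values:

```sql
COPY my_table
FROM 's3://my-bucket/encrypted/'
CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/MyRedshiftRole;master_symmetric_key=<base64-encoded-root-key>'
ENCRYPTED;
```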
