DevOps Interview Questions, Answers and Scripts

Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google, and Microsoft).

Credit: Steve Nouri – follow Steve Nouri for more AI and data science posts.

Deployment

What is a Canary Deployment?

A canary deployment, or canary release, lets you roll out your features to only a subset of users as an initial test, to make sure nothing else in your system breaks.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage – say, 10% – of your user base to the new version while maintaining 90% of users on the old version. If that 10% reports no errors, you can roll it out to gradually more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will have never even seen the problem.
Canary deployment benefits include zero downtime, easy rollout and quick rollback – plus the added safety from the gradual rollout process. It also has some drawbacks – the expense of maintaining multiple server instances, the difficult clone-or-don’t-clone database decision.

Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.
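For illustration, here is a minimal, hedged sketch of a canary on Kubernetes that uses the replica ratio for the traffic split (the myapp deployments, manifest file names, and the track=canary label are hypothetical; real setups often use a service mesh or weighted ingress instead):

# assumes both Deployment manifests label their pods app=myapp so one Service selects them
kubectl apply -f myapp-stable.yaml      # e.g. image myapp:v1, replicas: 9
kubectl apply -f myapp-canary.yaml      # e.g. image myapp:v2, replicas: 1  (~10% of traffic)

# watch the canary pods for errors before widening the rollout
kubectl logs -l app=myapp,track=canary --tail=50

# healthy: shift more traffic by changing the replica ratio
kubectl scale deployment myapp-canary --replicas=5
kubectl scale deployment myapp-stable --replicas=5

# problems: roll back instantly by removing the canary pods
kubectl scale deployment myapp-canary --replicas=0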


What is a Blue Green Deployment?

Reference: Blue Green Deployment


Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your software, deployment and the final stage of testing take place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router so that all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
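A minimal, hedged sketch of the router switch on Kubernetes (the service and deployment names and version labels are hypothetical; the same idea applies to a load balancer or DNS switch):

# assumes two Deployments, myapp-blue and myapp-green, whose pods carry version=blue / version=green labels
kubectl get svc myapp -o jsonpath='{.spec.selector}'   # see which colour is currently live

# switch all traffic to Green by re-pointing the Service selector
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# if something unexpected happens on Green, roll back by switching back to Blue
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'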

How do you do a software release?

The typical steps are listed below; a command-line sketch of the branching and tagging steps follows the list.
• Create a check list
• Create a release branch
• Bump the version
• Merge release branch to master & tag it.
• Use a pull request to merge the release branch
• Deploy master to Prod Environment
• Merge back into develop & delete release branch
• Generate the change log
• Communicate with stakeholders
• Groom the issue tracker
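A minimal command-line sketch of the branching and tagging steps above (version numbers and branch names are made up; the pull request and deployment steps happen in your hosting/CI tooling):

git checkout -b release/1.4.0 develop        # create the release branch
# bump the version in your project file, then:
git commit -am "Bump version to 1.4.0"
git checkout master
git merge --no-ff release/1.4.0              # merge the release branch into master
git tag -a v1.4.0 -m "Release 1.4.0"         # tag it
git push origin master --tags                # CI deploys master to the prod environment
git checkout develop
git merge --no-ff release/1.4.0              # merge back into develop
git branch -d release/1.4.0                  # delete the release branch
git push origin develop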

How to automate the whole build and release process?

• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD
manufacturing.
This whole process can be automated with Jenkins by creating jobs for each of these stages.

Have you ever participated in prod deployments? If yes, what is the procedure?

• Preparation & Planning : What kind of system/technology was supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes were going to talk to each other in a foolproof manner
• Production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, & installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system

Devops Tools and Concepts

What is DevOps? Why do we need DevOps? Mention the key aspects or principles behind DevOps.

By the name DevOps, it’s very clear that it is a collaboration of Development and Operations. But one should know that DevOps is not a tool, a piece of software, or a framework; DevOps is a combination of tools that helps automate the whole infrastructure.
DevOps is basically an implementation of Agile methodology on the Development side as well as the Operations side.

We need DevOps to deliver more, faster, and better applications to meet the ever-growing demands of users. DevOps helps deployments happen much faster compared to any traditional approach.


The key aspects or principles behind DevOps are:

  • Infrastructure as a Code
  • Continuous Integration
  • Continuous Deployment
  • Automation
  • Continuous Monitoring
  • Security

Popular tools for DevOps are:

"Become a Canada Expert: Ace the Citizenship Test and Impress Everyone with Your Knowledge of Canadian History, Geography, Government, Culture, People, Languages, Travel, Wildlife, Hockey, Tourism, Sceneries, Arts, and Data Visualization. Get the Top 1000 Canada Quiz Now!"


  • Git
  • AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
  • Jenkins
  • Ansible
  • Puppet
  • Nagios
  • Docker
  • ELK (Elasticsearch, Logstash, Kibana)

Can we consider DevOps as Agile methodology?

Of course we can! The only difference between Agile methodology and DevOps is that Agile is implemented only for the development section, while DevOps implements agility on both the development and the operations sections.

What are some of the most popular DevOps tools?
Selenium
Puppet
Chef
Git
Jenkins
Ansible
Docker

What is the role of HTTP REST APIs in DevOps?

DevOps is centered around automating your infrastructure and moving changes through a pipeline of stages: a typical CI/CD pipeline has stages such as build, test, sanity test, UAT, and deployment to the production environment. Each stage uses different tools and a different technology stack, so there needs to be a way to integrate the various tools into a complete toolchain. That is where HTTP APIs come in: each tool communicates with the others over its API, and users can also use SDKs (for example, boto for Python to call the AWS APIs) to automate work based on events. These days pipelines are mostly event-driven rather than batch processing.

What is Scrum?

Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration is typically two weeks long. Scrum consists of three roles: product owner, scrum master, and team.

What are microservices, and how do they enable efficient DevOps practices?

In conventional architecture, each application is a monolith: the whole thing is developed by one group of developers and deployed as a single application on many machines, exposed to the outside world through load balancers. Microservices means breaking your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. By splitting the application up, developers can also be organized into small groups, and each piece of the application can follow different guidelines for an efficient development process; with agile development, each service communicates with the others using REST APIs (or message queues).
So the build and release of a faulty version of one service does not affect the whole architecture; instead, only some functionality is lost. This is what enables efficient and faster CI/CD pipelines and DevOps practices.


What is Continuous Delivery?

Continuous Delivery is an extension of Continuous Integration whose primary purpose is to get the features that developers keep building out to end users as soon as possible.
During this process, code passes through several stages of QA, Staging, etc. before delivery to the PRODUCTION system.

Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.

Devops Continuous Integration vs Continuous delivery

Why Automate?

Developers and administrators usually must provision their infrastructure manually. Rather than relying on manual steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as Code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.
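A minimal sketch of IaC with the AWS CLI and CloudFormation (the template file, stack name, and parameter name are placeholders; Terraform, CDK, etc. follow the same idea):

# create or update the stack described in template.yaml
aws cloudformation deploy \
  --template-file template.yaml \
  --stack-name demo-webserver \
  --parameter-overrides InstanceType=t3.micro \
  --capabilities CAPABILITY_IAM

# the template lives in version control; re-running deploy applies only the changes
aws cloudformation describe-stacks --stack-name demo-webserver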

What is Puppet?

Puppet is a configuration management tool that is used to automate administration tasks.


What is Configuration Management?

Configuration Management is a systems engineering process. Applied over the life cycle of a system, it provides visibility and control of the system's performance and its functional and physical attributes, recording their status in support of change management.

Software Configuration Management Features are:

• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes

What is Vagrant and what are its uses?

Vagrant is a tool for creating and managing virtual environments for testing and developing software. Vagrant originally used VirtualBox as the hypervisor for its virtual environments, and it now also supports KVM (Kernel-based Virtual Machine).

What’s a PTR in DNS?

A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.


What testing is necessary to ensure a new service is ready for production?

Continuous testing

What is Continuous Testing?

It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.

What are the key elements of continuous testing?

Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.

How does HTTP work?

The HTTP protocol works in a client-server model, like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.

What is IaC? How you will achieve this?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning that the DevOps team uses for source code. This is achieved with tools such as Chef, Puppet, Ansible, CloudFormation, etc.

Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.

What are patterns and anti-patterns of software delivery and deployment?


What are Microservices?

Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps.

  • Small
  • Decoupled
  • Owned by self-contained teams

Version Control

What is a version control system?

A Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously
• Does not allow developers to overwrite each other's changes
• Maintains the history of every version
There are two types of Version Control Systems:
1. Centralized Version Control Systems, e.g. SVN
2. Distributed/Decentralized Version Control Systems, e.g. Git, Bitbucket

What is Source Control?

An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.

Source control basics: whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code changes, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can:

• Collaborate on code with your team.

• Isolate your work until it is ready.

• Quickly troubleshoot issues by identifying who made changes and what the changes were.

Source code management systems help streamline the development process and provide a centralized source for all your code.

What is Git and explain the difference between Git and SVN?

Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used to store our repositories on remote servers such as GitHub.

Git vs SVN:
• Git is a decentralized version control tool, whereas SVN is a centralized version control tool.
• Git keeps the local repository and the full history of the whole project on every developer's hard drive, so if there is a server outage you can easily recover from a teammate's local Git repo; SVN relies only on the central server to store all versions of the project files.
• Push and pull operations are fast in Git; they are slower in SVN.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• In Git, client nodes can share entire repositories on their local systems; in SVN, version history is stored only in the server-side repository.
• In Git, commits can be done offline too; in SVN, commits can only be done online.
• In Git, work is shared automatically by commit; in SVN, nothing is shared automatically.

Describe branching strategies?

Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.

Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.

Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have
progressed since the release was initiated earlier.

What are Pull requests?

Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements.
Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.

Linux

What are the default file permissions for a file and how can I modify them?

Default file permissions are rw-r--r-- (644): new files are created with mode 666 and the default umask of 022 is subtracted from it.
If I want to change the default file permissions, I need to change the umask, e.g. umask 027 (which would make new files rw-r-----).
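A quick demonstration (output assumes a typical default umask of 022):

umask            # prints 0022
touch newfile
ls -l newfile    # -rw-r--r--  (666 minus the 022 mask = 644)
umask 027        # stricter default for this shell session
touch private
ls -l private    # -rw-r-----  (666 minus 027 = 640)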

What is a  kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.

What is difference between grep -i and grep -v?

The -i option makes the match case-insensitive, while -v inverts the match, i.e. it prints only the lines that do not match the pattern.
Example:  ls | grep -i docker
Dockerfile
docker.tar.gz
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
Note that docker.tar.gz does not appear in the second listing, because -v excludes every line matching 'docker' (Dockerfile survives because the match is case-sensitive).

How can you allocate a specific amount of space to a file?

This is generally used to create swap space on a server. Let's say that on the machine below I have to create 1 GB of swap space; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
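After creating the file, a few more steps are needed before the kernel actually uses it as swap (a sketch; the path matches the example above):

chmod 600 /swapfile1     # restrict access to root
mkswap /swapfile1        # format the file as swap space
swapon /swapfile1        # enable it
swapon --show            # or free -h, to confirm the extra 1 GB of swap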

What is concept of sudo in linux?

Sudo(superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.

What are the checks to be done when a Linux build server become suddenly slow?

Perform a check on the following items:
1. System-level troubleshooting: check the various log files — application server logs, WebLogic logs, web server logs, application log files, HTTP logs — for any issues in server receive or response time, i.e. slowness. Check for any memory leaks in applications.
2. Application-level troubleshooting: perform a check on disk space, RAM, and I/O read-write issues.
3. Dependent-services troubleshooting: check whether there are any issues with the network, antivirus, firewall, or SMTP server response time.

Jenkins

What is Jenkins?

Jenkins is an open-source continuous integration tool written in Java. It keeps track of the version control system and initiates and monitors builds whenever changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.

What is the difference between Maven, Ant and Jenkins?

Maven and Ant are Build Technologies whereas Jenkins is a continuous integration(CI/CD) tool

What is continuous integration?

When multiple developers or teams are working on different segments of the same web application, we need to perform integration testing by integrating all the modules. To do that, an automated process runs against each piece of code on a daily basis so that all your code gets tested. This whole process is termed continuous integration.

Devops: Continuous Integration

Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.

The microservices architecture is a design approach to build a single application as a set of small services.

What are the advantages of Jenkins?

• Bug tracking is easy at early stage in development environment.
• Provides a very large numbers of plugin support.
• Iterative improvement to the code, code is basically divided into small sprints.
• Build failures are caught at the integration stage.
• For each code commit, an automatic build report notification gets generated.
• To notify developers about build report success or failure, it can be integrated with an LDAP mail server.
• Supports continuous integration in agile development and test-driven development environments.
• With simple steps, maven release project can also be automated.

Which SCM tools does Jenkins supports?

Source code management tools supported by Jenkins are below:
• AccuRev
• CVS
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase
• RTC

I have 50 jobs in the Jenkins dashboard and I want to build all of them at once. How?

In Jenkins there is an option called "Build after other projects are built". We can provide job names there, and when the parent job runs it will automatically trigger all the other jobs. Alternatively, we can use Pipeline jobs.

How can I integrate all the tools with Jenkins?

Navigate to Manage Jenkins and then Global Tool Configuration; there you provide all the details such as the Git installation, Java version, Maven version, paths, etc.

How to install Jenkins via Docker?

The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image & run it as a container in Docker using the
following docker run command:

• docker run -u root --rm -d -p 8080:8080 -p 50000:50000 -v jenkinsdata:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
• Proceed to the Post-installation setup wizard 
• Accessing the Jenkins/Blue Ocean Docker container:

docker exec -it jenkins-blueocean bash
• Accessing the Jenkins console log through Docker logs:

docker logs <docker-containername>Accessing the Jenkins home directorydocker exec -it <docker-container-name> bash

Bash – Shell scripting

Write a shell script to add two numbers

echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=`expr $a + $b`
echo "$a + $b = $c"

How to get a file that consists of last 10 lines of the some other file?

tail -n 10 source_file > new_file (write to a different file; redirecting back into the same file would truncate it before tail reads it)

How to check the exit status of the commands?

echo $?

How to get the information from file which consists of the word “GangBoard”?

grep “GangBoard” filename

How to search the files with the name of “GangBoard”?

find / -type f -name “*GangBoard*”

Write a shell script to print only prime numbers?

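A minimal bash sketch that prints the prime numbers from 2 up to a limit passed as the first argument (defaulting to 100), using trial division:

#!/bin/bash
limit=${1:-100}
for (( n=2; n<=limit; n++ )); do
    is_prime=1
    for (( d=2; d*d<=n; d++ )); do
        if (( n % d == 0 )); then
            is_prime=0
            break
        fi
    done
    (( is_prime )) && echo "$n"
done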

How to pass the parameters to the script and how can I get those parameters?

./scriptname.sh parameter1 parameter2
Inside the script, use $1, $2, … for the individual parameters and "$@" (or $*) for all of them.
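A small sketch showing how the positional parameters arrive inside the script (greet.sh is a made-up name):

#!/bin/bash
# invoked as: ./greet.sh Alice Bob
echo "first parameter : $1"    # Alice
echo "second parameter: $2"    # Bob
echo "all parameters  : $@"    # Alice Bob
echo "count           : $#"    # 2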

Monitoring – Refactoring

My application is not coming up for some reason. How can you bring it up?

We need to check the following:
• The network connection
• Whether the web server is receiving users' requests
• The logs
• The process IDs, to see whether the services are running or not
• Whether the application server is receiving users' requests (check the application server logs and processes)
• Whether a network-level 'connection reset' is happening somewhere

What is multifactor authentication? What is the use of it?

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.

• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform

I want to copy artifacts from one location to another location in the cloud. How?

Create two S3 buckets, one to use as the source, and the other to use as the destination and then create policies.
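With the AWS CLI, the copy itself can then be a one-liner (bucket names are placeholders; the IAM policy must allow reading the source and writing the destination):

# copy a single artifact
aws s3 cp s3://source-artifacts/app-1.4.0.jar s3://dest-artifacts/app-1.4.0.jar
# or mirror an entire prefix between the two buckets
aws s3 sync s3://source-artifacts/releases/ s3://dest-artifacts/releases/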

How to delete log files older than 10 days?

find . -type f -mtime +10 -name "*.log" -exec rm -f {} \; 2>/dev/null

Ansible

What are the Advantages of Ansible?

• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural

What’s the use of Ansible?

Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let's say we want to deploy one application to hundreds of nodes by executing a single command; that is where Ansible comes into the picture, though you should have some knowledge of Ansible scripts (playbooks) to understand or execute them.
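For example, assuming an inventory file that lists the target nodes under a [webservers] group, a single ad-hoc command or playbook run fans out to all of them (inventory.ini and deploy_app.yml are made-up names):

ansible all -i inventory.ini -m ping                                          # check connectivity to every node
ansible webservers -i inventory.ini -b -m apt -a "name=nginx state=present"   # install nginx on the whole group
ansible-playbook -i inventory.ini deploy_app.yml                              # or run a full playbook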

What are the Pros and Cons of Ansible?

Pros:
1. Open Source
2. Agent less
3. Improved efficiency, reduced cost
4. Less Maintenance
5. Easy to understand yaml files
Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management

What is the difference among chef, puppet and ansible?

• Interoperability: Ansible supports Windows clients, but the control server should be Linux/Unix; Chef and Puppet work only on Linux/Unix.
• Configuration language: Ansible uses YAML (and is written in Python); Chef uses a Ruby DSL; Puppet uses the Puppet DSL.
• Availability: Ansible has a single active node; Chef uses a primary server and a backup server; Puppet has a multi-master architecture.

How to access variable names in Ansible?

Using the hostvars dictionary we can access the variables of any host, like below:

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

Docker

What is Docker?

Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.

What is Docker image?

Docker image is the source of Docker container. Or in other words, Docker images are used to create containers.

What is a Docker Container?

Docker Container is the running instance of Docker Image

How to stop and restart the Docker container?

To stop the container: docker stop <container-id>
To restart the Docker container: docker restart <container-id>

What platforms does Docker run on?

Docker runs on only Linux and Cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+

Cloud:
• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace

Note that Docker does not run natively on Windows or Mac for production, as there is no support; however, you can use it for testing purposes, even on Windows.

What are the tools used for docker networking?

For Docker networking we generally use Kubernetes and Docker Swarm.

What is docker compose?

Let's say you want to run multiple Docker containers; in that case you create a docker-compose file and type the command docker-compose up. It will run all the containers mentioned in the docker-compose file.
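Typical usage, assuming a docker-compose.yml is present in the current directory:

docker-compose up -d      # build (if needed) and start every service in the background
docker-compose ps         # list the running containers of this compose project
docker-compose logs -f    # follow the combined logs
docker-compose down       # stop and remove the containers and the default network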

How to deploy docker container to aws?

Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services, we can launch our applications on it.

What is the fundamental disadvantage of Docker containers?

Data inside a container lives only as long as the container: once a container is destroyed you cannot recover any data inside it, the data is lost forever. Persistent storage for data inside containers must therefore be achieved using volumes mounted from an external source such as the host machine or an NFS driver.

What are Docker Engine and Docker Compose?

Docker Engine talks to the Docker daemon on the machine and creates the runtime environment and process for any container; Docker Compose links several containers together to form a stack, used for creating application stacks like LAMP, WAMP, and XAMPP.

What are the different modes a container can be run in?

A Docker container can be run in two modes:
Attached: the container runs in the foreground of the system you are running it on; it gives you a terminal inside the container when the -t option is used, and every log is redirected to the stdout screen.
Detached: this mode is typically used in production, where the container runs as a background process and all output from the container is redirected to log files under /var/lib/docker/containers/<container-id>/<container-id>-json.log, which can be viewed with the docker logs command.

What will the output of the docker inspect command be?

docker inspect <container-id> will give output in JSON format, which contains details like the IP address of the container inside the Docker virtual bridge, volume mount information, and every other piece of information related to the host (or) container, such as the underlying storage driver and the log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID…]
Options:
• --format, -f: format the output using the given Go template
• --size, -s: display total file sizes if the type is container
• --type: return JSON for a specified type

What is docker swarm?

A group of virtual machines running Docker Engine can be clustered and maintained as a single system, with the resources shared by the containers; the Docker Swarm manager schedules Docker containers on any of the machines in the cluster according to resource availability.
docker swarm init can be used to initiate a Docker Swarm cluster, and running docker swarm join with the manager IP on a client joins that node into the swarm cluster.

What are Docker volumes and what sort of volume should be used to achieve persistent storage?

Docker volumes are filesystem mount points created by the user for a container, and a volume can be used by multiple containers. There are different sorts of volume mounts available — empty dir, host (bind) mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud volumes, or even NFS and CIFS filesystems — so a volume should be mounted to an external drive to achieve persistent storage, because files inside a container live only as long as the container exists; if the container is deleted, the data is lost.

How do you version-control Docker images?

Docker images can be version-controlled using tags; you can assign a tag to any image using the docker tag <image-id> command. If you push to the Docker Hub registry without tagging, the default tag latest is assigned: even if an image tagged latest is already present, the registry takes the untagged image and reassigns latest to the most recently pushed image.
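For example (the registry path, image ID, and version numbers are placeholders):

docker tag 3f1c2a8d9b10 registry.example.com/myapp:1.4.0    # give the image an explicit version tag
docker tag 3f1c2a8d9b10 registry.example.com/myapp:latest   # optionally move the latest tag too
docker push registry.example.com/myapp:1.4.0
docker push registry.example.com/myapp:latest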

What is difference between docker image and docker container?

A Docker image is a read-only template that contains the instructions for a container to start.
Docker container is a runnable instance of a docker image.

What is Application Containerization?

It is a process of OS Level virtualization technique used to deploy the application without launching the entire VM for each application where multiple isolated applications or services can access the same Host and run on the same OS.

What is the syntax for building a Docker image?

docker build -f <Dockerfile> -t imagename:version .

What is the syntax for running a Docker image?

docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v <hostvolume>:<containervolume> imagename:version

How to log into a container?

docker exec -it <container-id> /bin/bash

Git

What does the commit object contain?

Commit object contain the following components:
It contains a set of files representing the state of the project at a given point in time, and references to parent commit objects.
It also has an SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the hash).

Explain the difference between git pull and git fetch?

Git pull command basically pulls any new changes or commits from a branch from your central repository and updates your target branch in your local repository.
Git fetch is also used for the same purpose, but it works slightly differently from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in your target branch, git fetch must be followed by a git merge. Our target branch will only be updated after merging the target branch and the fetched branch. Just to make it easy for us, remember the equation below:
Git pull = git fetch + git merge
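In commands (origin/master is used for illustration):

git fetch origin                            # download new commits, leave your branch untouched
git log master..origin/master --oneline     # review what came in
git merge origin/master                     # integrate when you are ready
git pull origin master                      # the same two steps in one shot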

How do we know in Git if a branch has already been merged into master?

git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.

What is ‘Staging Area’ or ‘Index’ in GIT?

Before committing a file, it must be formatted and reviewed in an intermediate area known as the 'Staging Area' or 'Index'. Files are placed there with the git add command.

What is Git Stash?

Let's say you've been working on part of your project, things are in a messy state, and you want to switch branches for some time to work on something else. The problem is, you don't want to commit your half-done work just so you can get back to this point later. The answer to this issue is git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.
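A typical flow (the branch name is illustrative):

git stash                  # shelve modified tracked files and staged changes
git checkout hotfix-123    # switch branches and work on something else
# ...later, back on the original branch...
git stash pop              # re-apply the most recent stash and drop it from the stack
# or: git stash apply      # re-apply but keep it on the stack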

What is Git stash drop?

Git ‘stash drop’ command is basically used to remove the stashed item. It will basically remove the last added stash item by default, and it can also remove a specific item if you include it as an argument.
I have provided an example below:
If you want to remove any particular stash item from the list of stashed items you can use the below commands:
git stash list: It will display the list of stashed items as follows:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log
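Then, to drop a specific entry from that list (the index in braces selects which stash):

git stash drop stash@{1}   # removes only the second stashed item
git stash drop             # with no argument, removes the most recent stash (stash@{0})
git stash clear            # removes every stashed entry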

What is the function of ‘git config’?

Git uses our username to associate commits with an identity. The git config command can be used to change our Git configuration, including your username.
Suppose you want to give a username and email id to associate commit with an identity so that you can know who has made a commit. For that I will use:
git config --global user.name "Your Name": This command will add your username.
git config --global user.email "Your E-mail Address": This command will add your email id.

How can you create a repository in Git?

To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.

What language is used in Git?

Git is written in the C language; since it is written in C, it is very fast and reduces the overhead of runtimes.

What is SubGit?

SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository and uses both Subversion and Git if you like.

How can you clone a Git repository via Jenkins?

First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.

What are the advantages of using Git?

1. Data redundancy and replication
2. High availability
3. Only one .git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used for any sort of project.

What is git add?

It adds the file changes to the staging area

What is git commit? 

Commits the staged changes to the local repository (HEAD)

What is git push?

Sends the changes to the remote repository

What is git checkout?

Switch branch or restore working files

What is git branch?

Creates a branch

What is git fetch?

Fetch the latest history from the remote server and updates the local repo

What is git merge?

Joins two or more branches together

What is git pull?

Fetch from and integrate with another repository or a local branch (git fetch + git merge)

What is git rebase?

Process of moving or combining a sequence of commits to a new base commit

What is git revert?

To revert a commit that has already been published and made public

What is git clone?

Clones the git repository and creates a working copy in the local machine

How can I modify the commit message in git?

Use the following command and enter the required message:
git commit --amend

How do you handle merge conflicts in Git?

Follow the steps
1. Create Pull request
2. Modify according to the requirement by sitting with developers
3. Commit the correct file to the branch
4. Merge the current branch with master branch.

What is the Git command to send your modifications to the master branch of your remote repository?

Use the command “git push origin master”

NOSQL

What are the benefits of a NoSQL database over an RDBMS?

Benefits:
1. ETL is very low
2. Support for structured text is provided
3. Changes in periods are handled
4. Key Objectives Function.
5. The ability to scale horizontally
6. Many data structures are provided.
7. Vendors may be selected

Maven

What is Maven?

Maven is a DevOps tool used for building Java applications that helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.

Numpy

What is Numpy

There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing, providing a powerful n-dimensional array object and tools to integrate with C, C++, and so on. NumPy is a package library for Python that adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions: financial functions, linear algebra, statistics, polynomials, sorting and searching, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.

Why is python numpy better than lists?

Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.

Describe the map function in Python?

The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.

How to generate an array of ‘100’ random numbers sampled from a standard normal distribution using Numpy

###

import numpy as np
a = np.random.randn(100)  # randn samples from the standard normal distribution (rand would be uniform)
print(type(a))
print(a)
###
 

will create 100 random numbers generated from standard normal
distribution with mean 0 and standard deviation 1.


How to count the occurrence of each value in a numpy array?

Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
The argument to bincount() must consist of booleans or non-negative integers; negative integers are invalid.

Output: [4 1 1 1 2 3 0 0 0 2]

Does Numpy Support Nan?

nan, short for “not a number”, is a special floating point value defined by the IEEE-754
specification. Python numpy supports nan but the definition of nan is more system
dependent and some systems don’t have an all round support for it like older cray and vax
computers.

What does ravel() function in numpy do? 

It flattens a multi-dimensional numpy array into a one-dimensional array (returning a view of the original where possible).

How to remove from one array those items that exist in another? 

>>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From 'a' remove all elements also present in 'b'
>>> np.setdiff1d(a, b)
# Output (sorted):
array([2, 3, 5])

How to reverse a numpy array in the most efficient way?

>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]

How to calculate percentiles when using numpy?

>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)

Output:  37.400000000000006

What Is The Difference Between Numpy And Scipy?

NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.

What Is The Preferred Way To Check For An Empty (zero Element) Array?

For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size

What Is The Difference Between Matrices And Arrays?

Matrices can only be two-dimensional, whereas arrays can have any number of
 dimensions

How can you find the indices of an array where a condition is true?

Given an array arr, the condition arr > 3 returns a boolean array; passing that condition to np.where() (or np.argwhere()) gives the indices where it is true, since True/False are interpreted as 1/0 in Python and NumPy.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
array([[ True,  True,  True], [ True,  True,  True], [False, False, False]])
>>> np.where(arr > 3)
(array([0, 0, 0, 1, 1, 1]), array([0, 1, 2, 0, 1, 2]))

How to find the maximum and minimum value of a given flattened array?

>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)

Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis.

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.ptp(arr, 1)

Find median of a numpy flattened array

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.median(arr)

Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis

>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)

Calculate covariance matrix between two numpy arrays

>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)

Compute product-moment correlation coefficients of two given numpy arrays

>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)

Develop a numpy program to compute the histogram of nums against the bins

>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)

Get the powers of an array values element-wise

>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)

Write a NumPy program to get true division of the element-wise array inputs

>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)

Panda

What is a series in pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.

What features make Pandas such a reliable option to store tabular data?

Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.

What is re-indexing in pandas?

Reindexing is used to conform DataFrame to a new index with optional filling logic. It places NA/NaN in that location where the values are not present in the previous index. It returns a new object unless the new index is produced as equivalent to the current one, and the value of copy becomes False. It is used to change the index of the rows and columns of the DataFrame.

How will you create a series from dict in Pandas?

A Series is defined as a one-dimensional array that is capable of storing various data
types.

import pandas as pd
info = {'x' : 0., 'y' : 1., 'z' : 2.}
a = pd.Series(info)

How can we create a copy of the series in Pandas?

Use the pandas.Series.copy method:
import pandas as pd
s = pd.Series([1, 2, 3])
s_copy = s.copy(deep=True)

 

What is groupby in Pandas?

GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters and makes the task of splitting the data quick and
easy.

What is vectorization in Pandas?

Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.

Different types of Data Structures in Pandas

Pandas provide two data structures, which are supported by the pandas library, Series,
and DataFrames. Both of these data structures are built on top of the NumPy.

What Is Time Series In pandas

A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.

How to convert pandas dataframe to numpy array?

The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array and the copy ensures the
returned value is not a view on another array.

Write a Pandas program to get the first 5 rows of a given DataFrame

>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas']}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data, index=labels)
>>> df.iloc[:5]

Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data. 

>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])

Write a Python program to convert a Pandas Series to a Python list and check its type.

>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())

Develop a Pandas program to add, subtract, multiple and divide two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2

Develop a Pandas program to compare the elements of the two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2

Develop a Pandas program to change the data type of given a column or a Series.

>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2

Write a Pandas program to convert Series of lists to one Series

>>> import pandas as pd
>>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)

Write a Pandas program to create a subset of a given series based on value and condition

>>> import pandas as pd
>>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s

Develop a Pandas code to alter the order of index in a given series

>>> import pandas as pd
>>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index = ['B', 'A', 'C', 'D', 'E'])

Write a Pandas code to get the items of a given series not present in another given series.

>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result

What is the difference between the two data series df[‘Name’] and df.loc[:’Name’]?

First one is a view of the original dataframe and second one is a copy of the original dataframe.

Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.

>>> import pandas as pd
>>> import numpy as np
>>> np.random.RandomState(100)
>>> num_series = pd.Series(np.random.randint(1, 5, [15]))
>>> result = num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'

Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.

>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5==0)

How will you add a column to a pandas DataFrame?

# importing the pandas library
>>> import pandas as pd
>>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20, 40, 60], index=['a', 'b', 'c'])

How to iterate over a Pandas DataFrame?

You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.

Python

What type of language is python? Programming or scripting?

Python is capable of scripting, but in general sense, it is considered as a general-purpose
programming language.

Is python case sensitive?

Yes, python is a case sensitive language.

What is a lambda function in python?

An anonymous function is known as a lambda function. This function can have any
number of parameters but can have just one statement.

What is the difference between range and xrange in Python?

xrange and range are exactly the same in terms of functionality. The only difference is that range returns a Python list object, while xrange returns an xrange object (xrange exists only in Python 2; in Python 3, range behaves like xrange).

What are docstrings in python?

Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.

Whenever Python exits, why isn’t all the memory deallocated?

Whenever Python exits, especially those Python modules which are having circular
references to other objects or the objects that are referenced from the global namespaces are not always de-allocated or freed. It is impossible to de-allocate those portions of
memory that are reserved by the C library. On exit, because of having its own efficient
clean up mechanism, Python would try to de-allocate/destroy every other object.

What does this mean: *args, **kwargs? And why would we use it?

We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.

What is the difference between deep and shallow copy?

A shallow copy creates a new object but copies only references to the members found in the original, so changes to mutable members are visible through both copies.
A deep copy creates a new object and recursively copies the members themselves, so the copy does not share references with the original and can be modified independently.

Define encapsulation in Python?

Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.

Does python make use of access specifiers?

Python does not deprive access to an instance variable or function. Python lays down the concept of prefixing the name of the variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.

What are the generators in Python?

Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.

Write a Python script to Python to find palindrome of a sequence

a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")

How will you remove the duplicate elements from the given list?

The set is another type available in Python. It doesn't allow duplicates and provides useful set operations such as union, difference, etc.
>>> list(set(a))

Does Python allow arguments Pass by Value or Pass by Reference?

Neither the arguments are Pass by Value nor does Python supports Pass by reference.
Instead, they are Pass by assignment. The parameter which you pass is originally a reference to the object not the reference to a fixed memory location. But the reference is
passed by value. Additionally, some data types like strings and tuples are immutable whereas others are mutable.

What is slicing in Python?

Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.

Why is the “pass” keyword used in Python?

The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.

What are decorators in Python?

Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion

What is the key difference between lists and tuples in python?

The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.

What is self in Python?

Self is the conventional name used in Python for the first parameter of instance methods; it refers to the instance (object) of the class. Unlike in Java, where it is implicit, in Python it is passed explicitly as the first parameter. It helps in distinguishing the methods and attributes of a class from its local variables.

What is PYTHONPATH in Python?

PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.

What is the difference between .py and .pyc files?

.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.

What is namespace in Python?

In Python, every name introduced has a place where it lives and can be hooked for. This is known as namespace. It is like a box where a variable name is mapped to the object placed. Whenever the variable is searched out, this box will be searched, to get the corresponding object.

What is pickling and unpickling?

The pickle module accepts almost any Python object, converts it into a serialized byte stream, and dumps it into a file using the dump function; this process is called pickling. The process of retrieving the original Python objects from the stored byte stream is called unpickling.
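A minimal sketch (the file name data.pkl is arbitrary):

import pickle

data = {"name": "alice", "scores": [1, 2, 3]}

with open("data.pkl", "wb") as f:
    pickle.dump(data, f)        # pickling: object -> byte stream on disk

with open("data.pkl", "rb") as f:
    restored = pickle.load(f)   # unpickling: byte stream -> original object

print(restored == data)         # True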

How is Python interpreted?

Python is an interpreted language: a Python program runs directly from the source code. The interpreter converts the source code written by the programmer into an intermediate form (bytecode), which is then translated into the machine instructions that are executed.

Jupyter Notebook

What is the main use of a Jupyter notebook?

Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.

How do I increase the cell width of the Jupyter/ipython notebook in my browser?

>>> from IPython.core.display import display, HTML
>>> display(HTML("<style>.container { width:100% !important; }</style>"))

How do I convert an IPython Notebook into a Python file via command line?

>> jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb

How to measure execution time in a jupyter notebook?

>> %%time is an inbuilt magic command that reports the CPU and wall-clock time taken to run the cell it is placed in.
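For example, placing %%time at the top of a cell times that cell (the computation below is arbitrary):

%%time
total = sum(i * i for i in range(1_000_000))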

How to run a jupyter notebook from the command line?

>> jupyter nbconvert --to notebook --execute nb.ipynb

How to make inline plots larger in jupyter notebooks?

Use figure size.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')

How to display multiple images in a jupyter notebook?

>>> for ima in images:
...     plt.figure()
...     plt.imshow(ima)

Why is the Jupyter notebook interactive code and data exploration friendly?

The ipywidgets package provides many common user interface controls for exploring code and data interactively.

What is the default formatting option in jupyter notebook?

Markdown is the default formatting option for text (non-code) cells in a Jupyter notebook.

What are kernel wrappers in jupyter?

Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.

What are the advantages of custom magic commands?

Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.

Is the jupyter architecture language dependent?

No. It is language independent

Which tools allow jupyter notebooks to easily convert to pdf and html?

nbconvert converts notebooks to PDF and HTML, while nbviewer renders notebooks on the web.

What is a major disadvantage of a Jupyter notebook?

It is very hard to run long asynchronous tasks, and it is less secure.

In which domain is the jupyter notebook widely used?

It is mainly used for data analysis and machine learning related tasks.

What are alternatives to jupyter notebook?

PyCharm interact, VS Code Python Interactive etc.

Where can you make configuration changes to the jupyter notebook?

In the IPython config file located at ~/.ipython/profile_default/ipython_config.py; the notebook server itself also has its own config file at ~/.jupyter/jupyter_notebook_config.py.

Which magic command is used to run python code from jupyter notebook?

%run can execute python code from .py files

How to pass variables across the notebooks in Jupyter?

The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to a different notebook'
>>> %store data
# Stored 'data' (str)
# In the new notebook
>>> %store -r data
>>> print(data)

Export the contents of a cell/Show the contents of an external script

Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.
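For example (the file name helper.py is arbitrary), the cell below writes its body to disk:

%%writefile helper.py
def greet(name):
    return f"Hello, {name}!"

Running %pycat helper.py in another cell then shows the syntax-highlighted contents of that file.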

What inbuilt tool we use for debugging python code in a jupyter notebook?

Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.

How to make high resolution plots in a jupyter notebook?

>> %config InlineBackend.figure_format = 'retina'

How can one use latex in a jupyter notebook?

When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.

What is JupyterLab?

It is the next-generation user interface for conventional Jupyter notebooks. Users can drag and drop cells, arrange the code workspace, and see live previews. It is still in an early stage of development.

What is the biggest limitation for a Jupyter notebook?

Code versioning, management, and debugging do not scale well in the current Jupyter notebook.

Cloud Computing


Which are the different layers that define cloud architecture?

Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus

Explain Cloud Service Models?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (Daas)

What are Hybrid clouds?

Hybrid clouds are made up of both public clouds and private clouds. This model is often preferred because it applies the most robust approach to implementing a cloud architecture.
A hybrid cloud has the features and performance of both private and public clouds. It also has the important property that a cloud created by one organization can have its control given to another organization.

Explain Platform as a Service (Paas)?

It is also a layer in the cloud architecture. Platform as a Service is responsible for providing complete virtualization of the infrastructure layer, making it look like a single server and keeping it invisible to the outside world.

What is the difference in cloud computing and Mobile Cloud computing?

Mobile cloud computing and cloud computing share the same underlying concept: in mobile cloud computing, the cloud is simply accessed from a mobile device, and most tasks can be performed from the mobile. These applications run on remote cloud servers and give the user the rights to access and manage storage from the device.

What are the security aspects provided with the cloud?

There are 3 types of Cloud Computing Security:
● Identity Management: It authorizes the application services.
● Access Control: Users need permissions so that they can control the access of other users entering the cloud environment.
● Authentication and Authorization: Allows only authenticated and authorized users to access the data and applications.

What are system integrators in cloud computing?

System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly.
A person or a company which specializes in system integration is called as a system integrator.

What is the usage of utility computing?

Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges for specific usage rather than a flat rate.
Utility computing lets an organization decide what type of services have to be deployed from the cloud and facilitates paying only for what is used.

What are some large cloud providers and databases?

Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL

Explain the difference between cloud and traditional data centers.

In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software costs, so not only is the initial cost higher, but the maintenance cost is also a problem.
A cloud, by contrast, can be scaled when there is an increase in demand, and most of the expenditure on maintaining data centers is not faced in cloud computing.

What is hypervisor in Cloud Computing?

A hypervisor is a virtual machine monitor that logically manages resources for virtual machines. It allocates, partitions, isolates, or changes resources for the virtual machines that run on it.
A hardware (bare-metal) hypervisor allows multiple guest operating systems to run on a single host system at the same time.

Define what MultiCloud is?

Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.

What is a multi-cloud strategy?

The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.

What is meant by Edge Computing, and how is it related to the cloud?

Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.

What are disadvantages of SaaS cloud computing layer

1) Security
Data is stored in the cloud, so security may be an issue for some users; a cloud deployment is not necessarily more secure than an in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility that there may be greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications whose demand response time is in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the other SaaS product.

What is IaaS in Cloud Computing?

IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this model, the provider offers IT infrastructure such as servers, processing, storage, virtual machines, and other resources. Customers can access these resources easily over the internet using an on-demand, pay-as-you-go model.

Explain what is the use of “EUCALYPTUS” in cloud computing?

EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform, and with its help public, private, and hybrid clouds can be built. It can turn your own data centers into a cloud and allows you to offer that functionality to many other organizations.
When you add a software stack, such as an operating system and applications, on top of the service, the model shifts to Software as a Service; Microsoft's Windows Azure Platform, for example, is often presented as using a SaaS model.

Name the foremost refined and restrictive service model?

The most refined and restrictive service model is PaaS. When the service requires the consumer to use a complete hardware/software/application stack, it is using the most refined and restrictive service model.

Name all the kind of virtualization that are also characteristics of cloud computing?

Storage, application, and CPU virtualization. To enable these characteristics, resources must be highly configurable and flexible.

What Are Main Features Of Cloud Services?

Some important features of the cloud service are given as follows:
• Accessing and managing the commercial software.
• Centralizing the activities of management of software in the Web environment.
• Developing applications that are capable of managing several clients.
• Centralizing the updating of software, which eliminates the need to download upgrades

What Are The Advantages Of Cloud Services?

Some of the advantages of cloud service are given as follows:
• Helps in the utilization of investment in the corporate sector and is therefore cost saving.
• Helps in developing scalable and robust applications. Previously, scaling took months, but now it takes much less time.
• Helps in saving time on deployment and maintenance.

Mention The Basic Components Of A Server Computer In Cloud Computing?

The hardware components of a server computer in cloud computing largely match those used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, and power supply.

What are the advantages of auto-scaling?

Following are the advantages of autoscaling
● Offers fault tolerance
● Better availability
● Better cost management


Azure Cloud


Which Services Are Provided By Window Azure Operating System?

Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management

Which service in Azure is used to manage resources in Azure?

Azure Resource Manager is used to “manage” infrastructures that involve a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON template.

Which web applications can be deployed with Azure?

Microsoft has also released SDKs for both Java and Ruby to allow applications written in those languages to place calls to the Azure Services Platform API and the AppFabric Service.

What are Roles in Azure and why do we use them?

Roles are, in layman's terms, nothing but servers. These servers are managed, load-balanced, Platform as a Service virtual machines that work together to achieve a common goal.
There are 3 types of roles in Microsoft Azure:
● Web Role
● Worker Role
● VM Role
Let’s discuss each of these roles in detail:
Web Role – A web role is basically used to deploy a website, using languages supported by the IIS platform such as PHP and .NET. It is configured and customized to run web applications.
Worker Role – A worker role is more of a helper to the web role; it is used to execute background processes, unlike the web role, which is used to deploy the website.
VM Role – The VM role is used by a user to schedule tasks and other Windows services.
This role can be used to customize the machines on which the web and worker roles are running.

What is Azure as PaaS?

PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.

What are Break-fix issues in Microsoft Azure?

In Microsoft Azure, technical problems are called break-fix issues. This term is used when “work is involved” in supporting a technology after it fails in the normal course of its function.

Explain Diagnostics in Windows Azure

Windows Azure Diagnostics offers the facility to store diagnostic data. In Azure, some diagnostic data is stored in tables, while some is stored in blobs. The diagnostic monitor runs in Windows Azure as well as in the compute emulator to collect data for a role instance.

State the difference between verbose and minimal monitoring.

Verbose monitoring collects metrics based on performance and allows a close analysis of the data fed in while the application runs.
Minimal monitoring, on the other hand, is the default configuration method; it makes use of performance counters gathered from the operating system of the host.

What is the main difference between the repository and the powerhouse server?

The main difference between them is that repository servers are responsible for the integrity, consistency, and uniformity of the data, while the powerhouse server governs the integration of the different aspects of the database repository.

Explain command task in Microsoft Azure

A command task is an operational window that lets you run one or more commands (for example, shell or DOS commands) while the system is running.

What is the difference between Azure Service Bus Queues and Storage Queues?

Two types of queue mechanisms are supported by Azure: Storage queues and Service Bus queues.
Storage queues: These are part of the Azure storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface. They provide persistent and reliable messaging within and between services.
Service Bus queues: These are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.

Explain Azure Service Fabric.

Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment and management of highly scalable and customizable applications.
The applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.

Define the Azure Redis Cache.

Azure Redis Cache is a managed, in-memory cache based on open-source Redis. It helps web applications fetch data from a backend data source into the cache and serve pages from the cache to improve application performance, and it provides a powerful and secure way to cache the application's data in the Azure cloud.
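From application code, connecting to an Azure Redis Cache works like connecting to any Redis server over TLS. A hedged sketch with the redis-py package (the cache host name and access key below are placeholders):

import redis

cache = redis.StrictRedis(
    host="mycache.redis.cache.windows.net",  # placeholder cache name
    port=6380,                               # TLS port used by Azure Redis
    password="<access-key>",                 # placeholder access key
    ssl=True,
)

cache.set("greeting", "hello from the cache")
print(cache.get("greeting"))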

How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?

TWO. And if we do so, the role would have external connectivity at least 99.95% of the time.

What are the options to manage session state in Windows Azure?

● Windows Azure Caching
● SQL Azure
● Azure Table

What is cspack?

It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.

What is csrun?

It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.

How to design applications to handle connection failure in Windows Azure?

The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
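The Transient Fault Handling Application Block itself is a .NET library; purely as a language-agnostic illustration, here is a minimal Python sketch of the exponential back-off idea (the operation callable and retry limits are assumptions, not part of the block's API):

import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry a flaky operation, doubling the delay (with jitter) after each failure."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)

# usage (hypothetical): call_with_backoff(lambda: connect_to_database())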

What is Windows Azure Diagnostics?

Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.

What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?

Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports dead-letter queues as well as publish/subscribe, Web service remoting, and integration patterns.

What is the use of Azure Active Directory?

Azure Active Directory is an identity and access management system. It is very similar to on-premises Active Directory and allows you to grant your employees access to specific products and services within your network.

Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?

This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.

What are virtual machine scale sets in Azure?

Virtual machine scale sets are an Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required, so it is easier to build large-scale services that target big compute, big data, and containerized workloads.

Are data disks supported within scale sets?

Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include:
● Azure files (SMB shared drives)
● OS drive
● Temp drive (local, not backed by Azure Storage)
● Azure data service (for example, Azure tables, Azure blobs)
● External data service (for example, remote database)

What is the difference between the Windows Azure Platform and Windows Azure?

The former is Microsoft’s PaaS offering including Windows Azure, SQL Azure, and AppFabric; while the latter is part of the offering and Microsoft’s cloud OS.

What are the three main components of the Windows Azure Platform?

Compute, Storage and AppFabric.

Can you move a resource from one group to another?

Yes, you can. A resource can be moved among resource groups.

How many resource groups a subscription can have?

A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.

Explain the fault domain.

This is one of the common Azure interview questions. A fault domain is a logical working domain in which the underlying hardware shares a common power source and switch network. When VMs are created, Azure distributes them across fault domains, which limits the potential impact of hardware failures, power interruptions, or network outages.

Differentiate between the repository and the powerhouse server?

Repository servers are responsible for the integrity, consistency, and uniformity of the data, whereas the powerhouse server governs the integration of the different aspects of the database repository.


AWS Cloud


Explain what S3 is?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is “pay as you go.”

What is AMI?

AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.

Mention what the relationship between an instance and AMI is?

From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different computer and memory capabilities. Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.
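As an illustration with boto3 (the AMI ID and instance type below are placeholders, and valid AWS credentials are assumed):

import boto3

ec2 = boto3.client("ec2")

# Launch one instance from a given AMI (placeholder ID).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])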

How many buckets can you create in AWS by default?

By default, you can create up to 100 buckets in each of your AWS accounts.

Explain can you vertically scale an Amazon instance? How?

Yes, you can vertically scale an Amazon instance. To do so:
● Spin up a new, larger instance than the one you are currently running
● Pause that instance and detach the root EBS volume from the server and discard it
● Then stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● And start it again

Explain what T2 instances are?

T2 instances are designed to provide a moderate baseline performance and the capability to burst to higher performance as required by the workload.

In VPC with private and public subnets, database servers should ideally be launched into which subnet?

With private and public subnets in a VPC, database servers should ideally be launched into private subnets.

Mention what the security best practices for Amazon EC2 are?

For secure Amazon EC2 best practices, follow the following steps
● Use AWS identity and access management to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up permissions that you require
● Disable password-based logins, for example for instances launched from your AMI

Is the property of broadcast or multicast supported by Amazon VPC?

No, currently Amazon VPC does not provide support for broadcast or multicast.

How many Elastic IPs does AWS allow you to create?

By default, 5 VPC Elastic IP addresses are allowed per region for each AWS account.

Explain default storage class in S3

The default storage class is S3 Standard, which is designed for frequently accessed data.

What are the Roles in AWS?

Roles are used to provide permissions to entities which you can trust within your AWS account.
Roles are very similar to users. However, with roles, you do not require to create any username and password to work with the resources.

What are the edge locations?

An edge location is the place where content is cached. When a user tries to access content, it is automatically looked up in the nearest edge location.

Explain snowball?

Snowball is a data transport option. It uses secure physical appliances to move large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another, which helps reduce networking costs.

What is a redshift?

Redshift is a big data warehouse product: a fast, powerful, and fully managed data warehouse service in the cloud.

What is meant by subnet?

A subnet is one of the smaller chunks that a large block of IP addresses is divided into.

Can you establish a Peering connection to a VPC in a different region?

Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.

What is SQS?

SQS stands for Simple Queue Service. It is a distributed queuing service that acts as a mediator between two components so they can exchange messages asynchronously.

How many subnets can you have per VPC?

You can have 200 subnets per VPC.

What is Amazon EMR?

EMR (Elastic MapReduce) is a managed cluster platform that lets you run Apache Hadoop and Apache Spark on Amazon Web Services to investigate and process large amounts of data. You can prepare data for analytics and marketing-intelligence workloads using Apache Hive and other relevant open-source projects.

What is boot time taken for the instance stored backed AMI?

The boot time for an Amazon instance store-backend AMI is less than 5 minutes.

Do you need an internet gateway to use peering connections?

No, an internet gateway is not required to use VPC (virtual private cloud) peering connections; traffic between peered VPCs travels over private IP addresses.

How to connect an EBS volume to multiple instances?

Ordinarily, an EBS volume cannot be attached to multiple instances at the same time. However, you can attach multiple EBS volumes to a single instance.

What are the different types of Load Balancer in AWS services?

Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer

In which situation you will select provisioned IOPS over standard RDS storage?

You should select provisioned IOPS storage over standard RDS storage if you want to perform batch-related workloads.

What are the important features of Amazon cloud search?

Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Full-text search
● Autocomplete suggestions

What is AWS CDK?

AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
• Create and provision AWS infrastructure deployments predictably and repeatedly.
• Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling.
• Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
• Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.
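As an illustration, a minimal CDK app in Python might look like the following (CDK v2 style; the stack and bucket names are arbitrary):

import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

app = cdk.App()
stack = cdk.Stack(app, "DemoStack")

# Define an S3 bucket as part of the stack; `cdk deploy` turns this into CloudFormation.
s3.Bucket(stack, "DemoBucket", versioned=True)

app.synth()  # emits the CloudFormation template to cdk.out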

What are best practices for controlling access to AWS CodeCommit?

– Create your own policy
– Provide temporary access credentials to access your repo
* Typically done via a separate AWS account for IAM and separate accounts for dev/staging/prod
* Federated access
* Multi-factor authentication

What is AWS CodeBuild?

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.

How does AWS CodeBuild work?

1- Provide AWS CodeBuild with a build project. A build project file contains information about where to get the source code, the build environment, and how to build the code. The most important component is the BuildSpec file.
2- AWS CodeBuild creates the build environment. A build environment is a combination of OS, programming language runtime, and other tools needed to build.
3- AWS CodeBuild downloads the source code into the build environment and uses the BuildSpec file to run a build. This code can be from any source provider; for example, GitHub repository, Amazon S3 input bucket, Bitbucket repository, or AWS CodeCommit repository.
4- Build artifacts produced are uploaded into an Amazon S3 bucket.
5- The build environment sends a notification about the build status.
6- While the build is running, the build environment sends information to Amazon CloudWatch Logs.
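Builds can also be started programmatically; for example, a hedged sketch with boto3 (the project name is a placeholder and the CodeBuild project is assumed to already exist):

import boto3

codebuild = boto3.client("codebuild")

# Kick off a build for an existing CodeBuild project and print its ID and status.
build = codebuild.start_build(projectName="my-sample-project")
print(build["build"]["id"], build["build"]["buildStatus"])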

What is AWS CodeDeploy?

AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.

You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.

With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.

You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.

You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.

AWS CodeDeploy deploys updates in revisions, so if there is an issue during a deployment, you can easily roll back and deploy a previous revision.

What is AWS CodeCommit?

AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.

What is AWS Opswork?

AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.

Chef and Puppet enable you to use code to automate your configurations.

AWS OpsWorks for Puppet Enterprise: A fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet, for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up your server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining their infrastructure, and gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.

AWS OpsWorks for Chef Automate: Offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes.

AWS OpsWorks Stacks: With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages and languages or frameworks, and configuring software


Google Cloud Platform


What are the main advantages of using Google Cloud Platform?

Google Cloud Platform is a medium that provides its users access to the best cloud services and features. It is gaining popularity among cloud professionals as well as users for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others:
● GCP offers much better pricing deals compared to the other cloud service providers
● Google Cloud servers allow you to work from anywhere and have access to your information and data
● For hosting cloud services, GCP offers overall better performance and service
● Google Cloud is very fast in providing updates about servers and security in a better and more efficient manner
● The security level of Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with various security measures
If you are going for a Google Cloud interview, you should prepare yourself with enough knowledge of the Google Cloud Platform.

Why should you opt to Google Cloud Hosting?

The reason for opting for Google Cloud Hosting is the advantages it offers. Here are the advantages of choosing Google Cloud Hosting:
● Availability of better pricing plans
● Benefits of live migration of virtual machines
● Enhanced performance and execution
● Commitment to constant development and expansion
● The private network provides efficiency and maximum uptime
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability

What are the libraries and tools for cloud storage on GCP?

At the core level, the XML API and JSON API are available for Cloud Storage on Google Cloud Platform. Along with these, Google provides the following options to interact with Cloud Storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage client libraries, which provide programming support for various languages, including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface to Cloud Storage

There are also many third-party libraries and tools, such as the Boto library.

What do you know about Google Compute Engine?

Google Compute Engine is the basic compute component of the Google Cloud Platform.
Google Compute Engine is an IaaS product that offers self-managed and flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines that run on KVM, with local and durable storage options.
It also includes a REST-based API for control and configuration purposes. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery in order to extend its computational ability and thus enables more sophisticated and complex applications.

How are the Google Compute Engine and Google App Engine related?

Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is the IaaS product whereas Google App Engine is a PaaS product of Google.
Google App Engine is generally used to run web-based applications, mobile backends, and line of business. If you want to keep the underlying infrastructure in more of your control, then Compute Engine is a perfect choice. For instance, you can use Compute Engine for the
implementation of customized business logic or in case, you need to run your own storage
system.

How does the pricing model work in GCP cloud?

While working on Google Cloud Platform, the user is charged by Google Compute Engine on the basis of compute instances, network use, and storage. Google Cloud charges virtual machines per second, with a minimum of one minute. The cost of storage is charged on the basis of the amount of data that you store.
The cost of the network is calculated as per the amount of data transferred between the virtual machine instances communicating with each other over the network.

What are the different methods for the authentication of Google Compute Engine API?

This is one of the popular Google Cloud architect interview questions which can be answered as follows. There are different methods for the authentication of Google Compute Engine API:
– Using OAuth 2.0
– Through client library
– Directly with an access token

List some Database services by GCP.

There are many Google cloud database services which helps many enterprises to manage their data.
● Bare Metal Solution is a relational database offering that allows you to migrate, or lift and shift, specialized workloads to Google Cloud.
● Cloud SQL is a fully managed, reliable, and integrated relational database service for MySQL, SQL Server, and PostgreSQL (Postgres). It reduces maintenance costs and ensures business continuity.
● Cloud Spanner
● Cloud Bigtable
● Firestore
● Firebase Realtime Database
● Memorystore
● Google Cloud Partner Services
● For more database products you can refer Google Cloud Databases
● For more data base solutions you can refer Google cloud Database solutions

What are the different Network services by GCP?

Google Cloud provides many networking services and technologies that make it easy to scale and manage your network.
● Hybrid connectivity helps to connect your infrastructure to Google Cloud
● Virtual Private Cloud (VPC) manage networking for your resources
● Cloud DNS is a highly available global domain naming system (DNS) network.
● Service Directory provides a service-centric network solution.
● Cloud Load Balancing
● Cloud CDN
● Cloud Armor
● Cloud NAT
● Network Telemetry
● VPC Service Controls
● Network Intelligence Center
● Network Service Tiers
● For more about Networking products refer Google Cloud Networking

List some Data Analytics service by GCP.

Google Cloud offers various Data Analytics services.
● BigQuery is a multi-cloud data warehouse for business agility that is highly scalable, serverless, and cost effective.
● Looker
● Dataproc is a service for running Apache Spark and Apache Hadoop clusters. It makes open-source data and analytics processing easy, fast, and more secure in the cloud.
● Dataflow
● Pub/Sub
● Cloud Data Fusion
● Data Catalog
● Cloud Composer
● Google Data Studio
● Dataprep
● Cloud Life Sciences enables the life sciences community to manage, process, and transform biomedical data at scale.
● Google Marketing Platform combines your advertising and analytics to help you get better marketing results, deeper insights, and quality customer connections. (It is not an official Google Cloud product and comes under separate terms of service.)
● For Google Cloud analytics services visit Data Analytics

Explain Google BigQuery in Google Cloud Platform

A traditional data warehouse requires its own hardware setup; Google BigQuery serves as a replacement for that hardware. In addition, BigQuery organizes table data into units called datasets.

Explain Auto-scaling in Google cloud computing

Auto-scaling lets you automatically provision and start new instances in Google Cloud without human intervention. Depending on various metrics and load, auto-scaling is triggered.

Describe Hypervisor in Google Cloud Platform

A hypervisor, also known as a VMM (Virtual Machine Monitor), is computer hardware or software used to create and run virtual machines (a virtual machine is also called a guest machine). The hypervisor runs on a host machine.

Define VPC in the Google cloud platform

VPC in Google Cloud Platform provides connectivity from your premises to any region without going over the public internet. VPC connectivity is available for Compute Engine virtual machine instances, App Engine flexible environment instances, Kubernetes Engine clusters, and a few other resources, depending on the project. Multiple VPCs can also be used across numerous projects.


References

Steve Nouri

https://www.edureka.co

https://www.kausalvikash.in

https://www.wisdomjobs.com

https://blog.edugrad.com

https://stackoverflow.com

http://www.ezdev.org

https://www.techbeamers.com

https://www.w3resource.com

https://www.javatpoint.com

https://analyticsindiamag.com

Online Interview Questions

https://www.geeksforgeeks.org

https://www.springpeople.com

https://atraininghub.com

https://www.interviewcake.com

https://www.tutorialspoint.com

https://programmingwithmosh.com

https://www.interviewbit.com

https://www.guru99.com

https://hub.packtpub.com

https://www.dataquest.io

https://www.infoworld.com

Don’t do a connection setup per RPC.

Cache things wherever possible.

Write asynchronous code wherever possible.

Exploit eventual consistency wherever possible. In other words, coordination is expensive, so don't do it unless you have to.

Route your requests sensibly.

Locate processing wherever will result in the best latency. That might mean you need more resources.

Use LIFO queues; they have better tail statistics than FIFO. Queue before load balancing, not after; that way, a small fraction of slow requests is much less likely to stall all the processors. Source: Andrew McGregor

What operating system do most servers use in 2022?

Of the 1500 *NIX servers under my control (a very large fortune 500 company), 90% of them are Linux. We have a small amount of HP-UX and AIX left over running legacy applications, but they are being phased out. Most of the applications we used to run on HP-UX and AIX (SAP, Oracle, you-name-it) now run on Linux. And it’s not just my company, it’s everywhere.

In 2022, the most widely used server operating system is Linux. Source: Bill Thompson

How do you load multiple files in parallel from an Amazon S3 bucket?

By specifying a file prefix of the file names in the COPY command or specifying the list of files to load in a manifest file.
 

How can you manage the amount of provisioned throughput that is used when copying from an Amazon DynamoDB table?

Set the READRATIO parameter in the COPY command to a percentage of unused throughput.
 

What must you do to use client-side encryption with your own encryption keys when using COPY to load data files that were uploaded to Amazon S3?

You must add the master key value to the credentials string with the ENCRYPTED parameter in the COPY command.

DevOps  and SysOps Breaking News – Top Stories – Jobs

  • KubeStellar QuickStart now available
    by /u/andan02 (Everything DevOps) on May 30, 2023 at 5:13 pm

    submitted by /u/andan02 [link] [comments]

  • Do you sometimes also feel like you're too slow?
    by /u/AemonQE (Everything DevOps) on May 30, 2023 at 4:58 pm

    Small rant. 2nd year in devops - working for a company built by devs for devs. I had enough grit to learn to be able to build solutions in Python, Java and Go (or whatever other scripting language) in a decent manner. You throw a problem at me and I fix it. Have an idea? No problem - I'll make it happen. Still. They make me feel like I'm too slow, like I'm not respected because of my ops background - but I think that in reality the tasks I get are novel enough to become slogs and need quite a bit of planning, experimentation and creativity to be finished. And more often than not some help from GPT to simplify and optimize code. Every project is a context change for me and has more often than not never been done in our environment - most of the time using new technologies - and they still get angry that I'm just a tad faster than our junior devs (and I myself am a junior, find the error). Next to that my focus is in stability - theirs is in getting it done yesterday. Doesn't work as expected? Pff... just debug it 100 times till it works together with the devs. Why do it right the first time? The ones that actually think like me are my sysadmin friends. They understand me and my worries. They know that we have to make blood sacrifices to the observability gods (as an example). But for real now, what is going on? I don't have a handful of techs I use for every single project because I'm specialized in doing one thing every day and because my solutions are routine. I don't have any framework I can use as crooks or any mentor to fall back on if the project is novel to everyone. Ok, no, even for the simplest stuff I don't have anyone to ask. I don't even have a teammate and have to handle 30 devs. I even do my own task planning and whatever else you need to do to keep the ball rolling in an efficient manner. Is there someone else here in a similar situation? Any thoughts? submitted by /u/AemonQE [link] [comments]

  • Lost WMI outbound after OS upgrade to 2019
    by /u/Hoping_i_Get_poached (Sysadmin) on May 30, 2023 at 4:45 pm

    Hi, I have a legacy server that I'm migrating PowerShell script jobs away from, but there are a few remaining. After upgrading its 2012 R2 server OS to 2019, WMI commands in my scripts that reach out to other LAN systems started failing with access denied. WinRM works fine. There is a warning message from my app event log, a ton of these several times daily: A provider, [provider name], has been registered in the Windows Management Instrumentation namespace [Name Space] to use the LocalSystem account. This account is privileged and the provider may cause a security violation if it does not correctly impersonate user requests. I think the problem is this DCOM hardening change MS forced--let's assume so. I do plan to move commands over to use CIM eventually, but my question: Is there any way to workaround this? Or is WMI just done... 🙁 Or is there some security upgrade I can make to get it working again, for now? Has anyone else upgraded server OS and lost outbound WMI? submitted by /u/Hoping_i_Get_poached [link] [comments]

  • Galera Cluster newb questions
    by /u/jimboslice_007 (Sysadmin) on May 30, 2023 at 4:23 pm

    I work for a medical group that currently uses token ring replication for mysql servers and things are not in great shape. One option presented by the software vendor was switching to a Galera cluster. I've never tried clustering mysql before. Are there any gotcha's with this sort of setup? Since it's medical, they are concerned about uptime and data integrity, so I just want to make sure I have a good plan for the typical things that can go wrong, so if you have experience with this sort of setup and can point me towards things that I should learn about, I'd really appreciate it. submitted by /u/jimboslice_007 [link] [comments]

  • Anyone Else Catch This?
    by /u/Bluetooth_Sandwich (Sysadmin) on May 30, 2023 at 4:23 pm

    submitted by /u/Bluetooth_Sandwich [link] [comments]

  • Video Conferencing, Elderly Executives, and Unreasonable Expectations
    by /u/Nihlithian (Sysadmin) on May 30, 2023 at 4:05 pm

    Just needed to get something off my chest after receiving an interesting request today. I work for a smaller organization with one main location and a few remote employees scattered around the United States. We rarely have visitors in our main location, so communication with clients is done completely through email and video conferencing. Most of our consultants and executives are well beyond the age of retirement. The only exception is my boss, who is the CTO. We've had this issue where the CEO struggles to handle Teams, Zoom, or Webex calls. Sometimes he unplugs his headset, or mutes himself and thinks his entire computer is broken. It's gotten to the point where I'm called up to his office every time he has a video call to assist him. Naturally, this is becoming extremely disruptive for the projects that I'm working on. The worst thing is when you're working on something time sensitive, and the CEO says he wants you to sit in the conference room during his 1 hour meeting because he's worried something might go wrong. (Something rarely goes wrong once the call starts). Today was the peak of the insanity. I was called into his office where he sat me down and explained that other companies have seamless, error free video conferencing systems. He had just gotten off a call with one of our remote consultants (who is in her late 70s) and she had intermittent connectivity issues, and he needs a solution so that we no longer have issues using Zoom, Webex, or Teams ever again. I told him I would email my boss, as she may know a solution that I don't. She's on vacation until tomorrow, so I just wrote her an email, knowing full well that there really isn't a solution other than requiring consultants to understand the tools of their job. To me, it feels like we're calling the mechanic because the fork lift operator doesn't know how to lift a pallet. The other problem is that the CEO won't really listen to any solution I have. Due to my age (late 20s), when I tell him no, he assumes I just don't know the proper solution due to inexperience. When my boss tells him no, he'll actually believe her as she's older than I am. Have you guys struggled with something like this recently? I figured with the move towards remote work, I feel like I'm not alone when it comes to unreasonable requests like this. submitted by /u/Nihlithian [link] [comments]

  • Advice for new employee in a (so far) toxic work environment
    by /u/burnpitman (Everything DevOps) on May 30, 2023 at 4:03 pm

    Started a new devops job a few weeks ago as a new college grad and it isn't going particularly well. The organization has a pretty restrictive environment with barely any public facing services which means that doing anything on the network requires knowledge of the internal architect to get anything done. Issue is is my counterparts are supposed to be helping me and guiding me through this environment but it's not going well at all. Simple questions are often left with with one word answers that don't explain what I need help with. It's seems as if they are annoyed with me for asking simple things, but how am I supposed to know where the test certificates are or what the authentication is for a server when there is zero documentation. I am often sitting at my desk for hours rerunning the same commands while they watch my command history on my account just to bring it up later on in a meeting or something to have a laugh. Ive been assigned a simple app update as my first solo task and while I know the general steps of redeploying the app with the update, I am hitting every small road block you can think of because of the architecture. I can't get anything answered for me like how should I bring up this database, how do I auth with our repo, how do I access server X, why can't I hit this webpage, etc etc. I came from an internship of great people who genuinely wanted to help me. And this team even seems fine outside of these 3 bozos. Problem is, the rest of the team isnt DevOps but instead SW Engineers they can't help much. I feel like I'm in a place where not knowing is offensive, and the stress just doesn't feel worth it. Of course I moved to a small town where the industry knows each other. Any advice from someone who has maybe been here before? I'm losing my mind submitted by /u/burnpitman [link] [comments]

  • 0auth2 issues with curl script in Python
    by /u/isuckatit1000 (Everything DevOps) on May 30, 2023 at 3:35 pm

    Hi, I'm new to 0auth2 and using it to get creds for REST APIs. I fill in this info and drop the script in my Windows CLI and the script just drops with no value return. I try to run 0auth2 in Postman and I get the below output. Any ideas? {"detail":[{"loc":["body","grant_type"],"msg":"field required","type":"value_error.missing"}]} ​ curl -X POST https://HIDDEN.com \ -H "Accept: application/json" \ -H "API-Token: <INSERT API TOKEN>" \ -u "<INSERT CLIENT ID>:<INSERT CLIENT SECRET>" \ -d "grant_type=client_credentials" submitted by /u/isuckatit1000 [link] [comments]

  • Message Encryption rabbit hole
    by /u/HolyCowEveryNameIsTa (Sysadmin) on May 30, 2023 at 3:16 pm

    We've been using Exchange Online for ~5 years now and have seen message encryption evolve from OME to now Purview. And now it's gotten to the point of just figuring out how and where to configure message encryption is like trying to decipher the riddle of the sphynx. We want to automatically encrypt any message with an attachment and had a rule setup that did this for us, but now it seems to be applying a DLP label(RMS) to the underlying attachments so when recipients open an excel spreadsheet they are met with with "You are not signed in to Office with an account that has permission to open this workbook. You may sign in a new account into Office that has permission or request permission from x@domain.com" I think the issue is with whatever RMS template is being applied to the message is also applying some kind of sensitivity label to the underlying attachment. I can't seem to figure out where the heck I can edit that label or template. The official MS docs seem to indicate that I need to go to the Azure Information Protection panel in Azure to edit these but when I go there it says "Labeling and policy management in the Azure portal reached end-of-life on April 1, 2021" and I need to migrate to Unified Labeling. When I search for AIP unified labeling Microsoft says that will be retired in 2024 and to use Built-in labeling. FFS. I just want to encrypt outbound messages with attachments and not encrypt or apply labels to the underlying attachments. I'm assuming Purview is now smashing all of the RMS/DLP/Encryption products into one but how the hell am I supposed to edit those rules/templates/labels is beyond me. Anyone have similar issues and figure out WTF MS wants from us? submitted by /u/HolyCowEveryNameIsTa [link] [comments]

  • Understanding Metrics, Events, Logs and Traces - Key Pillars of Observability
    by /u/PrathameshSonpatki (Everything DevOps) on May 30, 2023 at 3:01 pm

    https://last9.io/blog/understanding-metrics-events-logs-traces-key-pillars-of-observability/ submitted by /u/PrathameshSonpatki [link] [comments]

  • Windows 10 for Enterprise and Education will reach EOL October 14, 2025. What's your plan?
    by /u/letsgoiowa (Sysadmin) on May 30, 2023 at 3:00 pm

    https://learn.microsoft.com/en-us/lifecycle/announcements/windows-10-22h2-end-of-support-update Unless I am misunderstanding this (please let me know if I am), it appears there will be no more security or feature updates past 10/14/2025. This poses some interesting problems if true:
    1. We have many devices that are not due to be replaced until after that date. Those machines will either need an out-of-band replacement or will need to somehow survive a W11 upgrade.
    2. We will need to be installing W11 on our new devices ASAP to minimize the amount of chaos in the future.
    3. We will need to validate W11 for our images and compatibility. Since it's basically still 10 under the hood, everything I've heard says it's about 99% fine.
    Any tips for validating an OS upgrade like this? Any particular issues you've run into with W11 yet that we need to be aware of? submitted by /u/letsgoiowa [link] [comments]

  • How to prepare for DevOps Engineer Technical Interview and scenario based questions?
    by /u/aditya_dhopade (Everything DevOps) on May 30, 2023 at 2:55 pm

    I have 2 years of experience (currently on a career gap) and am looking to get into DevOps. I've started learning AWS, Docker, Kubernetes, and shell scripting, but the technical interviews seem overwhelming and focused on troubleshooting scenarios. How do I effectively prepare for those? What are the tools one must know before entering the DevOps scene? submitted by /u/aditya_dhopade [link] [comments]

  • 'eksctl' vs. 'aws eks'
    by /u/Omni-Fitness (Everything DevOps) on May 30, 2023 at 2:41 pm

    I see on https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html that you can either use the GUI, eksctl, or the AWS CLI to manage your cluster and interactions. eksctl looks neat, but I imagine I will also need to use the normal aws eks style because of other AWS command-line operations (e.g. aws sts). Which tool would you recommend getting familiar with, and why? submitted by /u/Omni-Fitness [link] [comments]
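    For a feel of the difference, here is a minimal sketch with made-up names (cluster demo, region us-east-1, placeholder role ARN and subnet IDs). eksctl wraps cluster, VPC and node-group creation into a single command, while aws eks exposes the lower-level API calls but shares credentials and profiles with the rest of the AWS CLI (aws sts, etc.).

        # eksctl: one command provisions the control plane, a dedicated VPC and a node group.
        eksctl create cluster --name demo --region us-east-1 --nodes 2

        # aws eks: control plane only; IAM role, networking and node groups are separate steps.
        aws eks create-cluster --name demo --region us-east-1 \
          --role-arn arn:aws:iam::123456789012:role/eks-cluster-role \
          --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb
        aws eks update-kubeconfig --name demo --region us-east-1

    In practice the two are not mutually exclusive: a common split is eksctl (or Terraform) for cluster lifecycle and the aws CLI for day-to-day API calls.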

  • Paxton Net2 /Entry issue
    by /u/ImpeccableAnnoyance (Sysadmin) on May 30, 2023 at 2:36 pm

    I work in a school with 6 Paxton ACUs, a Paxton Entry panel on the gate, and a standard monitor connected to one of the ACUs in reception. All worked well when we were on a flat LAN. We've moved to a managed Cisco Meraki network and, to begin with, the only issue was a delay in the image appearing on the monitor after someone called from the entry panel. Now we get nothing at all in terms of audio and video: we get a placeholder image and are able to open the gate for the visitor, but no communication. I've moved the panel to the same switch as the ACU/monitor, but no difference. The IPs on the ACUs are all reserved IPs on the correct VLAN and communicate fine with the server for normal door functions. The server has an IP on the 'Door access' VLAN, but for some reason the hostname entry in the Paxton Access registry keys always has one of the server's other, non-door-access VLAN IPs allocated. Everything else works fine. Does the image transmission go through the server, or do the panel and monitor connect directly to each other? If anyone has any ideas of what the issue could be, I'd appreciate any help. submitted by /u/ImpeccableAnnoyance [link] [comments]

  • Everyone is an "engineer"
    by /u/whole_sum (Sysadmin) on May 30, 2023 at 2:27 pm

    Looking through my email I got a recruiter trying to find a "Service Delivery Engineer". Now what the hell would that be? I don't know. According to Google: "The role exists to ensure that the company consistently delivers, and the customer consistently receives, excellent service and support." Sounds a lot like a customer service rep to me. What is up with this trend of calling every role an engineer? What's next, the "Service Delivery Architect"? I get that it's supposedly used to distinguish expertise levels, but that can be done without calling everything an engineer (jr/sr, level 1/2/3, etc.). It's just dumb IMO, used to fluff job titles, give people over-inflated opinions of themselves, and add to the bullshit and obscurity in the job market. Edit: Technically, my job title also has "engineer" in it... but alas, I'm not really an engineer. Configuring and deploying appliances/platforms isn't really engineering, I don't think. One could argue that designing and building things is the only requirement to be an engineer, but in that case most people would be a very "high level" abstraction of what an engineer used to be, using pre-made tools or putting pre-constructed "pieces" together, whereas engineers create those tools, or new things out of the "lowest level" raw materials/components: concrete/mortar, PCB/transistor, software via your own packages/vanilla code... ya know. /rant submitted by /u/whole_sum [link] [comments]

  • "Changing ISP" Checklist
    by /u/IndyPilot80 (Sysadmin) on May 30, 2023 at 2:17 pm

    Looks like we are on our last leg with Comcast and will be moving to something else. I'm getting a checklist together. Here is what I have so far:
    1. Check external services - the only one we have is our VOIP system. No VPN.
    2. Check to see if any 3rd party services need our public IP.
    3. O365 (non-hybrid) - update the Mail Flow > Connector IP address.
    4. Check SPF records.
    5. Check internal DNS to make sure nothing is pointing to the old IP.
    6. Check firewall rules.
    Am I missing anything? submitted by /u/IndyPilot80 [link] [comments]
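    A few quick DNS and egress checks make it easy to verify the cutover afterwards; a sketch, with example.com and mail.example.com standing in for your own domain and hosts, and ifconfig.me as one of many what-is-my-IP services:

        # Does the SPF record still authorize the right senders after the change?
        dig +short TXT example.com | grep -i spf
        # Have any externally published A records been updated to the new address space?
        dig +short mail.example.com
        # Which public IP are we actually egressing from now?
        curl -s https://ifconfig.me && echo

    Running the same commands before the switch gives a baseline to compare against.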

  • Telco Technician looking to move into Network Engineering and similar fields.
    by /u/AllDaGouda (Sysadmin) on May 30, 2023 at 1:57 pm

    Hello everyone, I am currently a telecommunications technician at a data center and have been trying to move my career forward. I am mostly looking for ideas, tips and tricks, and/or a curriculum on how best to make the move into network engineering or a similar field. I love the troubleshooting part of my job, but the long drives to my workplace and the physical wear and tear on my body are starting to beat me down. I have extremely strong knowledge of anything layer 1 and have been working on passing a JNCIA cert, though I feel that may be a waste of time. I am toying with the idea of reaching out to a recruiter to see what they could find me and whether any companies are willing to take on someone like me to train up. I was originally a fiber splicer technician working ISP and OSP until I got my current job, but I feel my current title is very undescriptive of what is expected of me. I work closely with our in-house cloud, network engineering, and systems engineering teams and have picked up small bits of knowledge along the way. I've attempted to move into that department, but the management of my own department has a hard time working well with others and seems to be blocking me. I would love work that can be WFH at least part of the time, as I now have a one-year-old and want to be around more than I currently am (I work 5 days most of the time, 10 hours, with 4-5 hours of travel depending on traffic). Any and all advice would be welcome. Thank you all so much. TL;DR: Telecommunications technician with layer 1 experience looking to move into the networking field. Looking for thoughts and opinions on how to make the move. submitted by /u/AllDaGouda [link] [comments]

  • SonarQube 9.9 LTS error
    by /u/Maleficent-Pain2765 (Everything DevOps) on May 30, 2023 at 1:27 pm

    Hi team, we are testing SonarQube 9.9 LTS and encountered the error message below in the Jenkins console. Even when I try to point the Java home at 11 I get the same issue, and if I echo java -version in the Jenkins console output it says openjdk version "11.0.16" 2022-07-19.

        mvn sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.host.url=http://xx.xx.xx.xx:9000 -Dsonar.java.jdkHome=/data/jenkins_home/tools/jdk-11

        failed: An API incompatibility was encountered while executing org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar: java.lang.UnsupportedClassVersionError: org/sonar/batch/bootstrapper/EnvironmentInformation has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0

    submitted by /u/Maleficent-Pain2765 [link] [comments]
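    The UnsupportedClassVersionError means the scanner classes (built for Java 11, class file version 55.0) are being loaded by a Java 8 runtime (which only accepts up to 52.0), which suggests Maven itself is still running on Java 8 even though a Java 11 binary happens to be on the PATH; as far as I can tell, -Dsonar.java.jdkHome only affects how the analyzer resolves the JDK of the project being analyzed, not the JVM Maven runs on. A sketch of forcing the Jenkins build step onto the JDK 11 install mentioned in the post (adjust the path to your agent):

        # Point Maven's own JVM at JDK 11 before invoking the Sonar scanner.
        export JAVA_HOME=/data/jenkins_home/tools/jdk-11
        export PATH="$JAVA_HOME/bin:$PATH"
        mvn -version   # should now report Java 11
        mvn sonar:sonar \
          -Dsonar.qualitygate.wait=true \
          -Dsonar.host.url=http://xx.xx.xx.xx:9000

    The same effect can be had by selecting a JDK 11 tool for the job in the Jenkins job or pipeline configuration instead of exporting variables in a shell step.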

  • How to handle office-wide OS changes?
    by /u/Altus- (Sysadmin) on May 30, 2023 at 1:14 pm

    Hi everyone, I am a solo sysadmin for roughly 60 users across two sites and I am in the process of migrating all workstations from MacOS to Windows. Due to budget constraints, our migration is slow: we have ~80 workstations and started replacing one every month in July of last year. The reason this is relevant is that we are going to have a mix of MacOS and Windows for a while and processes can't just be switched over. Here are a few questions that I have, and any advice would be greatly appreciated:
    1. Because the office is primarily Mac-based, domain administration tools (AD, GPO, etc.) have never really played a major role except for email (on-prem Exchange server). This gives me the perfect opportunity to rework the domain setup to my liking regarding policies and organization. How have you approached this in the past?
    2. Some of our users have only ever worked on a Mac, so they would need training right from the basics on working with Windows. How have you handled user training on the new OS? Are there any good user guides out there that cover Windows 11 from the basics and would be easy to navigate for tech-illiterate users?
    3. Due to the sometimes huge process changes, I find that a lot of users will try to tweak the new processes to emulate their MacOS experience, often making their Windows experience a lot more complicated and increasing frustration. How have you helped users adopt new processes and see that the new processes, although different, are more efficient and will make it easier for them to do their job?
    I know this is a pretty lengthy post, but I really appreciate any responses to my questions above.
    EDIT 1: Workstations are currently being purchased at a rate of 1 per month to ensure that we have enough room in the budget for any emergency expenditures if needed. At our fiscal year-end, we then purchase as many workstations as possible depending on any surplus that we have.
    EDIT 2: I greatly appreciate all the input provided by everyone in the comments, will take everything said to heart, and will continue to try to push my org in the right direction. I am changing the flair of this post to "solved". However, I find that I've been repeating myself in the comments, so I'm adding the following statement for clarity: there is not going to be a change in our core infrastructure regarding on-prem vs cloud. This is due to a number of reasons beyond our organization's control, with budget being the primary factor. This is an industry-wide problem in our province coming down directly from the provincial government, and while change is coming, it's very slow and we most likely won't see major benefits of these changes for the next 2-3 years. Please understand that if I could change things I would, but I can't, and I love everything else about my job, so I am not looking to switch anytime soon. submitted by /u/Altus- [link] [comments]

  • The best way to arrange RAID on 24 disks in new storage?
    by /u/Koksikicai2i2737632 (Sysadmin) on May 30, 2023 at 12:50 pm

    New storage: an MSA 2060 with 24 disks of 1.8 TB each. Since 5 disks are recommended for RAID 5 and 6 disks for RAID 6, what would be the best way to set this up for performance? Maybe groups of 5 disks in RAID 5, and then put the leftover 4 disks in RAID 10 or RAID 6? Or maybe 4 x 6 disks in RAID 6? submitted by /u/Koksikicai2i2737632 [link] [comments]
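    For weighing the layouts against each other, the usable-capacity arithmetic is simple; a rough sketch using bc (it ignores hot spares, formatting overhead, and the MSA 2060's own pool and disk-group rules):

        # Approximate usable capacity for 24 x 1.8 TB disks under three layouts.
        echo "4 x RAID 6 of 6 disks : $(echo 'scale=1; 4*(6-2)*1.8' | bc) TB usable, 2-disk fault tolerance per group"
        echo "2 x RAID 6 of 12 disks: $(echo 'scale=1; 2*(12-2)*1.8' | bc) TB usable, bigger groups, longer rebuilds"
        echo "RAID 10 across 24     : $(echo 'scale=1; (24/2)*1.8' | bc) TB usable, typically the best random-write performance"

    So 4 x 6-disk RAID 6 groups give roughly 28.8 TB usable, 2 x 12-disk RAID 6 about 36 TB, and a 24-disk RAID 10 about 21.6 TB, which is the capacity-versus-write-performance trade-off the question is really about.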
