DevOps Interview Questions, Answers, and Scripts
Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google, and Microsoft).
A canary deployment, or canary release, allows you to roll out your features to only a subset of users as an initial test to make sure nothing else in your system broke. The initial steps for implementing canary deployment are: 1. create two clones of the production environment, 2. have a load balancer that initially sends all traffic to one version, 3. create new functionality in the other version. When you deploy the new software version, you shift some percentage, say 10%, of your user base to the new version while maintaining 90% of users on the old version. If that 10% reports no errors, you can gradually roll it out to more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will never even have seen the problem. Canary deployment benefits include zero downtime, easy rollout, and quick rollback, plus the added safety of the gradual rollout process. It also has some drawbacks: the expense of maintaining multiple server instances, and the difficult clone-or-don't-clone database decision.
Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.
Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live, and Green is idle. As you prepare a new version of your model, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the model in Green, you switch the router, so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle. This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
There are some steps to follow: • Create a checklist • Create a release branch • Bump the version • Merge the release branch to master & tag it • Use a pull request to merge the release branch • Deploy master to the production environment • Merge back into develop & delete the release branch • Generate the change log • Communicate with stakeholders • Groom the issue tracker
How to automate the whole build and release process?
• Check out a set of source code files. • Compile the code and report on progress along the way. • Run automated unit tests against successful compiles. • Create an installer. • Publish the installer to a download site, and notify teams that the installer is available. • Run the installer to create an installed executable. • Run automated tests against the executable. • Report the results of the tests. • Launch a subordinate project to update standard libraries. • Promote executables and other files to QA for further testing. • Deploy finished releases to production environments, such as web servers or CD manufacturing. This whole process can be automated with Jenkins by creating jobs for each of these stages.
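The stages above can be sketched as a minimal shell pipeline; the stage bodies here are stubs (echo placeholders), and in a real Jenkins job each function would call your actual build tool (the mvn/scp commands named in the comments are assumptions):

```shell
#!/bin/sh
# Minimal sketch of a build-and-release pipeline. Each stage is a stub;
# replace the echo with a real command for your project.
set -e   # abort the pipeline on the first failing stage

checkout()  { echo "checkout: sources fetched"; }         # e.g. git clone ...
compile()   { echo "compile: ok"; }                       # e.g. mvn compile
unit_test() { echo "unit tests: passed"; }                # e.g. mvn test
package()   { echo "package: installer created"; }        # e.g. mvn package
publish()   { echo "publish: uploaded, teams notified"; } # e.g. scp + mail

run_pipeline() {
  checkout && compile && unit_test && package && publish
}

run_pipeline
```

Jenkins models the same flow with one job (or pipeline stage) per function, chained so that a failure stops the promotion.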
Have you ever participated in prod deployments? If yes, what is the procedure?
• Preparation & planning: what kind of system/technology was supposed to run on what kind of machine • The specifications regarding the clustering of systems • How all these stand-alone boxes were going to talk to each other in a foolproof manner • The production setup should be documented to bits: it needs to be neat, foolproof, and understandable • It should have all system configurations, IP addresses, system specifications, & installation instructions • It needs to be updated as & when any change is made to the production environment of the system
Devops Tools and Concepts
What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?
From the name DevOps, it's very clear that it's a collaboration of Development as well as Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools which helps with the automation of the whole infrastructure. DevOps is basically an implementation of agile methodology on the development side as well as the operations side.
We need DevOps to fulfil the need to deliver more, faster, and better applications to meet the ever-growing demands of users. DevOps helps deployment happen really fast compared to any other traditional tools.
Can we consider DevOps as an Agile methodology?
Of course we can! The only difference between agile methodology and DevOps is that agile methodology is implemented only for the development section, while DevOps implements agility on both the development and the operations sections.
What are some of the most popular DevOps tools?
Selenium, Puppet, Chef, Git, Jenkins, Ansible, Docker
What is the job Of HTTP REST API in DevOps?
DevOps centers on automating your infrastructure and promoting changes through a pipeline of stages: every CI/CD pipeline has stages like build, test, sanity test, UAT, and deployment to the production environment. At each stage different tools are used and a different technology stack is present, so there needs to be a way to integrate the different tools to complete an end-to-end toolchain. That is where HTTP REST APIs come in: each tool communicates with the other tools using its API, and users can also use an SDK to interact with the tools, such as boto3 for Python, which talks to the AWS APIs for automation based on events. Nowadays it is no longer batch processing; pipelines are mostly event-driven.
What is Scrum?
Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration typically lasts two weeks. Scrum consists of three roles: product owner, scrum master, and team.
What are microservices, and how do they enable efficient DevOps practices?
In conventional architecture, each application is a monolith: it is developed by a group of developers and deployed as a single application on many machines, exposed to the outside world using load balancers. Microservices mean splitting your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. By splitting it up, developers can also be formed into small groups, and each piece of the application may follow different guidelines for an efficient development platform, since agile development can be phased per service. Each service uses REST APIs (or message queues) to communicate with the other services. So the build and release of a faulty version of one service does not affect the whole architecture; instead, only some functionality is lost. That is what enables efficient and faster CI/CD pipelines and DevOps practices.
What is Continuous Delivery?
Continuous Delivery is an extension of continuous integration, and it primarily serves to get the features that developers are building out to end users as soon as possible. During this process the code passes through several stages, such as QA and staging, before delivery to the PRODUCTION system.
Continuous delivery is a software development practice whereby code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment, production environment, or both after the build stage.
Why Automate?
Developers/administrators usually must provision their infrastructure manually. Rather than relying on manually steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as code (IaC) treats these configuration files as software code. You can use these files to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as Code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.
What is Puppet?
Puppet is a configuration management tool; it is used to automate administration tasks.
What is Configuration Management?
Configuration management is a systems engineering process. Applied over the life cycle of a system, configuration management provides visibility and control of its performance and of its functional and physical attributes, recording their status in support of change management.
Software Configuration Management Features are:
• Enforcement • Cooperating Enablement • Version Control Friendly • Enable Change Control Processes
What is Vagrant?
Vagrant uses VirtualBox as the hypervisor for virtual environments, and it currently also supports KVM (Kernel-based Virtual Machine). Vagrant is a tool for creating and managing environments for testing and developing software.
What is a PTR record?
A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.
What testing is necessary to ensure a new service is ready for production?
Continuous testing
What is Continuous Testing?
It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.
Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.
How does HTTP work?
The HTTP protocol works in a client-server model, like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.
What is IaC? How you will achieve this?
Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning the DevOps team uses for source code. This can be achieved by using tools such as Chef, Puppet, Ansible, CloudFormation, etc.
Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration.
What are patterns and anti-patterns of software delivery and deployment?
What are Microservices?
Microservices are an architectural and organizational approach that is composed of small independent services optimized for DevOps. Microservices are: • Small • Decoupled • Owned by self-contained teams
Version Control
What is a version control system?
Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work. Some of the features of a VCS are as follows: • Allows developers to work simultaneously • Does not allow overwriting each other's changes • Maintains the history of every version. There are two types of version control systems: 1. Centralized Version Control Systems, e.g. SVN 2. Distributed/Decentralized Version Control Systems, e.g. Git, Bitbucket
What is Source Control?
An important aspect of CI is the code. To ensure that you have the highest quality of code, it is important to have source control. Source control is the practice of tracking and managing changes to code. Source control management (SCM) systems provide a running history of code development and help to resolve conflicts when merging contributions from multiple sources.
Source control basics: Whether you are writing a simple application on your own or collaborating on a large software development project as part of a team, source control is a vital component of the development process. With source code management, you can track your code changes, see a revision history for your code, and revert to previous versions of a project when needed. By using source code management systems, you can
• Collaborate on code with your team.
• Isolate your work until it is ready.
• Quickly troubleshoot issues by identifying who made changes and what the changes were.
Source code management systems help streamline the development process and provide a centralized source for all your code.
What is Git and explain the difference between Git and SVN?
Git is a source code management (SCM) tool which handles small as well as large projects with efficiency. It is basically used to store our repositories on remote servers such as GitHub.
GIT vs SVN:
• Git is a decentralized version control tool; SVN is a centralized version control tool.
• Git keeps the local repo and the full history of the whole project on every developer's hard drive, so if there is a server outage you can easily recover from a teammate's local Git repo; SVN relies only on the central server to store all versions of the project files.
• Git push and pull operations are fast; SVN push and pull operations are slower compared to Git.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• In Git, client nodes can share the entire repository on their local systems; in SVN, version history is stored only in the server-side repository.
• Git commits can be done offline too; SVN commits can be done only online.
• In SVN, work is shared automatically by commit; in Git, nothing is shared automatically until you push.
Describe branching strategies?
Feature branching This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.
Task branching In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.
Release branching Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have progressed since the release was initiated earlier.
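The release-branch flow described above maps onto git commands roughly as follows; this is a runnable sketch in a throwaway repository (repo, branch, and version names are illustrative, and git >= 2.28 is assumed for init -b):

```shell
# Throwaway demo of the release-branch flow; names are illustrative.
cd "$(mktemp -d)" && git init -q -b master .
git config user.email you@example.com && git config user.name you
echo app > app.txt && git add app.txt && git commit -qm "initial"
git checkout -qb develop                  # ongoing feature work lives here

# Cut the release branch once develop has enough features:
git checkout -qb release/1.2.0 develop
echo fix >> app.txt && git commit -qam "bug fix"   # only fixes go here

# Ship: merge into master and tag it with the version number:
git checkout -q master
git merge -q --no-ff -m "release 1.2.0" release/1.2.0
git tag -a v1.2.0 -m "Release 1.2.0"

# Merge back into develop (which may have moved on) and clean up:
git checkout -q develop
git merge -q --no-ff -m "merge release back" release/1.2.0
git branch -d release/1.2.0
```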
What are Pull requests?
Pull requests are a common way for developers to notify and review each other’s work before it is merged into common code branches. They provide a user-friendly web interface for discussing proposed changes before integrating them into the official project. If there are any problems with the proposed changes, these can be discussed and the source code tweaked to satisfy an organization’s coding requirements. Pull requests go beyond simple developer notifications by enabling full discussions to be managed within the repository construct rather than making you rely on email trails.
Linux
What are the default file permissions for a file, and how can I modify them?
Default file permissions are: rw-r--r-- (644). A new file starts from a base mode of 666, and the umask bits are removed from it; the usual default umask of 022 yields 644. To change the defaults, set a different umask with the umask command, e.g. umask 077 makes new files mode 600 (readable and writable only by the owner).
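A quick demonstration of how umask shapes the mode of newly created files (the file names are throwaway examples):

```shell
# New files start from mode 666 and the umask bits are removed.
cd "$(mktemp -d)"         # work in a scratch directory
umask 022                 # common default: 666 - 022 -> 644
touch default_file
umask 077                 # stricter: 666 - 077 -> 600 (owner-only)
touch private_file
ls -l default_file private_file   # -rw-r--r-- vs -rw-------
```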
What is a kernel?
A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.
What is difference between grep -i and grep -v?
-i ignores case when matching; -v inverts the match, selecting the lines that do not contain the value. Example:
ls | grep -i docker
Dockerfile docker.tar.gz
ls | grep -v docker
Desktop Dockerfile Documents Downloads
You can't see anything named docker.tar.gz, because -v excluded every line containing the lowercase string "docker".
How can you allocate a particular amount of space to a file?
This is generally used to create swap space on a server. Let's say on the machine below I have to create 1 GB of swap space; then: dd if=/dev/zero of=/swapfile1 bs=1G count=1
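The dd command only allocates the file; the full sequence to turn it into active swap also needs mkswap and swapon, which require root and are therefore shown as comments. A small runnable demo of the allocation step, using a 16 MB file at an illustrative temp path:

```shell
# Demo with a small 16 MB file (a real swap file would be 1 GB or more).
swapfile=$(mktemp)                        # illustrative path
dd if=/dev/zero of="$swapfile" bs=1M count=16 2>/dev/null
chmod 600 "$swapfile"                     # swap files must not be world-readable
ls -lh "$swapfile"                        # confirm the allocated size
# As root, you would then run:
#   mkswap /swapfile1    # write the swap signature
#   swapon /swapfile1    # activate the swap space
#   swapon --show        # verify it is in use
```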
What is concept of sudo in linux?
Sudo(superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.
What are the checks to be done when a Linux build server become suddenly slow?
Perform a check on the following items: 1. Application-level troubleshooting: check the various log files (application server logs, WebLogic logs, web server logs, application log file, HTTP logs) to find whether server receive or response times show any slowness, and check for any memory leaks in the applications. 2. System-level troubleshooting: check disk space, RAM, and I/O read/write issues. 3. Dependent-services troubleshooting: check whether there are any issues with the network, antivirus, firewall, or SMTP server response time.
Jenkins
What is Jenkins?
Jenkins is an open-source continuous integration tool written in Java. It keeps track of the version control system and initiates and monitors a build if any changes occur. It monitors the whole process and provides reports and notifications to alert the concerned team.
What is the difference between Maven, Ant and Jenkins?
Maven and Ant are Build Technologies whereas Jenkins is a continuous integration(CI/CD) tool
What is continuous integration?
When multiple developers or teams are working on different segments of the same web application, we need to perform an integration test by integrating all the modules. To do that, an automated process is run against each piece of code on a daily basis so that all your code gets tested. This whole process is termed continuous integration.
Continuous integration is a software development practice whereby developers regularly merge their code changes into a central repository, after which automated builds and tests are run.
The microservices architecture is a design approach to build a single application as a set of small services.
What are the advantages of Jenkins?
• Bug tracking is easy at an early stage in the development environment. • Provides a very large number of plugins. • Iterative improvement to the code; code is basically divided into small sprints. • Build failures are caught at the integration stage. • For each code commit, an automatic build report notification is generated. • To notify developers about build success or failure, it can be integrated with an LDAP mail server. • Achieves continuous integration, agile development, and a test-driven development environment. • With simple steps, a Maven release project can also be automated.
I have 50 jobs on the Jenkins dashboard, and I want to build all the jobs at a time.
In Jenkins there is a plugin called "Build after other projects are built". We can provide job names there, and if one parent job runs, it will automatically run all the other jobs. Or we can use pipeline jobs.
How can I integrate all the tools with Jenkins?
Navigate to Manage Jenkins and then Global Tool Configuration; there you provide all the details, such as the Git URL, Java version, Maven version, path, etc.
How to install Jenkins via Docker?
The steps are: • Open up a terminal window. • Download the jenkinsci/blueocean image & run it as a container in Docker using the following docker run command:
• docker run \ -u root \ --rm \ -d \ -p 8080:8080 \ -p 50000:50000 \ -v jenkinsdata:/var/jenkins_home \ -v /var/run/docker.sock:/var/run/docker.sock \ jenkinsci/blueocean • Proceed to the post-installation setup wizard • Accessing the Jenkins/Blue Ocean Docker container:
docker exec -it jenkins-blueocean bash • Accessing the Jenkins console log through Docker logs:
docker logs <docker-container-name> • Accessing the Jenkins home directory: docker exec -it <docker-container-name> bash
Bash – Shell scripting
Write a shell script to add two numbers
echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=`expr $a + $b`
echo "$a + $b = $c"
How to get a file that consists of last 10 lines of the some other file?
tail -10 file1 > file2 (note: do not redirect into the same file you are reading; the redirection truncates it before tail can read it)
How to check the exit status of the commands?
echo $?
How to get the information from file which consists of the word “GangBoard”?
grep “GangBoard” filename
How to search the files with the name of “GangBoard”?
find / -type f -name “*GangBoard*”
Write a shell script to print only prime numbers?
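No answer is given above; a minimal POSIX-shell sketch using trial division could look like this (the function name and default limit are arbitrary choices):

```shell
#!/bin/sh
# Print every prime from 2 up to the given limit (default 50).
primes() {
  limit=${1:-50}
  n=2
  while [ "$n" -le "$limit" ]; do
    is_prime=1
    d=2
    while [ $((d * d)) -le "$n" ]; do         # test divisors up to sqrt(n)
      [ $((n % d)) -eq 0 ] && { is_prime=0; break; }
      d=$((d + 1))
    done
    [ "$is_prime" -eq 1 ] && echo "$n"
    n=$((n + 1))
  done
}

primes 20    # prints 2 3 5 7 11 13 17 19, one per line
```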
How to pass the parameters to the script and how can I get those parameters?
Scriptname.sh parameter1 parameter2 — inside the script, use $1, $2, … to get individual parameters and $* to get all the parameters.
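A small sketch of how the positional parameters arrive inside the script; here `set --` simulates the script having been invoked as `scriptname.sh hello world` (the argument values are made up):

```shell
#!/bin/sh
# Demonstrate positional parameters without needing a separate script file.
set -- hello world      # simulates: scriptname.sh hello world
echo "first:  $1"       # first parameter
echo "second: $2"       # second parameter
echo "all:    $*"       # all parameters as one string
echo "count:  $#"       # number of parameters
```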
Monitoring – Refactoring
My application is not coming up for some reason. How can you bring it up?
We need to follow these steps: • Check the network connection • Check whether the web server is receiving users' requests • Check the logs • Check the process IDs to see whether the services are running or not • Check whether the application server is receiving users' requests (check the application server logs and processes) • Check whether a network-level 'connection reset' is happening somewhere.
What is multifactor authentication? What is the use of it?
Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.
• Security for every enterprise user — end & privileged users, internal and external • Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers, privilege elevation and more • Reduce cost & complexity with an integrated identity platform
I want to copy the artifacts from one location to another location in cloud. How?
Create two S3 buckets, one to use as the source, and the other to use as the destination and then create policies.
What are the features of Ansible?
• Agentless; it doesn't require any extra packages/daemons to be installed • Very low overhead • Good performance • Idempotent • Very easy to learn • Declarative, not procedural
What’s the use of Ansible?
Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let’s say we want to deploy one application in 100’s of nodes by just executing one command, then Ansible is the one actually coming into the picture but should have some knowledge on Ansible script to understand or execute the same.
What are the Pros and Cons of Ansible?
Pros: 1. Open source 2. Agentless 3. Improved efficiency, reduced cost 4. Less maintenance 5. Easy-to-understand YAML files. Cons: 1. Underdeveloped GUI with limited features 2. Increased focus on orchestration over configuration management
What is the difference among chef, puppet and ansible?
Chef vs Puppet vs Ansible:
• Interoperability: Chef works only on Linux/Unix; Puppet works only on Linux/Unix; Ansible supports Windows clients, but the control server should be Linux/Unix.
• Configuration language: Chef uses Ruby; Puppet uses the Puppet DSL; Ansible uses YAML (and is written in Python).
• Availability: Chef uses a primary server and a backup server; Puppet uses a multi-master architecture; Ansible uses a single active node.
How to access variable names in Ansible?
We can access and add variables of other hosts using the hostvars method.
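For instance, a hedged playbook sketch (the group `webservers`, the host name `db01`, and the fact `ansible_host` are illustrative, not from the original text):

```yaml
# Access a variable/fact that belongs to another host in the inventory.
- hosts: webservers
  tasks:
    - name: Show the DB server's address
      debug:
        msg: "DB lives at {{ hostvars['db01']['ansible_host'] }}"
```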
What is Docker?
Docker is a containerization technology that packages your application and all its dependencies together in the form of containers to ensure that your application works seamlessly in any environment.
What is Docker image?
Docker image is the source of Docker container. Or in other words, Docker images are used to create containers.
What is a Docker Container?
Docker Container is the running instance of Docker Image
How to stop and restart the Docker container?
To stop the container: docker stop container ID Now to restart the Docker container: docker restart container ID
What platforms does Docker run on?
Docker runs on only Linux and Cloud platforms: • Ubuntu 12.04 LTS+ • Fedora 20+ • RHEL 6.5+ • CentOS 6+ • Gentoo • ArchLinux • openSUSE 12.3+ • CRUX 3.0+
Cloud: • Amazon EC2 • Google Compute Engine • Microsoft Azure • Rackspace
Note that Docker does not run natively on Windows or Mac for production, as there is no support; you can, however, use it for testing purposes, even on Windows.
What are the tools used for docker networking?
For Docker networking we generally use Kubernetes and Docker Swarm.
What is docker compose?
Let's say you want to run multiple Docker containers; then you create a Docker Compose file and type the command docker-compose up. It will run all the containers mentioned in the compose file.
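A minimal docker-compose.yml illustrating the idea; the service names, images, and port mapping are examples, not from the original text:

```yaml
# docker-compose.yml -- 'docker-compose up' starts both containers.
version: "3"
services:
  web:
    image: nginx:alpine      # example web container
    ports:
      - "8080:80"            # host port 8080 -> container port 80
  cache:
    image: redis:alpine      # example second container
```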
How to deploy docker container to aws?
Amazon provides a service called Amazon Elastic Container Service (ECS); by using it to create and configure a task definition and services, we can launch applications.
What is the fundamental disadvantage of Docker containers?
Data inside a container lives only as long as the container: once a running container is destroyed, you cannot recover any data inside it; the data is lost forever. However, persistent storage for data inside containers is possible using volumes mounted to an external source such as the host machine or any NFS driver.
What are Docker Engine and Docker Compose?
Docker Engine contacts the Docker daemon on the machine and creates the runtime environment and process for any container; Docker Compose links several containers to form a stack, used for creating application stacks like LAMP, WAMP, and XAMPP.
In what different modes can a container be run?
A Docker container can be run in two modes. Attached: it runs in the foreground of the system you are running on, and gives a terminal inside the container when the -t option is used with it; every log is redirected to the stdout screen. Detached: this mode is typically used in production, where the container runs as a background process and all output inside the container is redirected to log files under /var/lib/docker/containers/<container-id>/, which can be viewed with the docker logs command.
What will the output of the docker inspect command be?
docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container on the Docker virtual bridge, volume mount information, and other host- or container-specific information such as the underlying storage driver and log driver in use. Usage: docker inspect [OPTIONS] NAME|ID [NAME|ID...] Options: • --format, -f: format the output using the given Go template • --size, -s: display total file sizes if the type is container • --type: return JSON for a specified type
What is Docker Swarm?
A group of machines running Docker Engine can be clustered and maintained as a single system, with resources shared by the containers; the Docker Swarm master schedules Docker containers onto any of the machines in the cluster according to resource availability. docker swarm init can be used to initialize a swarm cluster, and docker swarm join, run on a client with the master's IP, joins that node into the swarm cluster.
What are Docker volumes, and what sort of volume should be used to achieve persistent storage?
Docker volumes are filesystem mount points created by the user for a container, and a volume can be used by multiple containers. Different sorts of volume mounts are available: empty dir, host (bind) mounts, AWS-backed EBS volumes, Azure volumes, Google Cloud volumes, or even NFS and CIFS filesystems. A volume should be mounted to an external store to achieve persistent storage, because files inside a container last only as long as the container exists; if the container is deleted, the data is lost.
How do you version-control Docker images?
Docker images can be version-controlled using tags: you can assign a tag to any image using the docker tag <image-id> command. And if you push to a Docker Hub repository without tagging, the default tag latest is assigned; even if an image tagged latest already exists, the tag is reassigned to the most recently pushed image.
What is difference between docker image and docker container?
Docker image is a readonly template that contains the instructions for a container to start. Docker container is a runnable instance of a docker image.
What is Application Containerization?
It is a process of OS Level virtualization technique used to deploy the application without launching the entire VM for each application where multiple isolated applications or services can access the same Host and run on the same OS.
What does a commit object contain?
A commit object contains the following components: • a set of files, representing the state of the project at a given point in time • references to parent commit objects • an SHA-1 name, a 40-character string that uniquely identifies the commit object (also called the hash).
Explain the difference between git pull and git fetch?
The git pull command basically pulls any new changes or commits from a branch of your central repository and updates your target branch in your local repository. Git fetch is used for the same purpose, but it is slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want these changes reflected in the target branch, git fetch must be followed by a git merge; the target branch is only updated after merging the target branch and the fetched branch. To make it easy to remember, use the equation: git pull = git fetch + git merge
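The equation can be verified end to end with a throwaway pair of repositories (paths and names are illustrative; assumes git >= 2.28 for init -b):

```shell
# Throwaway demo: a 'remote' repo and a clone of it.
work=$(mktemp -d)
git init -q -b master "$work/remote" && cd "$work/remote"
git config user.email you@example.com && git config user.name you
echo v1 > file && git add file && git commit -qm "v1"

git clone -q "$work/remote" "$work/local"    # local repo tracking 'origin'
cd "$work/remote" && echo v2 >> file && git commit -qam "v2"

cd "$work/local"
git fetch -q origin              # step 1: download the new commit
git merge -q origin/master       # step 2: merge it into the current branch
# the two steps together are exactly what 'git pull origin master' does
tail -1 file                     # the v2 change has arrived
```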
How do we know in Git if a branch has already been merged into master?
git branch --merged lists the branches that have been merged into the current branch; git branch --no-merged lists the branches that have not been merged.
What is ‘Staging Area’ or ‘Index’ in GIT?
Before committing a file, it must be formatted and reviewed in an intermediate area known as ‘Staging Area’ or ‘Indexing Area’. #git add
What is Git Stash?
Let's say you've been working on part of your project, things are in a messy state, and you want to switch branches for some time to work on something else. The problem is, you don't want to commit your half-done work just so you can get back to this point later. The answer to this issue is git stash. Git stashing takes your working directory, that is, your modified tracked files and staged changes, and saves it on a stack of unfinished changes that you can reapply at any time.
What is Git stash drop?
The git 'stash drop' command is basically used to remove a stashed item. By default it removes the last added stash item, and it can also remove a specific item if you include it as an argument. First list the stashed items: git stash list stash@{0}: WIP on master: 049d080 added the index file stash@{1}: WIP on master: c265351 Revert "added files" stash@{2}: WIP on master: 13d80a5 added number to log Then, to remove a particular stash item from the list, pass its name: git stash drop stash@{1}
What is the function of ‘git config’?
Git uses your username to associate commits with an identity. The git config command can be used to change your Git configuration, including your username. Suppose you want to set a username and email ID to associate commits with an identity so that you can know who has made a commit. For that, use: git config --global user.name "Your Name" (adds your username) and git config --global user.email "Your E-mail Address" (adds your email ID).
How can you create a repository in Git?
To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.
What language is used in Git?
Git is written in C language, and since its written in C language its very fast and reduces the overhead of runtimes.
What is SubGit?
SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository and uses both Subversion and Git if you like.
How can you clone a Git repository via Jenkins?
First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.
What are the advantages of using Git?
1. Data redundancy and replication 2. High availability 3. Only one .git directory per repository 4. Superior disk utilization and network performance 5. Collaboration friendly 6. Git can be used for any sort of project.
What is git add?
It adds the file changes to the staging area
What is git commit?
Records the staged changes (the snapshot in the staging area) into the local repository; HEAD then points to the new commit
What is git push?
Sends the changes to the remote repository
What is git checkout?
Switch branch or restore working files
What is git branch?
Creates a branch
What is git fetch?
Fetch the latest history from the remote server and updates the local repo
What is git merge?
Joins two or more branches together
What is git pull?
Fetch from and integrate with another repository or a local branch (git fetch + git merge)
What is git rebase?
Process of moving or combining a sequence of commits to a new base commit
What is git revert?
To revert a commit that has already been published and made public
What is git clone?
Clones the git repository and creates a working copy in the local machine
How can I modify the commit message in git?
Use the following command and enter the required message: git commit --amend
How you handle the merge conflicts in git
Follow these steps: 1. Create a pull request 2. Resolve the conflicting changes together with the developers involved 3. Commit the corrected files to the branch 4. Merge the current branch with the master branch.
What is Git command to send the modifications to the master branch of your remote repository
Use the command “git push origin master”
NOSQL
What are the benefits of NoSQL database on RDBMS?
Benefits: 1. ETL is very low 2. Support for structured text is provided 3. Changes in periods are handled 4. Key Objectives Function. 5. The ability to measure horizontally 6. Many data structures are provided. 7. Vendors may be selected
Maven
What is Maven?
Maven is a DevOps tool used for building Java applications which helps the developer with the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.
Numpy
What is Numpy
There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing, providing a powerful n-dimensional array object and tools to integrate with C, C++ and so on. NumPy adds support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions: linear algebra, statistics, polynomials, sorting and searching, financial functions, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.
Why is python numpy better than lists?
Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.
Describe the map function in Python?
The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.
How to generate an array of 100 random numbers sampled from a standard normal distribution using Numpy?
>>> import numpy as np
>>> a = np.random.randn(100)
>>> print(type(a))
>>> print(a)
This will create 100 random numbers drawn from a standard normal distribution with mean 0 and standard deviation 1. (Note that np.random.rand draws from a uniform distribution over [0, 1); np.random.randn is the standard-normal sampler.)
How to count the occurrence of each value in a numpy array?
Use numpy.bincount() >>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9]) >>> numpy.bincount(arr) The argument to bincount() must consist of booleans or non-negative integers. Negative integers are invalid.
Ouput: [4 1 1 1 2 3 0 0 0 2]
Does Numpy Support Nan?
nan, short for "not a number", is a special floating point value defined by the IEEE-754 specification. Python numpy supports nan, but the behavior of nan is somewhat system dependent, and some systems, like older Cray and VAX computers, don't have full support for it.
What does ravel() function in numpy do?
It returns a flattened, one-dimensional version of a multi-dimensional array (a view of the original data where possible). It does not combine multiple arrays; it flattens a single array.
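A minimal sketch of ravel() flattening a 2-D array into 1-D:

```python
import numpy as np

# ravel() flattens a multi-dimensional array into one dimension,
# returning a view of the original data where possible.
arr = np.array([[1, 2], [3, 4]])
flat = arr.ravel()
print(flat)   # [1 2 3 4]
```

By contrast, arr.flatten() always returns a copy, never a view.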
How to remove from one array those items that exist in another?
>>> a = np.array([5, 4, 3, 2, 1]) >>> b = np.array([4, 8, 9, 10, 1]) # From 'a' remove all of 'b' >>> np.setdiff1d(a,b) # Output (setdiff1d returns the sorted unique values): >>> array([2, 3, 5])
How to reverse a numpy array in the most efficient way?
>>> import numpy as np >>> arr = np.array([11, 22, 33, 44, 55, 66, 77]) >>> reversed_arr = arr[::-1] # Slicing with a negative step returns a reversed view without copying >>> print(reversed_arr)
Output: [77 66 55 44 33 22 11]
What Is The Difference Between Numpy And Scipy?
NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms.
What Is The Preferred Way To Check For An Empty (zero Element) Array?
For a numpy array, use the size attribute. The size attribute gives the total number of elements of a numpy array: >>> arr = numpy.zeros((1,0)) >>> arr.size 0 (a size of 0 means the array is empty).
What Is The Difference Between Matrices And Arrays?
Matrices can only be two-dimensional, whereas arrays can have any number of dimensions
How can you find the indices of an array where a condition is true?
Given an array arr, the condition arr > 3 returns a boolean array (False is interpreted as 0 and True as 1). To get the actual indices where the condition is true, pass the mask to np.where or np.nonzero. >>> import numpy as np >>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]]) >>> arr > 3 array([[ True, True, True], [ True, True, True], [False, False, False]]) >>> np.where(arr > 3) # tuple of row and column indices where the condition holds
How to find the maximum and minimum value of a given flattened array?
>>> import numpy as np >>> a = np.arange(4).reshape((2,2)) >>> max_val = np.amax(a) >>> min_val = np.amin(a)
Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis.
>>> import numpy as np >>> arr = np.arange(16).reshape((4, 4)) >>> res = np.ptp(arr, 1)
Find median of a numpy flattened array
>>> import numpy as np >>> arr = np.arange(16).reshape((4, 4)) >>> res = np.median(arr)
Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis
>>> import numpy as np >>> x = np.arange(16) >>> mean = np.mean(x) >>> std = np.std(x) >>> var = np.var(x)
Calculate covariance matrix between two numpy arrays
>>> import numpy as np >>> x = np.array([2, 1, 0]) >>> y = np.array([2, 3, 3]) >>> cov_arr = np.cov(x, y)
Compute product-moment correlation coefficients of two given numpy arrays
>>> import numpy as np >>> x = np.array([0, 1, 3]) >>> y = np.array([2, 4, 5]) >>> cross_corr = np.corrcoef(x, y)
Develop a numpy program to compute the histogram of nums against the bins
>>> import numpy as np >>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1]) >>> bins = np.array([0, 1, 2, 3]) >>> np.histogram(nums, bins)
Write a NumPy program to get true division of the element-wise array inputs
>>> import numpy as np >>> x = np.arange(10) >>> np.true_divide(x, 3)
Panda
What is a series in pandas?
A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.
What features make Pandas such a reliable option to store tabular data?
Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.
What is re-indexing in pandas?
Reindexing is used to conform DataFrame to a new index with optional filling logic. It places NA/NaN in that location where the values are not present in the previous index. It returns a new object unless the new index is produced as equivalent to the current one, and the value of copy becomes False. It is used to change the index of the rows and columns of the DataFrame.
How will you create a series from dict in Pandas?
A Series is defined as a one-dimensional array that is capable of storing various data types.
import pandas as pd info = {'x' : 0., 'y' : 1., 'z' : 2.} a = pd.Series(info)
How can we create a copy of the series in Pandas?
Use the Series.copy method on an instance: import pandas as pd s = pd.Series([1, 2, 3]) s_copy = s.copy(deep=True) With deep=True (the default), a new object is created with a copy of the data and indices.
What is groupby in Pandas?
GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters and makes the task of splitting the data quick and easy.
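The splitting described above can be sketched briefly (the column names here are illustrative):

```python
import pandas as pd

# Group rows by a key column and aggregate each group.
df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "sales":  [10, 20, 30, 40],
})
totals = df.groupby("region")["sales"].sum()
print(totals)
```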
What is vectorization in Pandas?
Vectorization is the process of running operations on the entire array. This is done to reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.
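A small sketch contrasting a vectorized operation with its loop-based equivalent:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# Vectorized: a single call operates on the whole Series at once
vectorized = s * 2 + 1

# Loop-based equivalent, shown only for comparison; it is slower
looped = pd.Series([x * 2 + 1 for x in s])
```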
Different types of Data Structures in Pandas
Pandas provides two primary data structures, Series and DataFrame. Both of these data structures are built on top of NumPy.
What Is Time Series In pandas
A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.
How to convert pandas dataframe to numpy array?
The function to_numpy() is used to convert the DataFrame to a NumPy array. DataFrame.to_numpy(self, dtype=None, copy=False) The dtype parameter defines the data type to pass to the array and the copy ensures the returned value is not a view on another array.
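A short sketch of the conversion (the DataFrame contents are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
arr = df.to_numpy()   # each DataFrame row becomes one array row
print(arr)            # [[1 3] [2 4]]
```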
Write a Pandas program to get the first 5 rows of a given DataFrame
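No solution accompanies this one; a minimal sketch using head() (the DataFrame contents are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"n": range(10)})
first_five = df.head()   # head() returns the first 5 rows by default
print(first_five)
```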
Write a Pandas program to convert Series of lists to one Series
>>> import pandas as pd >>> s = pd.Series([['Red', 'Black'], ['Red', 'Green', 'White'], ['Yellow']]) >>> s = s.apply(pd.Series).stack().reset_index(drop=True)
Write a Pandas program to create a subset of a given series based on value and condition
>>> import pandas as pd >>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10]) >>> n = 6 >>> new_s = s[s < n] >>> new_s
Develop a Pandas code to alter the order of index in a given series
>>> import pandas as pd >>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E']) >>> s.reindex(index = ['B','A','C','D','E'])
Write a Pandas code to get the items of a given series not present in another given series.
>>> import pandas as pd >>> sr1 = pd.Series([1, 2, 3, 4, 5]) >>> sr2 = pd.Series([2, 4, 6, 8, 10]) >>> result = sr1[~sr1.isin(sr2)] >>> result
What is the difference between the two data series df['Name'] and df.loc[:, 'Name']?
First one is a view of the original dataframe and second one is a copy of the original dataframe.
Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.
>>> import pandas as pd >>> import numpy as np >>> np.random.seed(100) >>> num_series = pd.Series(np.random.randint(1, 5, [15])) >>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced' >>> num_series
Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.
>>> import pandas as pd >>> import numpy as np >>> num_series = pd.Series(np.random.randint(1, 10, 9)) >>> result = np.argwhere(num_series % 5==0)
How will you add a column to a pandas DataFrame?
# importing the pandas library >>> import pandas as pd >>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']), 'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])} >>> info = pd.DataFrame(info) # Add a new column to an existing DataFrame object >>> info['three'] = pd.Series([20,40,60], index=['a','b','c'])
How to iterate over a Pandas DataFrame?
You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.
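A short sketch of such a loop (the DataFrame contents are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "score": [1, 2]})

rows = []
for index, row in df.iterrows():
    # row is a Series holding one row's values, indexed by column name
    rows.append((index, row["name"], row["score"]))
```

Note that iterrows() is convenient but slow on large frames; prefer vectorized operations where possible.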
Python
What type of language is python? Programming or scripting?
Python is capable of scripting, but in general sense, it is considered as a general-purpose programming language.
Is python case sensitive?
Yes, python is a case sensitive language.
What is a lambda function in python?
An anonymous function is known as a lambda function. This function can have any number of parameters but can have just one statement.
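A quick illustration, including the common use as an inline key function:

```python
# A lambda may take any number of parameters but contains
# only a single expression.
add = lambda x, y: x + y
print(add(2, 3))   # 5

# Common use: a short key function for sorting
words = sorted(["bbb", "a", "cc"], key=lambda w: len(w))
```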
What is the difference between xrange and range in python?
xrange and range are the exact same in terms of functionality. The only difference is that range returns a Python list object and xrange returns an xrange object, a lazily evaluated sequence (this distinction applies to Python 2; in Python 3, range itself is lazy and xrange no longer exists).
What are docstrings in python?
Docstrings are not actually comments, but they are documentation strings. These docstrings are within triple quotes. They are not assigned to any variable and therefore, at times, serve the purpose of comments as well.
Whenever Python exits, why isn’t all the memory deallocated?
Whenever Python exits, not all memory is de-allocated; in particular, Python modules with circular references to other objects, and objects referenced from global namespaces, are not always freed. It is also impossible to de-allocate those portions of memory that are reserved by the C library. On exit, Python uses its own efficient clean-up mechanism to try to de-allocate/destroy every other object.
What does this mean: *args, **kwargs? And why would we use it?
We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
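A small sketch of both forms (the function name is illustrative):

```python
def demo(*args, **kwargs):
    # args collects positional arguments into a tuple,
    # kwargs collects keyword arguments into a dict.
    return args, kwargs

a, k = demo(1, 2, name="x")
```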
What is the difference between deep and shallow copy?
Shallow copy is used when a new instance type gets created and it keeps the values that are copied in the new instance. Shallow copy is used to copy the reference pointers just like it copies the values. Deep copy is used to store the values that are already copied. Deep copy doesn’t copy the reference pointers to the objects. It makes the reference to an object and the new object that is pointed by some other object gets stored.
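The distinction is easiest to see with nested lists:

```python
import copy

original = [[1, 2], [3, 4]]

shallow = copy.copy(original)      # inner lists are shared references
deep = copy.deepcopy(original)     # inner lists are fully duplicated

original[0][0] = 99
# The shallow copy sees the change through the shared inner list;
# the deep copy is unaffected.
```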
Define encapsulation in Python?
Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.
Does python make use of access specifiers?
Python does not restrict access to an instance variable or function. Python lays down the convention of prefixing the name of a variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.
What are the generators in Python?
Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.
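A minimal generator sketch (the function name is illustrative):

```python
def countdown(n):
    # The yield expression makes this a generator function:
    # calling it returns an iterator that produces values lazily.
    while n > 0:
        yield n
        n -= 1

values = list(countdown(3))   # [3, 2, 1]
```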
Write a Python script to find whether a sequence is a palindrome
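No solution accompanies this one; a minimal sketch that works for strings, lists, and tuples:

```python
def is_palindrome(seq):
    # A sequence is a palindrome if it equals its own reverse.
    return list(seq) == list(reversed(seq))

print(is_palindrome("madam"))    # True
print(is_palindrome([1, 2, 3]))  # False
```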
How will you remove the duplicate elements from the given list?
The set is another type available in Python. It doesn't allow duplicates and provides useful set operations like union, difference, etc. To remove duplicates from a list a: >>> list(set(a))
Does Python allow arguments Pass by Value or Pass by Reference?
Python arguments are neither pass by value nor pass by reference; they are passed by assignment. The parameter you pass is a reference to the object, not a reference to a fixed memory location, and that reference is itself passed by value. Additionally, some data types like strings and tuples are immutable, whereas others are mutable.
What is slicing in Python?
Slicing in Python is a mechanism to select a range of items from Sequence types like strings, list, tuple, etc.
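A few common slice forms, sketched on a list:

```python
items = [0, 1, 2, 3, 4, 5]

middle = items[1:4]           # elements at index 1, 2, 3
evens = items[::2]            # every second element
reversed_items = items[::-1]  # negative step reverses the sequence
```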
Why is the “pass” keyword used in Python?
The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.
What are decorators in Python?
Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion
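A minimal decorator sketch (the decorator and function names are illustrative):

```python
def shout(func):
    # The decorator wraps func and returns a new function
    # that upper-cases whatever func returns.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello {name}"

result = greet("world")   # "HELLO WORLD"
```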
What is the key difference between lists and tuples in python?
The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.
What is self in Python?
self is not actually a keyword in Python but a strong naming convention: it is the first parameter of instance methods and refers to the instance (object) of the class. Unlike in Java, it must be listed explicitly as the first parameter. It helps in distinguishing the methods and attributes of a class from local variables.
What is PYTHONPATH in Python?
PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.
What is the difference between .py and .pyc files?
.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.
What is namespace in Python?
In Python, every name introduced has a place where it lives and can be looked up. This is known as a namespace. It is like a box where a variable name is mapped to an object. Whenever a variable is searched, this box is searched to find the corresponding object.
What is pickling and unpickling?
Pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function, this process is called pickling. While the process of retrieving original Python objects from the stored string representation is called unpickling.
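A minimal round-trip sketch using dumps/loads (the in-memory counterparts of dump/load):

```python
import pickle

data = {"a": 1, "b": [2, 3]}

# Pickling: Python object -> byte string
blob = pickle.dumps(data)

# Unpickling: byte string -> original Python object
restored = pickle.loads(blob)
```

With dump/load the same conversion is written to and read from a file object instead.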
How is Python interpreted?
Python is an interpreted language. The Python program runs directly from the source code: the interpreter converts the source code written by the programmer into an intermediate bytecode, which is then translated into machine instructions and executed.
Jupyter Notebook
What is the main use of a Jupyter notebook?
Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.
How do I increase the cell width of the Jupyter/ipython notebook in my browser?
>>> from IPython.core.display import display, HTML >>> display(HTML("<style>.container { width:100% !important; }</style>"))
How do I convert an IPython Notebook into a Python file via command line?
>>> jupyter nbconvert --to script nb.ipynb
How to measure execution time in a jupyter notebook?
>>> %%time is an inbuilt magic command (place it at the top of a cell)
How to run a jupyter notebook from the command line?
>>> jupyter nbconvert --to notebook --execute nb.ipynb
How to make inline plots larger in jupyter notebooks?
Use the figsize argument. >>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')
How to display multiple images in a jupyter notebook?
>>> for ima in images: >>> plt.figure() >>> plt.imshow(ima)
Why is the Jupyter notebook interactive code and data exploration friendly?
The ipywidgets package provides many common user interface controls for exploring code and data interactively.
What is the default formatting option in jupyter notebook?
Default formatting option is markdown
What are kernel wrappers in jupyter?
Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python. Wrapper kernels can implement optional methods, notably for code completion and code inspection.
What are the advantages of custom magic commands?
Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.
Is the jupyter architecture language dependent?
No. It is language independent
Which tools allow jupyter notebooks to easily convert to pdf and html?
Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.
What is a major disadvantage of a Jupyter notebook?
It is very hard to run long asynchronous tasks, and notebooks are less secure than conventional scripts.
In which domain is the jupyter notebook widely used?
It is mainly used for data analysis and machine learning related tasks.
What are alternatives to jupyter notebook?
PyCharm interact, VS Code Python Interactive etc.
Where can you make configuration changes to the jupyter notebook?
In the config file located at ~/.jupyter/jupyter_notebook_config.py (IPython-specific settings live in ~/.ipython/profile_default/ipython_config.py)
Which magic command is used to run python code from jupyter notebook?
%run can execute python code from .py files
How to pass variables across the notebooks in Jupyter?
The %store command lets you pass variables between two different notebooks. >>> data = 'this is the string I want to pass to different notebook' >>> %store data # Stored 'data' (str) # In new notebook >>> %store -r data >>> print(data)
Export the contents of a cell/Show the contents of an external script
Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.
What inbuilt tool we use for debugging python code in a jupyter notebook?
Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.
How to make high resolution plots in a jupyter notebook?
>>> %config InlineBackend.figure_format = 'retina'
How can one use latex in a jupyter notebook?
When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.
What is a jupyter lab?
It is a next generation user interface for conventional jupyter notebooks. Users can drag and drop cells, arrange code workspace and live previews. It’s still in the early stage of development.
What is the biggest limitation for a Jupyter notebook?
Code versioning, management and debugging is not scalable in current jupyter notebook
Cloud Computing
Which are the different layers that define cloud architecture?
Below mentioned are the different layers that are used by cloud architecture: ● Cluster Controller ● SC or Storage Controller ● NC or Node Controller ● CLC or Cloud Controller ● Walrus
Explain Cloud Service Models?
Infrastructure as a service (IaaS) Platform as a service (PaaS) Software as a service (SaaS) Desktop as a service (Daas)
What are Hybrid clouds?
Hybrid clouds are made up of both public clouds and private clouds. It is preferred over both because it applies the most robust approach to implement cloud architecture. The hybrid cloud has the features and performance of both private and public clouds. It has an important feature whereby the cloud can be created by one organization and control of it can be given to some other organization.
Explain Platform as a Service (Paas)?
It is also a layer in cloud architecture. Platform as a Service is responsible to provide complete virtualization of the infrastructure layer, make it look like a single server and invisible for the outside world.
What is the difference in cloud computing and Mobile Cloud computing?
Mobile cloud computing uses the same concept as cloud computing, with the services accessed from mobile devices. Most tasks can be performed from the mobile: the applications run on remote cloud servers and give the user the right to access and manage storage.
What are the security aspects provided with the cloud?
There are 3 types of Cloud Computing Security: ● Identity Management: It authorizes the application services. ● Access Control: The user needs permissions so that they can control the access of other users entering the cloud environment. ● Authentication and Authorization: Allows only authorized and authenticated users to access the data and applications.
What are system integrators in cloud computing?
System Integrators emerged into the scene in 2006. System integration is the practice of bringing together components of a system into a whole and making sure that the system performs smoothly. A person or a company which specializes in system integration is called as a system integrator.
What is the usage of utility computing?
Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, charging for specific usage rather than a flat rate. Utility computing is a plug-in managed by an organization which decides what type of services have to be deployed from the cloud. It facilitates users to pay only for what they use.
What are some large cloud providers and databases?
Following are the most used large cloud providers and databases: – Google BigTable – Amazon SimpleDB – Cloud-based SQL
Explain the difference between cloud and traditional data centers.
In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software issues, so not only is the initial cost higher, but maintenance cost is also a problem. A cloud, by contrast, can be scaled when there is an increase in demand, and expenditure is mostly on usage rather than on maintaining data centers, so these issues are not faced in cloud computing.
What is hypervisor in Cloud Computing?
A hypervisor is a virtual machine monitor that can logically manage resources for virtual machines: it allocates, partitions, and isolates the hardware resources given over to virtualization. A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.
Define what MultiCloud is?
Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.
What is a multi-cloud strategy?
The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider. They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.
What is meant by Edge Computing, and how is it related to the cloud?
Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.
What are disadvantages of SaaS cloud computing layer
1) Security: Data is stored in the cloud, so security may be an issue for some users; cloud computing is not necessarily more secure than in-house deployment. 2) Latency: Since data and applications are stored in the cloud at a variable distance from the end user, there may be greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications that demand response times in milliseconds. 3) Total dependency on the internet: Without an internet connection, most SaaS applications are not usable. 4) Switching between SaaS vendors is difficult: Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the new SaaS.
What is IaaS in Cloud Computing?
IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this model, providers offer IT infrastructure such as servers, processing, storage, virtual machines and other resources. Customers can access the resources easily over the internet using an on-demand, pay-as-you-go model.
Explain what is the use of “EUCALYPTUS” in cloud computing?
EUCALYPTUS is an open source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform. With the help of EUCALYPTUS, public, private, and hybrid clouds can be built. An organization can produce its own data centers with it, and can allow many other organizations to use its functionality.
Name the foremost refined and restrictive service model?
The most refined and restrictive service model is SaaS. Once the service requires the consumer to use an entire hardware/software/application stack, it is using the most refined and restrictive service model.
Name all the kinds of virtualization that are also characteristics of cloud computing?
Storage, Application, CPU. To enable these characteristics, resources should be highly configurable and versatile.
What Are Main Features Of Cloud Services?
Some important features of cloud services are given as follows: • Accessing and managing commercial software. • Centralizing software-management activities in the Web environment. • Developing applications capable of serving several clients. • Centralizing software updates, eliminating the need to download upgrades.
What Are The Advantages Of Cloud Services?
Some of the advantages of cloud services are given as follows: • Helps utilize investment in the corporate sector and is therefore cost saving. • Helps in developing scalable and robust applications; previously scaling took months, but now it takes far less time. • Helps in saving time in terms of deployment and maintenance.
Mention The Basic Components Of A Server Computer In Cloud Computing?
The hardware components of a server computer in cloud computing match those used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, power supply, etc.
What are the advantages of auto-scaling?
Following are the advantages of autoscaling ● Offers fault tolerance ● Better availability ● Better cost management
Azure Cloud
Which Services Are Provided By Window Azure Operating System?
Windows Azure provides three core services which are given as follows: • Compute • Storage • Management
Which service in Azure is used to manage resources in Azure?
Azure Resource Manager is used to manage infrastructures that involve a number of Azure services. It can be used to deploy, manage, and delete all the resources together using a simple JSON template.
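Such a deployment template is plain JSON. A minimal, illustrative sketch (the storage-account name, API version, and SKU are placeholder assumptions, not recommendations):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "examplestorageacct",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Deploying, updating, or deleting this template acts on every resource it declares as a single unit.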
Which web applications can be deployed with Azure?
Azure supports web applications built on the .NET platform, and Microsoft has also released SDKs for Java and Ruby to allow applications written in those languages to call the Azure Services Platform APIs, including the AppFabric services.
What are Roles in Azure and why do we use them?
Roles are, in layman's terms, servers: managed, load-balanced, Platform as a Service virtual machines that work together to achieve a common goal. There are three types of roles in Microsoft Azure:
● Web Role – used to deploy a website, using languages supported by the IIS platform such as PHP and .NET. It is configured and customized to run web applications.
● Worker Role – a helper to the Web Role; it executes background processes, unlike the Web Role, which serves the website itself.
● VM Role – used to schedule tasks and other Windows services. This role can be used to customize the machines on which the Web and Worker Roles run.
What is Azure as PaaS?
PaaS is a computing platform that includes an operating system, programming language execution environment, database, or web services. Developers and application providers use this type of Azure services.
What are Break-fix issues in Microsoft Azure?
In Microsoft Azure, technical problems are called break-fix issues. The term is used when work is involved in supporting a technology that has failed in the normal course of its function.
Explain Diagnostics in Windows Azure
Windows Azure Diagnostics offers the facility to store diagnostic data. In Azure, some diagnostic data is stored in tables, while some is stored in blobs. The diagnostic monitor runs in Windows Azure, as well as in the compute emulator, to collect data for a role instance.
State the difference between verbose and minimal monitoring.
Verbose monitoring collects metrics based on performance and allows close analysis of the data fed in while the application runs. Minimal monitoring, the default configuration, makes use of performance counters gathered from the operating system of the host.
What is the main difference between the repository and the powerhouse server?
The main difference is that a repository server maintains the integrity, consistency, and uniformity of data, while a powerhouse server governs the integration of different aspects of the database repository.
Explain command task in Microsoft Azure
A command task is an operational window that sets off the flow of one or more commands (such as shell commands) while the system is running.
What is the difference between Azure Service Bus Queues and Storage Queues?
Azure supports two types of queue mechanisms: Storage queues and Service Bus queues. Storage queues are part of the Azure Storage infrastructure and feature a simple REST-based GET/PUT/PEEK interface, providing persistent and reliable messaging within and between services. Service Bus queues are part of a broader Azure messaging infrastructure that supports queuing as well as publish/subscribe and more advanced integration patterns.
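The GET/PUT/PEEK semantics of a Storage queue can be illustrated with a small in-memory stand-in. This is a pure-Python sketch of the semantics only, not the Azure SDK:

```python
from collections import deque

class StorageQueueSketch:
    """In-memory stand-in for a Storage queue's PUT/PEEK/GET operations."""
    def __init__(self):
        self._messages = deque()

    def put(self, body):
        """PUT: enqueue a message at the back of the queue."""
        self._messages.append(body)

    def peek(self):
        """PEEK: read the front message without removing it."""
        return self._messages[0] if self._messages else None

    def get(self):
        """GET: receive and remove the front message."""
        return self._messages.popleft() if self._messages else None

q = StorageQueueSketch()
q.put("order-1")
q.put("order-2")
print(q.peek())  # "order-1" (still in the queue)
print(q.get())   # "order-1" (now removed)
print(q.get())   # "order-2"
```

Service Bus queues layer richer patterns (topics, subscriptions, dead-lettering) on top of this basic queue behavior.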
Explain Azure Service Fabric.
Azure Service Fabric is a distributed platform designed by Microsoft to facilitate the development, deployment, and management of highly scalable and customizable applications. Applications created in this environment consist of detached microservices that communicate with each other through service application programming interfaces.
Define the Azure Redis Cache.
Azure Redis Cache is an in-memory cache, based on open-source Redis, that helps web applications fetch data from a backend data source into the cache and serve web pages from the cache to enhance application performance. It provides a powerful and secure way to cache an application's data in the Azure cloud.
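Applications typically use such a cache with the cache-aside pattern: check the cache first, fall back to the backend on a miss, then populate the cache. A pure-Python sketch in which a plain dict stands in for the Redis cache and backend:

```python
cache = {}                      # stands in for Azure Redis Cache
database = {"user:1": "Alice"}  # stands in for the backend data source
db_reads = 0                    # counts how often we hit the backend

def get_user(key):
    global db_reads
    if key in cache:            # cache hit: serve from memory
        return cache[key]
    db_reads += 1               # cache miss: read from the backend...
    value = database[key]
    cache[key] = value          # ...and populate the cache for next time
    return value

get_user("user:1")  # miss: reads the database once
get_user("user:1")  # hit: served from the cache
print(db_reads)     # 1
```

With the real service, the dict operations become calls to a Redis client pointed at your cache endpoint; the control flow is the same.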
How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what’s the benefit of Azure SLA?
Two. If we do so, the role will have external connectivity at least 99.95% of the time.
What are the options to manage session state in Windows Azure?
● Windows Azure Caching ● SQL Azure ● Azure Table
What is cspack?
It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.
What is csrun?
It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.
How to design applications to handle connection failure in Windows Azure?
The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
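The three strategies can be sketched as delay-interval generators. The Transient Fault Handling Application Block itself is a .NET library, so this is a language-neutral illustration in Python with made-up interval values:

```python
import random

def fixed(base=1.0):
    """Fixed interval: the same delay before every retry."""
    while True:
        yield base

def incremental(base=1.0, step=1.0):
    """Incremental interval: the delay grows by a standard amount each retry."""
    delay = base
    while True:
        yield delay
        delay += step

def exponential_backoff(base=1.0, cap=30.0, jitter=0.1):
    """Exponential back-off: the delay doubles, with some random variation."""
    delay = base
    while True:
        yield min(cap, delay) + random.uniform(0, jitter)
        delay *= 2

inc = incremental()
print([next(inc) for _ in range(3)])  # [1.0, 2.0, 3.0]
```

A retry loop would sleep for each yielded interval between attempts until the transient connection failure clears or a retry limit is reached.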
What is Windows Azure Diagnostics?
Windows Azure Diagnostics enables you to collect diagnostic data from an application running in Windows Azure. You can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing.
What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?
Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues. Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based GET/PUT/PEEK interface, providing reliable, persistent messaging within and between services. Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports dead-lettering and queuing as well as publish/subscribe, Web service remoting, and integration patterns.
What is the use of Azure Active Directory?
Azure Active Directory is an identity and access management system. It is very similar to on-premises Active Directory and allows you to grant your employees access to specific products and services within the network.
Is it possible to create a Virtual Machine using Azure Resource Manager in a Virtual Network that was created using classic deployment?
This is not supported. You cannot use Azure Resource Manager to deploy a virtual machine into a virtual network that was created using classic deployment.
What are virtual machine scale sets in Azure?
Virtual machine scale sets are Azure compute resource that you can use to deploy and manage a set of identical VMs. With all the VMs configured the same, scale sets are designed to support true autoscale, and no pre-provisioning of VMs is required. So it’s easier to build large-scale services that target big compute, big data, and containerized workloads.
Are data disks supported within scale sets?
Yes. A scale set can define an attached data disk configuration that applies to all VMs in the set. Other options for storing data include: ● Azure files (SMB shared drives) ● OS drive ● Temp drive (local, not backed by Azure Storage) ● Azure data service (for example, Azure tables, Azure blobs) ● External data service (for example, remote database)
What is the difference between the Windows Azure Platform and Windows Azure?
The former is Microsoft's PaaS offering, which includes Windows Azure, SQL Azure, and AppFabric; the latter is one part of that offering: Microsoft's cloud operating system.
What are the three main components of the Windows Azure Platform?
Compute, Storage and AppFabric.
Can you move a resource from one group to another?
Yes, you can. A resource can be moved among resource groups.
How many resource groups a subscription can have?
A subscription can have up to 800 resource groups. Also, a resource group can have up to 800 resources of the same type and up to 15 tags.
Explain the fault domain.
A fault domain is a logical working domain in which the underlying hardware shares a common power source and network switch. When VMs are created, Azure distributes them across fault domains, which limits the potential impact of hardware failure, power interruption, or network outages.
Differentiate between the repository and the powerhouse server?
Repository servers maintain the integrity, consistency, and uniformity of data, whereas the powerhouse server governs the integration of different aspects of the database repository.
AWS Cloud
Explain what S3 is?
S3 stands for Simple Storage Service. You can use S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is “pay as you go.”
What is AMI?
AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.
Mention what the relationship between an instance and AMI is?
From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and you can interact with it as you would with any computer.
How many buckets can you create in AWS by default?
By default, you can create up to 100 buckets in each of your AWS accounts.
Explain can you vertically scale an Amazon instance? How?
Yes, you can vertically scale an Amazon instance. To do so: ● Spin up a new, larger instance than the one you are currently running ● Pause the new instance, detach its root EBS volume, and discard it ● Then stop your live instance and detach its root volume ● Note the unique device ID and attach that root volume to your new server ● Start it again
Explain what T2 instances are.
T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.
In VPC with private and public subnets, database servers should ideally be launched into which subnet?
With private and public subnets in VPC, database servers should ideally launch into private subnets.
Mention what the security best practices for Amazon EC2 are?
For Amazon EC2 security best practices, follow these steps: ● Use AWS Identity and Access Management to control access to your AWS resources ● Restrict access by allowing only trusted hosts or networks to access ports on your instance ● Review the rules in your security groups regularly ● Only open up the permissions that you require ● Disable password-based logins for instances launched from your AMI
Is the property of broadcast or multicast supported by Amazon VPC?
No, Amazon VPC does not currently provide support for broadcast or multicast.
How many Elastic IPs does AWS allow you to create?
5 VPC Elastic IP addresses are allowed for each AWS account.
Explain default storage class in S3
The default storage class in S3 is Standard, which is intended for frequently accessed data.
What are the Roles in AWS?
Roles are used to provide permissions to entities that you trust within your AWS account. Roles are very similar to users; however, with roles, you do not need to create a username and password to work with the resources.
What are the edge locations?
An edge location is the area where content is cached. When a user tries to access content, it is automatically looked up in the nearest edge location.
Explain snowball?
Snowball is a data transport option. It uses secure physical appliances to move large amounts of data into and out of AWS. With Snowball, you can transfer massive amounts of data from one place to another, which helps reduce networking costs.
What is a redshift?
Redshift is a big data warehouse product: a fast, powerful, fully managed data warehouse service in the cloud.
What is meant by subnet?
A subnet is one of the smaller chunks that a large section of IP addresses is divided into.
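The idea is easy to demonstrate with Python's standard ipaddress module, dividing an example /16 block into /24 chunks:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")     # the large section of addresses
subnets = list(vpc.subnets(new_prefix=24))    # divided into /24 chunks

print(len(subnets))   # 256 subnets
print(subnets[0])     # 10.0.0.0/24
print(subnets[1])     # 10.0.1.0/24
```

Each /24 chunk here holds 256 addresses of the parent range and could back, for example, one subnet in a VPC.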
Can you establish a Peering connection to a VPC in a different region?
Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.
What is SQS?
Simple Queue Service, also known as SQS, is a distributed queuing service that acts as a mediator between two controllers.
How many subnets can you have per VPC?
You can have 200 subnets per VPC.
What is Amazon EMR?
EMR (Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks such as Apache Hadoop and Apache Spark on Amazon Web Services to process and analyze vast amounts of data. You can prepare data for analytics and business intelligence workloads using Apache Hive and other relevant open-source projects.
What is boot time taken for the instance stored backed AMI?
The boot time for an Amazon instance store-backed AMI is less than 5 minutes.
Do you need an internet gateway to use peering connections?
No. VPC peering traffic stays within the AWS network, so an internet gateway is not required for a peering connection.
How to connect an EBS volume to multiple instances?
No, you cannot attach an EBS volume to multiple instances at once, although you can attach multiple EBS volumes to a single instance.
What are the different types of Load Balancer in AWS services?
Three types of Load balancer are: 1. Application Load Balancer 2. Classic Load Balancer 3. Network Load Balancer
In which situation you will select provisioned IOPS over standard RDS storage?
You should select provisioned IOPS storage over standard RDS storage when you run I/O-intensive workloads, such as batch processing, that need consistent, high performance.
What are the important features of Amazon cloud search?
Important features of Amazon CloudSearch are: ● Boolean searches ● Prefix searches ● Range searches ● Full-text search ● Autocomplete suggestions
What is AWS CDK?
AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. AWS CloudFormation enables you to: • Create and provision AWS infrastructure deployments predictably and repeatedly. • Take advantage of AWS offerings such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon SNS, Elastic Load Balancing, and AWS Auto Scaling. • Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure. • Use a template file to create and delete a collection of resources together as a single unit (a stack). The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net.
What are best practices for controlling access to AWS CodeCommit?
Best practices include: ● Create your own IAM policy to scope repository access ● Provide temporary access credentials for your repositories, typically via a separate AWS account for IAM and separate accounts for dev/staging/prod ● Use federated access ● Require multi-factor authentication
What is AWS CodeBuild?
AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.
1. Provide AWS CodeBuild with a build project. A build project contains information about where to get the source code, the build environment, and how to build the code; the most important component is the buildspec file.
2. AWS CodeBuild creates the build environment: a combination of OS, programming language runtime, and other tools needed to build.
3. AWS CodeBuild downloads the source code into the build environment and uses the buildspec file to run a build. The code can come from any supported source provider, for example a GitHub repository, an Amazon S3 input bucket, a Bitbucket repository, or an AWS CodeCommit repository.
4. Build artifacts produced are uploaded to an Amazon S3 bucket.
5. The build environment sends a notification about the build status.
6. While the build is running, the build environment sends information to Amazon CloudWatch Logs.
What is AWS CodeDeploy?
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
You can use AWS CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.
With AWS CodeDeploy’s AppSpec file, you can specify commands to run at each phase of deployment, such as code retrieval and code testing. You can write these commands in any language, meaning that if you have an existing CI/CD pipeline, you can modify and sequence existing stages in an AppSpec file with minimal effort.
You can also integrate AWS CodeDeploy into your existing software delivery toolchain using the AWS CodeDeploy APIs. AWS CodeDeploy gives you the advantage of doing multiple code updates (in-place), enabling rapid deployment.
You can architect your CI/CD pipeline to enable scaling with AWS CodeDeploy. This plays an important role while deciding your blue/green deployment strategy.
AWS CodeDeploy deploys updates in revisions. So if there is an issue during deployment, you can easily roll back and deploy a previous revision
What is AWS CodeCommit?
AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools. AWS CodeCommit stores code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with local and remote teams to edit, compare, sync, and revise your code. Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source code control infrastructure. CodeCommit automatically encrypts your files and integrates with AWS Identity and Access Management (IAM), enabling you to assign user-specific permissions to your repositories. This ensures that your code remains secure, and you can collaborate on projects across your team in a secure manner.
What is AWS Opswork?
AWS OpsWorks is a configuration management tool that provides managed instances of Chef and Puppet.
Chef and Puppet enable you to use code to automate your configurations.
AWS OpsWorks for Puppet Enterprise AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet, for infrastructure and application management. It maintains your Puppet primary server by automatically patching, updating, and backing up your server. AWS OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure and gives you access to all of the Puppet Enterprise features. It also works seamlessly with your existing Puppet code.
AWS OpsWorks for Chef Automate Offers a fully managed OpsWorks Chef Automate server. You can automate your workflow through a set of automation tools for continuous deployment and automated testing for compliance and security. It also provides a user interface that gives you visibility into your nodes and their status. You can automate software and operating system configurations, package installations, database setups, and more. The Chef server centrally stores your configuration tasks and provides them to each node in your compute environment at any scale, from a few nodes to thousands of nodes.
AWS OpsWorks Stacks: With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application servers. You can deploy and configure EC2 instances in each layer or connect other resources such as Amazon RDS databases. You run Chef recipes using Chef Solo, enabling you to automate tasks such as installing packages and languages or frameworks, and configuring software
Google Cloud Platform
What are the main advantages of using Google Cloud Platform?
Google Cloud Platform is a medium that provides its users access to top-tier cloud services and features. It is gaining popularity among cloud professionals and users for the advantages it offers. The main advantages of using Google Cloud Platform over others are: ● GCP offers better pricing deals compared to other cloud service providers ● Google Cloud servers allow you to work from anywhere, with access to your information and data ● For hosting cloud services, GCP offers improved overall performance ● Google Cloud delivers server and security updates quickly and efficiently ● The security level of Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with multiple security measures. If you are going for a Google Cloud interview, you should prepare yourself with solid knowledge of the Google Cloud Platform.
Why should you opt to Google Cloud Hosting?
The reason to opt for Google Cloud Hosting is the set of advantages it offers: ● Better pricing plans ● The benefits of live migration of machines ● Enhanced performance and execution ● Commitment to constant development and expansion ● A private network that provides efficiency and maximum uptime ● Strong control and security of the cloud platform ● Built-in redundant backups that ensure data integrity and reliability
What are the libraries and tools for cloud storage on GCP?
At the core level, the XML API and JSON API are available for cloud storage on Google Cloud Platform. Along with these, Google provides the following ways to interact with cloud storage: ● The Google Cloud Platform Console, which performs basic operations on objects and buckets ● The Cloud Storage client libraries, which provide programming support for various languages including Java, Ruby, and Python ● The gsutil command-line tool, which provides a command-line interface to cloud storage
There are also many third-party libraries and tools, such as the Boto library.
What do you know about Google Compute Engine?
Google Compute Engine is the basic component of the Google Cloud Platform. It is an IaaS product that offers self-managed, flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines running on KVM, local and durable storage options, and a REST-based API for control and configuration. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery to extend its computational ability and enable more sophisticated and complex applications.
How are the Google Compute Engine and Google App Engine related?
Google Compute Engine and Google App Engine are complementary. Google Compute Engine is Google's IaaS product, whereas Google App Engine is its PaaS product. Google App Engine is generally used to run web applications, mobile backends, and line-of-business applications. If you want more control over the underlying infrastructure, Compute Engine is the better choice; for instance, you can use Compute Engine to implement customized business logic or to run your own storage system.
How does the pricing model work in GCP cloud?
On Google Cloud Platform, Google Compute Engine charges users on the basis of compute instances, network use, and storage. Virtual machines are billed per second, with a minimum of one minute. Storage cost is charged based on the amount of data you store, and network cost is calculated from the amount of data transferred between virtual machine instances communicating with each other over the network.
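The per-second billing with a one-minute minimum can be expressed as a small function (the rate used below is a made-up example, not a real GCP price):

```python
def vm_charge(seconds_used, price_per_second):
    """Per-second billing with a 1-minute minimum, as GCP applies to VMs."""
    billable = max(seconds_used, 60)   # anything under a minute bills as 60s
    return billable * price_per_second

RATE = 0.0001  # illustrative price per second, not a real GCP rate
print(vm_charge(45, RATE))    # 45s is billed as 60s, roughly 0.006
print(vm_charge(3600, RATE))  # a full hour, roughly 0.36
```

Storage and network charges would be computed separately from the amounts of data stored and transferred.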
What are the different methods for the authentication of Google Compute Engine API?
This is one of the popular Google Cloud architect interview questions which can be answered as follows. There are different methods for the authentication of Google Compute Engine API: – Using OAuth 2.0 – Through client library – Directly with an access token
List some Database services by GCP.
There are many Google Cloud database services that help enterprises manage their data: ● Bare Metal Solution, a relational database offering that allows you to migrate (lift and shift) specialized workloads to Google Cloud ● Cloud SQL, a fully managed, reliable, integrated relational database service for MySQL, SQL Server, and PostgreSQL (Postgres); it reduces maintenance cost and ensures business continuity ● Cloud Spanner ● Cloud Bigtable ● Firestore ● Firebase Realtime Database ● Memorystore ● Google Cloud Partner Services. For more database products and solutions, refer to the Google Cloud Databases documentation.
What are the different Network services by GCP?
Google Cloud provides many networking services and technologies that make it easy to scale and manage your network: ● Hybrid Connectivity helps connect your infrastructure to Google Cloud ● Virtual Private Cloud (VPC) manages networking for your resources ● Cloud DNS is a highly available, global domain name system (DNS) network ● Service Directory provides a service-centric network solution ● Cloud Load Balancing ● Cloud CDN ● Cloud Armor ● Cloud NAT ● Network Telemetry ● VPC Service Controls ● Network Intelligence Center ● Network Service Tiers. For more, refer to the Google Cloud networking products documentation.
List some Data Analytics service by GCP.
Google Cloud offers various data analytics services: ● BigQuery, a serverless, highly scalable, cost-effective multi-cloud data warehouse built for business agility ● Looker ● Dataproc, a service for running Apache Spark and Apache Hadoop clusters that makes open-source data and analytics processing easy, fast, and more secure in the cloud ● Dataflow ● Pub/Sub ● Cloud Data Fusion ● Data Catalog ● Cloud Composer ● Google Data Studio ● Dataprep ● Cloud Life Sciences, which enables the life sciences community to manage, process, and transform biomedical data at scale ● Google Marketing Platform, which combines advertising and analytics to deliver better marketing results, deeper insights, and quality customer connections (it is not an official Google Cloud product and comes under separate terms of service). For more, see the Google Cloud Data Analytics documentation.
Explain Google BigQuery in Google Cloud Platform
A traditional data warehouse requires hardware to be set up and periodically replaced; Google BigQuery serves as a replacement for that model. In addition, BigQuery organizes table data into units called datasets.
Explain Auto-scaling in Google cloud computing
Auto-scaling lets you provision and start new instances automatically, without human intervention. Auto-scaling is triggered based on various metrics and load.
Describe Hypervisor in Google Cloud Platform
A hypervisor, also called a VMM (Virtual Machine Monitor), is computer hardware or software used to create and run virtual machines (a virtual machine is also called a guest machine). The hypervisor runs on a host machine.
Define VPC in the Google cloud platform
VPC in Google Cloud Platform provides connectivity from on-premises locations to any region without using the public internet. VPC connectivity covers Compute Engine VM instances, App Engine flexible environment instances, Kubernetes Engine clusters, and a few other resources, depending on the project. Multiple VPCs can also be used across numerous projects.
Microservices
What are the benefits of a microservices architecture?
Technological freedom, which can lead to faster innovation
Microservices architectures don’t follow a “one size fits all” approach; teams building microservices have the freedom to choose the best tool to solve each specific problem.
Reusable code and short time to add new features
Dividing software into small, well-defined modules enables teams to use functions for multiple purposes. A service written for a certain function can be used as a building block for another feature. This allows an application to bootstrap off itself, as developers can create new capabilities without writing code from scratch.
Resilience
Service independence increases an application’s resistance to failure. In a monolithic architecture, if a single component fails, it can cause the entire application to fail. With microservices, applications handle total service failure by degrading functionality and not crashing the entire application.
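Degrading functionality instead of crashing can be sketched as a fallback around a failing dependency (the service and function names here are hypothetical):

```python
def get_recommendations(user_id):
    # Stands in for a call to a recommendations microservice that is down.
    raise ConnectionError("recommendation service unavailable")

def render_homepage(user_id):
    """Degrade functionality rather than crash the whole application."""
    try:
        recs = get_recommendations(user_id)
    except ConnectionError:
        recs = []            # fall back to an empty list; the page still renders
    return {"user": user_id, "recommendations": recs}

print(render_homepage(42))  # {'user': 42, 'recommendations': []}
```

In a monolith the equivalent failure would typically surface as an exception taking down the whole request path; here only the recommendations widget goes dark.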
How do you achieve low latency in microservices?
Don’t do a connection setup per RPC.
Cache things wherever possible.
Write asynchronous code wherever possible.
Exploit eventual consistency wherever possible. In other words: coordination is expensive, so don't do it unless you have to.
Route your requests sensibly.
Locate processing wherever will result in the best latency. That might mean you need more resources.
Use LIFO queues; they have better tail statistics than FIFO. Queue before load balancing, not after; that way a small fraction of slow requests is much less likely to stall all the processors. Source: Andrew McGregor
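The LIFO point is about tail latency under overload: FIFO makes every request wait behind the whole backlog, while LIFO serves the freshest requests quickly and lets a few stale ones miss their deadline. A toy simulation (arrival times, service time, and deadline are made up):

```python
from collections import deque

def serve(order, arrivals, service_time=2, deadline=5):
    """Count requests served within `deadline` of their arrival.

    `order` is 'fifo' or 'lifo'; one request completes every `service_time`
    ticks, and serving starts only after the whole backlog has arrived.
    """
    q = deque(sorted(arrivals))
    clock, on_time = max(arrivals), 0
    while q:
        arrived = q.popleft() if order == "fifo" else q.pop()
        clock += service_time
        if clock - arrived <= deadline:
            on_time += 1
    return on_time

arrivals = [0, 1, 2, 3]          # four requests queued before serving starts
print(serve("fifo", arrivals))   # 1 request meets its deadline
print(serve("lifo", arrivals))   # 2 requests meet their deadlines
```

Under this backlog, LIFO gets more requests in under the deadline than FIFO, which is the "better tail statistics" claim in miniature.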
What operating system do most servers use in 2022?
Of the 1500 *NIX servers under my control (at a very large Fortune 500 company), 90% of them are Linux. We have a small amount of HP-UX and AIX left over running legacy applications, but they are being phased out. Most of the applications we used to run on HP-UX and AIX (SAP, Oracle, you name it) now run on Linux. And it's not just my company; it's everywhere.
In 2022, the most widely used server operating system is Linux. Source: Bill Thompson
How do you load multiple files in parallel from an Amazon S3 bucket?
By specifying a file prefix of the file names in the COPY command or specifying the list of files to load in a manifest file.
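A manifest is just a JSON file listing the S3 objects to load in parallel. A sketch of generating one (the bucket and file names are hypothetical):

```python
import json

files = ["venue.txt.1", "venue.txt.2", "venue.txt.3"]
manifest = {
    "entries": [
        {"url": f"s3://example-bucket/{name}", "mandatory": True}
        for name in files
    ]
}

print(json.dumps(manifest, indent=2))
# After uploading this file to S3, Redshift loads the listed files in
# parallel with a COPY that names the manifest, along the lines of:
#   COPY venue FROM 's3://example-bucket/venue.manifest'
#   IAM_ROLE '...' MANIFEST;
```

Marking an entry `"mandatory": true` makes the COPY fail if that file is missing, rather than silently skipping it.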
How can you manage the amount of provisioned throughput that is used when copying from an Amazon DynamoDB table?
Set the READRATIO parameter in the COPY command to a percentage of unused throughput.
What must you do to use client-side encryption with your own encryption keys when using COPY to load data files that were uploaded to Amazon S3?
You must add the master key value to the credentials string with the ENCRYPTED parameter in the COPY command.
DevOps and SysOps Breaking News – Top Stories – Jobs
Re: Security Camera Project Sending a follow-up note on this because I'm a pesky software sales rep and it's literally my job (I know, I hate me too). Fortunately there was no calendar invite with the unsolicited email. submitted by /u/jcwrks [link] [comments]
Hello, New to DevOps. Just started this role less than a month ago. I am being tasked currently with writing up terraform for the existing infrastructure that was created through the cloud provider WebGUI, and with that I’m being tasked with coming up with a naming convention for these instances since there isn’t really any consistency between them. I have to account for environment, and scale. So- I’m thinking most general -> least general, so these instances are grouped alphabetically by their env essentially. For example- dev-app-01 or something. Do you guys have any recommendations? Any tips or advice? submitted by /u/Fit_Parfait_9867 [link] [comments]
Hello all 🙂 I work as a DevOps/infrastructure/platform engineer and I need a laptop. I was given an Asus ProArt Studiobook with an i9, a 3070, and 32 GB of RAM, which is total overkill. I obviously don't need the GPU, nor do I really need the i9 or the 32 GB of RAM (even though more is nice to have). This laptop is very heavy and bulky, and the battery life is horrendous: I get about 2 hours on Windows and 1 hour on Linux if I'm on a Zoom meeting. I rarely run anything heavy like VMs locally; I mostly do Terraform/Ansible and then work on remote servers. I'm looking for something with good battery life and nice portability, but I also don't want to spend an obscene amount of money. In my previous workplaces I always used MacBook Pros, but I can't currently spend €3,000-3,500+ on a laptop, and that's what the M3s seem to go for. So my question is two-fold: does anyone use a MacBook Air for DevOps work? If so, what spec would be 'good enough'? And does anyone know of a good non-Mac laptop that I could run Linux on and still get good battery life? I'm looking for 8+ hours without charging and under €2,000. submitted by /u/icyu
Hi devs! I've been thinking about a challenge that many of us encounter: efficiently managing and reusing code snippets across different projects, devices, and teams. Let's face it, we've all got that stash of go-to code snippets stashed away somewhere. Maybe they're scattered across different devices, buried in old projects, or lost somewhere in your downloads folder. Managing and reusing these snippets is a total headache. I'm working on a Code Snippet Manager that's gonna change the way we code. Think of it as your personal (or team) cloud-based code vault, packed with features to make your life easier and your workflow smoother.
Key features:
- Seamless export/import of snippets and templates: easily back up or migrate your entire library of code snippets. Link your GitHub account and import your Gists smoothly. Import snippets from Pastebin and other platforms.
- Language detection: snippets get auto-tagged with the programming language, no manual tagging needed. Custom AI assistance will analyze your code and slap on tags like "hero section", "utility", "scroll-to-top", "movement logic", "collectible class", you name it.
- Manual organization: organize snippets with tags, folders, and custom categories to find them quickly when needed.
- Syntax highlighting: supports all the popular and trendy programming languages with proper highlighting for easy readability.
- Smart summaries: let AI generate concise descriptions for your code blocks, so you know what's what at a glance. No more writing out comments; focus on coding while we handle the grunt work.
- Powerful search: quickly locate code snippets using keywords, tags, or even searching within the code content.
- Sharing & collaboration with security: share snippets securely with team members, control access levels, and collaborate in real time.
- Integration with popular IDEs: seamless plugins/extensions for VSCode, JetBrains IDEs (and more to come). Access and manage your snippets without ever leaving your code editor.
- Robust security: end-to-end encryption for sensitive code, multi-factor authentication, and compliance with data protection regulations.
Real-world use cases:
- Onboarding new devs: new team members get up to speed faster by accessing a shared library of code snippets, coding standards, and best practices.
- Freelancers and consultants: keep client-specific code organized and secure, accessible from anywhere.
- Cross-device access: work on your desktop at the office and continue seamlessly on your laptop at home, with all your code snippets synchronized and readily available.
- Secure code sharing: share specific code snippets with clients or external collaborators without exposing your entire codebase, ensuring security and professionalism.
- Version control for snippets: keep track of changes to your snippets, roll back to previous versions, and maintain a history of edits for accountability.
- Efficient code reviews: team members can comment on and suggest improvements to shared snippets, enhancing collaboration and code quality.
- Enhanced productivity: reduce the time spent searching for or rewriting code by having a well-organized repository at your fingertips.
- Customization and snippet templates: create and use snippet templates for recurring code structures, reducing boilerplate coding.
- Educational tool: teachers and students can share code snippets, with live previews to make learning interactive and fun.
Why this might be helpful: spend less time searching through old projects or files for that one snippet you need; share and sync snippets with your team in real time, keeping everyone on the same page and reducing redundant code; import/export features mean you're never locked in (your code, your rules); protect sensitive code with encryption and access controls, something not all existing solutions offer; access your snippets without leaving your favorite IDE or editor; and let auto-tagging and AI-generated descriptions handle the heavy lifting.
Questions for the community:
- Would a tool like this be beneficial to you or your team?
- Which features appeal to you the most, and why?
- Are there any features you feel are missing that would make this tool more valuable?
- Do you currently use any code snippet managers? What do you like or dislike about them?
- How important is security when managing and sharing code snippets in your workflow?
I'm gearing up to make this a reality and would love to get your thoughts. Some of these fancy/advanced features might be part of a pro tier (gotta pay the bills), but the aim is to deliver serious value that makes it totally worth it.
TL;DR: Thinking of creating a secure, cloud-based code snippet manager with features like syntax highlighting, tagging, powerful search, collaboration tools, IDE integration, and strong security measures. Would you find this useful? Thanks in advance for your thoughts! submitted by /u/adictonator
Jagshemash, DevOps neighbours! It is I, Boyan, greatest DevOps in all Kazakhstan! I come to you with important question. I want to show my skill to USA companies—yes, land of McDonald’s, Pamela Anderson, and big monies! But how can I make them say, “Wow wow wee waa! This Boyan, we must hire him immediately!”? What project can I make as DevOps engineer that is big and glorious? Something that will showcase all my big brain powers and make US and A recruiterka slide into my DM like smooth homemade rakiya. Here is what I know to do very nice: Make pipelines go fast, like rocket on cow’s milk. I best snake handler in village: mostly pythons. Automate things so I can rest and eat more cheese while servers run themselves. Kubernetes? Yes, I can do! Even my neighbor Nursultan say, “Boyan, you are kuber-whatever genius!” I also do monitoring, alerting, and can fix everything with only 3 lines of code—maximum! So what can I build? Maybe I make: Big project with CI/CD pipeline that deploy faster than gypsy stealing chicken? Or I make kubernetes cluster that self-heal like strong Kazakh man? Or maybe cloud infrastructure that so big and scalable, it can hold all of Kazakhstan’s goats? What will make hot recruiter lady say, “This Boyan, we need him on remote contracts, fast!”? Please help me, friends! I want to bring my glorious DevOps talent to America! Chenquieh! Boyan Balgaran, soon-to-be American DevOps superstar submitted by /u/baddoge9000
I'm currently in my 3rd semester and have been doing web dev for 7 months. I'm not that good at web dev yet, but for the long run I'm thinking of doing cloud and DevOps after web dev. I have no prior knowledge of ML, so that would be totally new. What should my goal be after web dev: cloud or ML? I don't have a clear goal about what to do; I'm just learning tech stacks, and I'm bored with web dev, so I'm thinking of switching to something else. submitted by /u/_titan_276_
Hi, I have a 3-node cluster on Windows Server 2016 with SQL Server 2016 installed with AlwaysOn. I would like to remove one node from the cluster. What is the correct way to do it, and is there a risk of downtime? Also, is uninstalling SQL Server necessary? Planned steps:
1. Remove the unwanted node from the Always On replicas.
2. Evict the node from the Windows failover cluster.
My questions are:
1. How can I remove the powered-off server from the cluster? Is it possible to delete it from the GUI or with PowerShell? Should I use Remove-ClusterNode -Name sql03, and do I have to use the Clear-ClusterNode -Name sql03 command as well?
2. What are the considerations around quorum voting? The current quorum votes are as follows; if I evict sql03 from the cluster, will there be any downtime? Current host server: sql01. File share witness: \\file01\sqlcls. Also, sql03 is powered off. Votes: sql01 assigned 1, current 1; sql02 assigned 1, current 0; sql03 assigned 1, current 0.
submitted by /u/maxcoder88
Yes, I know it's old, but it has approximately 3 TB of data on it that needs to be checked prior to its Valhalla. I've downloaded all the supporting software/docs and wanted to set it up on Windows 11, but I'm not able to detect the NAS on the network. Is there an easy way to access the data on the storage module? Apologies if this has been asked already, but I couldn't find any information, I guess because of its age. submitted by /u/Nono_Home
I'm looking for advice on managing file storage for my remote design, video, and photography team. For the past decade, I've run my own creative agency, where we share everything via Dropbox: working files, RAW images, video files, the whole asset library. Our entire library is about 10TB. It lives on my studio RAID at home but is also uploaded to Dropbox, so each team member has a synced version on their desktop. Essentially, we have the same set of files in five different locations, and Dropbox keeps everything up to date as we update files. The initial sync was a hassle, but it's worked great for the last five years. Now I've taken an in-house role leading a creative studio, but the head of IT is insisting we use only 2TB of space and "clean up" our data. Cleaning up the data isn't the solution; the risk of deleting data is high, and my answer has always been "rather than spending a week of my time cleaning data that I might actually need, how about I spend a few hundred dollars tripling my storage as my business grows?" The issue is that my team and I handle multiple projects daily and need to jump between them often. Raw assets are scattered across different parts of storage, so cleaning up isn't really a solution, since we never know what we'll need to access. Is there a good alternative to Dropbox that would keep my IT department happy? Here's what we need:
1. The team needs immediate access to all assets, which means enough space on their computers to sync files.
2. A cloud solution that can handle 5-10TB of data, accessed by three people throughout the day and syncing to their local desktops.
This is for a huge, $2B publicly listed company. How is it possible that my own agency had a more immediate solution to this? I feel I could have saved countless meetings by pulling out my credit card and getting our own Dropbox. Any recommendations? submitted by /u/serenitynow1990
I've been on a consulting project with a bank for the past three years, but now that it's wrapping up, I'll be on the bench. My work has primarily involved GCP migration from on-prem using GitHub Actions for CI/CD and Terraform Enterprise for IaC and deployments. After three years of sticking with the same tech stack and mostly writing YAML, I feel like I’ve lost my edge and need to refresh my skills. Any suggestions on areas, tools, or skills I should focus on to get back up to speed? TL;DR: Spent 3 years on GCP migration using GitHub Actions and Terraform. Project’s ending, and I feel rusty. What should I focus on to stay sharp in DevOps? submitted by /u/Beast-UltraJ
Hi, I have an issue where a Mac used to be able to RDP to a device on the local network using a 192.168.##.### address as the host name, with the user's ADAZURE\Name credentials and password. Since 24H2 on the Windows device, it can no longer connect; RDP on port 3389 is open. NLA on the Windows device has been unticked, because it used to throw an error. Now when they try to connect, they just get a plain 0x5 error. I've ensured the Mac is up to date, including the Microsoft RDP client. submitted by /u/Endeavour1988
We have built a self-hosted code review service, designed to be useful in the following scenarios: you have many repos but still want tight control over code quality; your repos are private, and commercial services seem overkill; you want to continuously improve the process and rules, with full customization. We are open-sourcing it and hope it will be helpful: https://github.com/qiniu/reviewbot Feedback and suggestions welcome. Thanks! submitted by /u/Carl_Ji
You can no longer use UptimeRobot to monitor services for free on commercial projects, so I'm looking for an alternative with a generous free plan. I've been a loyal UptimeRobot user for around six years, and I really loved their service. However, their recent pricing/ToS changes and plan restructuring have made it a dealbreaker for me. Here is a screenshot from their newsletter: https://x.com/AndrewDeJackson/status/1844665065267442142 submitted by /u/Then-Chest-8355
I have a few years of experience in DevOps now. I don't have a cloud cert under my belt just yet. Recently started working on getting the AWS Solutions Architect Associate cert. I did a take-home architecture exercise to get my current job. It was interesting and made me think, perhaps I would enjoy architecture more than DevOps. Maybe I'm a big-picture person? DevOps is fine, but I'm not sure I see myself doing this for the rest of my career. How can I transition to a Solutions Architect role? How would I know if being a Solutions Architect is right for me? Are there any Solutions Architects out there who can tell me about their day-to-day? submitted by /u/feel__
I was just told today by my supervisor that the executive team wants me gone. There have been problems with the executive team just telling me that they want certain things done (the most recent example was handing over our DNS zone file to a marketing firm), and I advised against it. Another example was a user not utilizing our software correctly and complaining that it wasn't working properly. She took that to her boss (the COO) and HR, we had a meeting, and I was blamed for not just doing what she wanted without questioning it. It seems they wanted a "yes man" instead of someone with a brain. The problem with the way I tried to handle it was being an open book with my direct supervisor, who used that information to tell the other executives that I was unhappy. Now they have posted my job position and are looking for my replacement before I have found another job. I was going to school to try to finish my degree; I will have to withdraw from my classes, as I can't find many companies willing to have someone go to school. I should have just kept my mouth shut and been miserable, then my job wouldn't be evaporating beneath my feet. To be clear, I am applying to everything I can find that is even close to being relevant to my skill set, hoping I don't financially ruin my family... at least they didn't tell me yesterday, on my birthday. TL;DR: Unless you have a good savings account, pretend to be happy at work; otherwise you could lose your job before you have another lined up. submitted by /u/computergeekguy
My company is coming to an inflection point. We are approaching $1B in revenue due to making some really cool products and winning some large-dollar contracts to provide them. I say this, yet our IT department is 5 people. Each product team buys off-the-shelf crap without any knowledge of each other, slaps it together, and then at some point in the future, when it breaks catastrophically, they call my team to un-fuck it. We have a ton of users, and a ton of people who wish to use the things we make (which are primarily focused around very high-tech stuff), and yet... every time I try to pin down management on things like:
- a 1-, 3-, and 5-year plan for supporting programs
- architecture of upcoming product lines, and how to tie them together
- product support and O&M (especially user and developer support)
- career advancement for my other four guys
- how to enforce standards across programs when it comes to providing solutions
- how to do budgeting and time so that each guy isn't working 120 hours one week and 25 the next
I get NOTHING. It's like it doesn't compute. We have an entire organization of high-level engineers (electrical, mechanical, RF, etc.) with all these kinds of things defined, but when it comes to the tech dudes (who, let me say, come from diverse backgrounds, mostly due to my choosing to hire a well-rounded team, and are paid well), we are considered super-generalists: must know everything about everything. No slip time. No learning time. No downtime. It's like working for a badly managed MSP, but we're internal employees! To clarify, I am not a manager at all. I just don't know what to do. Some of the best people in the world work here, but it seems like my career field has fallen through the cracks, and the company either doesn't see the value, or does and has chosen not to invest. I just see the incoming tsunami, and I want to start building reinforcements before it hits. So, help? Thoughts? Signed, Drowning IT Lead. submitted by /u/NighTborn3
I'm an IT consultant. Most of our company apps are web-based or have a supported Linux version, and a good portion of our other engineers use MacBooks. I did as well for several years before going back to Windows. Looking for a change; I didn't care much for my last MacBook (the keyboard was garbage). I'm considering loading Ubuntu or something on my X1. Why is this a bad idea? submitted by /u/Red_Pretense_1989
What practices and platforms does everyone put in place for the Mac users in your Windows server environment? We have about 200 users, only 6 of whom have Apple computers. These users are on the design/marketing sort of teams and are heavy on media files and Adobe Creative Cloud app usage. Shared file storage for them is on a Server 2019 file server. Common complaints we hear:
- slower-than-expected file transfer or opening times
- miscellaneous issues with Adobe programs that, going by those forums, may or may not come down to the eternal struggle of Macs in Windows environments
- sporadic instances of files not behaving as expected
The intermittent nature of most of the complaints makes them such a pain to deal with. I've heard mentions of Acronis Files Connect; is that a commonly used solution? submitted by /u/SuperSuiza