DevOps Interviews Question and Answers and Scripts


Below are several dozen DevOps interview questions, answers, and scripts to help you get into the top corporations in the world, including FAANGM (Facebook, Apple, Amazon, Netflix, Google and Microsoft).

Credit: Steve Nouri – Follow Steve Nouri for more AI and Data science posts:

Deployment

What is a Canary Deployment?

A canary deployment, or canary release, allows you to roll out your features to only a subset of users as an initial test to make sure nothing else in your system broke.
The initial steps for implementing canary deployment are:
1. create two clones of the production environment,
2. have a load balancer that initially sends all traffic to one version,
3. create new functionality in the other version.
When you deploy the new software version, you shift some percentage – say, 10% – of your user base to the new version while maintaining 90% of users on the old version. If that 10% reports no errors, you can roll it out to gradually more users, until the new version is being used by everyone. If the 10% has problems, though, you can roll it right back, and 90% of your users will have never even seen the problem.
Canary deployment benefits include zero downtime, easy rollout and quick rollback – plus the added safety from the gradual rollout process. It also has some drawbacks – the expense of maintaining multiple server instances, the difficult clone-or-don’t-clone database decision.
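For example, the traffic split can be done at the load balancer. A minimal sketch with nginx weighted upstreams, assuming two hypothetical backend hosts app-v1 and app-v2:

upstream app {
    server app-v1.internal weight=9;   # 90% of traffic stays on the current version
    server app-v2.internal weight=1;   # 10% canary traffic goes to the new version
}
server {
    location / {
        proxy_pass http://app;
    }
}

Increasing the canary's weight gradually shifts more users to the new version; removing the canary server from the upstream rolls everyone back.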

Typically, software development teams implement blue/green deployment when they’re sure the new version will work properly and want a simple, fast strategy to deploy it. Conversely, canary deployment is most useful when the development team isn’t as sure about the new version and they don’t mind a slower rollout if it means they’ll be able to catch the bugs.

What is a Blue Green Deployment?

Reference: Blue Green Deployment

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.
At any time, only one of the environments is live, with the live environment serving all production traffic.
For this example, Blue is currently live, and Green is idle.
As you prepare a new version of your model, deployment and the final stage of testing takes place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the model in Green, you switch the router, so all incoming requests now go to Green instead of Blue. Green is now live, and Blue is idle.
This technique can eliminate downtime due to app deployment and reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
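A minimal sketch of the router switch, assuming nginx fronts both environments and blue.conf / green.conf are hypothetical per-environment configs:

# point the live config at Green and reload the router
ln -sfn /etc/nginx/environments/green.conf /etc/nginx/conf.d/live.conf
nginx -s reload
# rolling back is the same command pointing at blue.conf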

How do you do a software release?

There are some steps to follow (a command-level sketch follows the list):
• Create a check list
• Create a release branch
• Bump the version
• Merge release branch to master & tag it.
• Use a Pull request to merge the release merge
• Deploy master to Prod Environment
• Merge back into develop & delete release branch
• Change log generation
• Communicating with stack holders
• Grooming the issue tracker
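A rough sketch of these steps, assuming a Gitflow-style repository (branch names and version numbers are examples):

git checkout -b release/1.2.0 develop        # create the release branch
# bump the version and update the change log, then commit
git checkout master
git merge --no-ff release/1.2.0              # in practice, via a pull request
git tag -a v1.2.0 -m "Release 1.2.0"         # tag the release
git push origin master --tags                # deploy master to the Prod environment from here
git checkout develop
git merge --no-ff release/1.2.0              # merge back into develop
git branch -d release/1.2.0                  # delete the release branch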

How to automate the whole build and release process?

• Check out a set of source code files.
• Compile the code and report on progress along the way.
• Run automated unit tests against successful compiles.
• Create an installer.
• Publish the installer to a download site, and notify teams that the installer is available.
• Run the installer to create an installed executable.
• Run automated tests against the executable.
• Report the results of the tests.
• Launch a subordinate project to update standard libraries.
• Promote executables and other files to QA for further testing.
• Deploy finished releases to production environments, such as Web servers or CD
manufacturing.
The above process can be automated with Jenkins by creating jobs for each step, as sketched below.
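A minimal declarative Jenkinsfile sketch for part of such a pipeline; the Maven commands and deploy script are assumptions, not a prescribed setup:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }                 // check out the source code
        }
        stage('Build & Unit Test') {
            steps { sh 'mvn -B clean package' }    // compile and run unit tests
        }
        stage('Publish') {
            steps { archiveArtifacts artifacts: 'target/*.jar' }
        }
        stage('Deploy to QA') {
            steps { sh './deploy.sh qa' }          // hypothetical deploy script
        }
    }
    post {
        always { junit 'target/surefire-reports/*.xml' }   // report test results
    }
}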

Did you ever participated in Prod Deployments? If yes what is the procedure?

• Preparation & Planning : What kind of system/technology was supposed to run on what kind of machine
• The specifications regarding the clustering of systems
• How all these stand-alone boxes were going to talk to each other in a foolproof manner
• Production setup should be documented to bits. It needs to be neat, foolproof, and understandable.
• It should have all system configurations, IP addresses, system specifications, and installation instructions.
• It needs to be updated as & when any change is made to the production environment of the system

Devops Tools and Concepts

What is DevOps? Why do we need DevOps? Mention the key aspects or principle behind DevOps?

By the name DevOps, it’s very clear that it’s a collaboration of Development as well as Operations. But one should know that DevOps is not a tool, software, or framework; DevOps is a combination of tools which helps with the automation of the whole infrastructure.
DevOps is basically an implementation of Agile methodology on the Development side as well as the Operations side.

We need DevOps to fulfil the need to deliver more applications, faster and better, to meet the growing demands of users. DevOps helps deployments happen really fast compared to any other traditional approach.

The key aspects or principles behind DevOps are:

  • Infrastructure as a Code
  • Continuous Integration
  • Continuous Deployment
  • Automation
  • Continuous Monitoring
  • Security

Popular tools for DevOps are:

  • Git
  • AWS (CodeCommit, CloudFormation, CodePipeline, CodeBuild, CodeDeploy, SAM)
  • Jenkins
  • Ansible
  • Puppet
  • Nagios
  • Docker
  • ELK (Elasticsearch, Logstash, Kibana)

Can we consider DevOps as Agile methodology?

Of course we can! The only difference between Agile methodology and DevOps is that Agile methodology is implemented only for the development section, while DevOps implements agility on both the development and the operations sections.

What are some of the most popular DevOps tools?
Selenium
Puppet
Chef
Git
Jenkins
Ansible
Docker

What is the job Of HTTP REST API in DevOps?

DevOps is centred on automating your infrastructure and promoting changes through a pipeline of stages: every CI/CD pipeline has stages such as build, test, sanity test, UAT and deployment to the production environment. Each stage uses different tools and a different technology stack, so there has to be a way to integrate the various tools into a complete toolchain. That is where the HTTP API comes in: each tool communicates with the other tools using its API, and users can also use SDKs (such as Boto for Python, which talks to the AWS APIs) for automation based on events. Nowadays pipelines are mostly event-driven rather than batch processing.

What is Scrum?

Scrum is basically used to divide your complex software and product development tasks into smaller chunks, using iterations and incremental practices. Each iteration is typically two weeks long. Scrum consists of three roles: Product Owner, Scrum Master and Team.

What are Micro services, and how they control proficient DevOps rehearses?

In conventional architecture, each application is a monolith: it is developed by a group of developers and deployed as a single application on many machines, exposed to the outside world using load balancers. Microservices means breaking your application into small pieces, where each piece serves a distinct function needed to complete a single transaction. By breaking the application up, developers can also be organised into small groups, and each piece of the application may follow different guidelines for an efficient development phase; because of agile development, each service uses REST APIs (or message queues) to communicate with the other services.
The build and release of one non-robust piece does not affect the whole architecture; instead, only some functionality is lost. That is what enables efficient and faster CI/CD pipelines and DevOps practices.

What is Continuous Delivery?

Continuous Delivery is an extension of Continuous Integration which primarily serves to get the features that developers are developing out to end users as soon as possible.
During this process, the build passes through several stages of QA, Staging, etc. before delivery to the PRODUCTION system.

What is Puppet?

Puppet is a configuration management tool used to automate administration tasks.

What is Configuration Management?

Configuration Management is a systems engineering process. Applied over the life cycle of a system, configuration management provides visibility and control of its performance, functional and physical attributes, recording their status in support of Change Management.

Software Configuration Management Features are:

• Enforcement
• Cooperating Enablement
• Version Control Friendly
• Enable Change Control Processes


What is Vagrant and what are its uses?

Vagrant originally used VirtualBox as the hypervisor for virtual environments, and it now also supports KVM (Kernel-based Virtual Machine).
Vagrant is a tool for creating and managing environments for testing and developing software.

What’s a PTR in DNS?

A Pointer (PTR) record is used for reverse DNS (Domain Name System) lookups.

What testing is necessary to ensure a new service is ready for production?

Continuous testing

What is Continuous Testing?

It is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build.

What are the key elements of continuous testing?

Risk assessment, policy analysis, requirements traceability, advanced analysis, test optimization, and service virtualization.

How does HTTP work?

The HTTP protocol works in a client-server model like most other protocols. The web browser from which a request is initiated is called the client, and the web server software that responds to that request is called the server. The World Wide Web Consortium and the Internet Engineering Task Force are the two important bodies behind the standardization of the HTTP protocol.

What is IaC? How you will achieve this?

Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning as the DevOps team uses for source code. This can be achieved using tools such as Chef, Puppet, Ansible, CloudFormation, etc.
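A minimal CloudFormation sketch of the idea; the S3 bucket below is just a hypothetical resource, kept in version control like any other source file:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-artifact-bucket-example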

What are patterns and anti-patterns of software delivery and deployment?


Version Control

What is a version control system?

Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work.
Some of the features of a VCS are as follows:
• Allows developers to work simultaneously
• Does not allow overwriting of each other’s changes
• Maintains the history of every version
There are two types of Version Control Systems:
1. Centralized Version Control Systems, e.g. SVN
2. Distributed/Decentralized Version Control Systems, e.g. Git, Bitbucket

What is Git and explain the difference between Git and SVN?

Git is a source code management (SCM) tool which handles small as well as large projects with efficiency.
It is basically used to store our repositories on remote servers such as GitHub.

 

GIT vs SVN
• Git is a decentralized version control tool; SVN is a centralized version control tool.
• Git keeps the local repo and the full history of the whole project on every developer’s hard drive, so if there is a server outage you can easily recover from a teammate’s local Git repo; SVN relies only on the central server to store all versions of the project files.
• Push and pull operations are fast in Git; they are slower in SVN.
• Git belongs to the 3rd generation of version control tools; SVN belongs to the 2nd generation.
• Git client nodes can share entire repositories on their local systems; in SVN, version history is stored only in the server-side repository.
• Git commits can be done offline too; SVN commits can be done only online.
• In Git, nothing is shared automatically until you push; in SVN, work is shared automatically by commit.

Describe branching strategies?

Feature branching
This model keeps all the changes for a feature inside of a branch. When the feature branch is fully tested and validated by automated tests, the branch is then merged into master.

Task branching
In this task branching model each task is implemented on its own branch with the task key included in the branch name. It is quite easy to see which code implements which task, just look for the task key in the branch name.

Release branching
Once the develop branch has acquired enough features for a release, then we can clone that branch to form a Release branch. Creating this release branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it’s ready to ship, the release gets merged into master and then tagged with a version number. In addition, it should be merged back into develop branch, which may have
progressed since the release was initiated earlier.
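A short sketch of a feature/task branch flow (branch and task names are examples; this variant merges into develop, while some teams merge straight into master):

git checkout -b feature/JIRA-123-login develop   # branch named after the task key
# commit work; automated tests validate the branch
git checkout develop
git merge --no-ff feature/JIRA-123-login         # merge once the feature is validated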


Linux

What is the default file permissions for the file and how can I modify it?

Default file permissions for a newly created file are: rw-r--r-- (644), which comes from the default umask of 022.
If I want to change the default file permissions I need to use the umask command, e.g. umask 077 makes new files readable and writable by the owner only.

What is a  kernel?

A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer.

What is difference between grep -i and grep -v?

-i makes the match case-insensitive; -v inverts the match and prints only the lines that do not match.
Example:  ls | grep -i docker
Dockerfile
docker.tar.gz
ls | grep -v docker
Desktop
Dockerfile
Documents
Downloads
You can’t see anything with name docker.tar.gz

How can you define particular space to the file?

This feature is generally used to give swap space to the server. Let’s say on the machine below I have to create a swap space of 1 GB; then:
dd if=/dev/zero of=/swapfile1 bs=1G count=1
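The file then still has to be formatted and enabled as swap, roughly:

mkswap /swapfile1     # set up the file as swap space
swapon /swapfile1     # enable it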

What is concept of sudo in linux?

Sudo(superuser do) is a utility for UNIX- and Linux-based systems that provides an efficient way to give specific users permission to use specific system commands at the root (most powerful) level of the system.

What are the checks to be done when a Linux build server become suddenly slow?

Perform a check on the following items:
1. Application Level Troubleshooting: check the various log files (application server logs, WebLogic logs, web server logs, application log file, HTTP logs) to find whether there are any issues in server receive or response time, i.e. slowness. Check for any memory leaks in applications.
2. System Level Troubleshooting: perform a check on disk space, RAM and I/O read-write issues.
3. Dependent Services Troubleshooting: check whether there are any issues with the network, antivirus, firewall, or SMTP server response time.

Jenkins

What is Jenkins?

Jenkins is an open source continuous integration tool written in the Java language. It keeps track of the version control system and initiates and monitors a build whenever any change occurs. It monitors the whole process and provides reports and notifications to alert the concerned team.

What is the difference between Maven, Ant and Jenkins?

Maven and Ant are Build Technologies whereas Jenkins is a continuous integration(CI/CD) tool

What is continuous integration?

When multiple developers or teams are working on different segments of the same web application, we need to perform an integration test by integrating all the modules. To do that, an automated process for each piece of code is performed on a daily basis so that all your code gets tested. This whole process is termed continuous integration.

What are the advantages of Jenkins?

• Bug tracking is easy at early stage in development environment.
• Provides support for a very large number of plugins.
• Iterative improvement to the code, code is basically divided into small sprints.
• Build failures are caught at the integration stage.
• For each code commit changes an automatic build report notification get generated.
• To notify developers about build report success or failure, it can be integrated with LDAP mail server.
• Achieves continuous integration agile development and test-driven development environment.
• With simple steps, maven release project can also be automated.

Which SCM tools does Jenkins supports?

Source code management tools supported by Jenkins are below:
• AccuRev
• CVS
• Subversion
• Git
• Mercurial
• Perforce
• Clearcase
• RTC

 

I have 50 jobs in the Jenkins dashboard and I want to build all the jobs at once

In Jenkins there is a build trigger called "Build after other projects are built". We can provide the job names there, and if one parent job runs then it will automatically run all the other jobs. Or we can use Pipeline jobs.

How can I integrate all the tools with Jenkins?

Navigate to Manage Jenkins and then Global Tool Configuration; there you have to provide all the details such as the Git URL, Java version, Maven version, paths, etc.

How to install Jenkins via Docker?

The steps are:
• Open up a terminal window.
• Download the jenkinsci/blueocean image & run it as a container in Docker using the
following docker run command:

• docker run -u root --rm -d --name jenkins-blueocean -p 8080:8080 -p 50000:50000 -v jenkinsdata:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean
• Proceed to the Post-installation setup wizard 
• Accessing the Jenkins/Blue Ocean Docker container:

docker exec -it jenkins-blueocean bash
• Accessing the Jenkins console log through Docker logs:

docker logs <docker-container-name>
• Accessing the Jenkins home directory:
docker exec -it <docker-container-name> bash

Bash – Shell scripting

Write a shell script to add two numbers

echo "Enter no 1"
read a
echo "Enter no 2"
read b
c=$(expr $a + $b)
echo "$a + $b = $c"

How to get a file that consists of last 10 lines of the some other file?

tail -10 filename > newfilename

How to check the exit status of the commands?

echo $?

How to get the information from file which consists of the word “GangBoard”?

grep “GangBoard” filename

How to search the files with the name of “GangBoard”?

find / -type f -name “*GangBoard*”

Write a shell script to print only prime numbers?

DevOps script to print prime numbers
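A minimal trial-division sketch (the upper limit is an assumed positional parameter):

#!/bin/bash
limit=${1:-100}                       # print primes up to $1, default 100
for ((n=2; n<=limit; n++)); do
  is_prime=1
  for ((i=2; i*i<=n; i++)); do
    if ((n % i == 0)); then is_prime=0; break; fi
  done
  ((is_prime)) && echo "$n"
done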

How to pass the parameters to the script and how can I get those parameters?

Scriptname.sh parameter1 parameter2
Use $1, $2, ... to read individual parameters, or $* (or "$@") to get all of them.

 

 

Monitoring – Refactoring

My application is not coming up for some reason? How can you bring it up?

We need to check the following:
• Network connection
• The Web Server is not receiving users’ requests
• Checking the logs
• Checking the process id’s whether services are running or not
• The Application Server is not receiving user’s request(Check the Application Server Logs and Processes)
• A network level ‘connection reset’ is happening somewhere.

What is multifactor authentication? What is the use of it?

Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction.

• Security for every enterprise user — end & privileged users, internal and external
• Protect across enterprise resources — cloud & on-prem apps, VPNs, endpoints, servers,
privilege elevation and more
• Reduce cost & complexity with an integrated identity platform

I want to copy the artifacts from one location to another location in cloud. How?

Create two S3 buckets, one to use as the source, and the other to use as the destination and then create policies.
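With the AWS CLI this can then be done in one command once the buckets and policies exist (bucket names below are placeholders):

aws s3 sync s3://source-artifacts-bucket s3://destination-artifacts-bucket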

How to delete log files older than 10 days?

find -mtime +10 -name “*.log” -exec rm -f {} \; 2>/dev/null

Ansible

What are the Advantages of Ansible?

• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Good performance
• Idempotent
• Very Easy to learn
• Declarative not procedural

What’s the use of Ansible?

Ansible is mainly used in IT infrastructure to manage or deploy applications to remote nodes. Let’s say we want to deploy one application to hundreds of nodes by just executing one command; Ansible is what actually comes into the picture, but you should have some knowledge of Ansible scripts to understand or execute the same.
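A minimal playbook sketch; the host group and package name are assumptions:

- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

Running ansible-playbook -i inventory site.yml applies this to every host in the webservers group.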

What are the Pros and Cons of Ansible?

Pros:
1. Open Source
2. Agentless
3. Improved efficiency, reduced cost
4. Less Maintenance
5. Easy to understand yaml files
Cons:
1. Underdeveloped GUI with limited features
2. Increased focus on orchestration over configuration management
3. SSH communication slows down in scaled environments

What is the difference among chef, puppet and ansible?

• Interoperability: Ansible supports Windows nodes, but the control server should be Linux/Unix; Chef and Puppet work only on Linux/Unix.
• Configuration language: Ansible uses YAML (Python); Chef uses a Ruby DSL; Puppet uses the Puppet DSL.
• Availability: Ansible has a single active node; Chef has a primary server and a backup server; Puppet has a multi-master architecture.

How to access variable names in Ansible?

Using hostvars method we can access and add the variables like below

{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}

Docker

What is Docker?

Docker is a containerization technology that packages your application and all its dependencies together in the form of Containers to ensure that your application works seamlessly in any environment.

What is Docker image?

Docker image is the source of Docker container. Or in other words, Docker images are used to create containers.

What is a Docker Container?

Docker Container is the running instance of Docker Image

How to stop and restart the Docker container?

To stop the container: docker stop <container-id>
Now to restart the Docker container: docker restart <container-id>

What platforms does Docker run on?

Docker runs on only Linux and Cloud platforms:
• Ubuntu 12.04 LTS+
• Fedora 20+
• RHEL 6.5+
• CentOS 6+
• Gentoo
• ArchLinux
• openSUSE 12.3+
• CRUX 3.0+

Cloud:
• Amazon EC2
• Google Compute Engine
• Microsoft Azure
• Rackspace

Note that Docker is not typically run on Windows or Mac for production, as there is no native support; you can, however, use Docker Desktop on Windows or Mac for development and testing purposes.

What are the tools used for docker networking?

For Docker networking we generally use Kubernetes and Docker Swarm.

What is docker compose?

Let’s say you want to run multiple Docker containers; in that case you create a docker-compose file and type the command docker-compose up. It will run all the containers mentioned in the docker-compose file.
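A minimal docker-compose.yml sketch (the services and images are just examples):

version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine

docker-compose up then starts both containers together; docker-compose down stops and removes them.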

How to deploy docker container to aws?

Amazon provides a service called Amazon Elastic Container Service (ECS); by creating and configuring task definitions and services, we can launch our applications there.

 

What is the fundamental disadvantage of Docker containers?

Data inside a container lives only as long as the container: once a container is destroyed, you cannot recover any data inside it, the data is lost forever. Persistent storage for data inside containers has to be done using volumes mounted to an external source such as the host machine or an NFS driver.

What are Docker Engine and Docker Compose?

Docker Engine talks to the Docker daemon inside the machine and creates the runtime environment and process for any container; Docker Compose links several containers together to form a stack, used for creating application stacks like LAMP, WAMP, XAMPP.

In which modes can a container be run?

A Docker container can be run in two modes:
Attached: it runs in the foreground of the system you are running it on, and gives a terminal inside the container when the -t option is used; every log is redirected to the stdout screen.
Detached: this mode is typically used in production, where the container runs as a background process and all output inside the container is redirected to log files under /var/lib/docker/containers/<container-id>/, which can be viewed with the docker logs command.

What will the output of the docker inspect command be?

docker inspect <container-id> gives output in JSON format, which contains details like the IP address of the container on the Docker virtual bridge, volume mount information, and every other piece of host (or) container specific information such as the underlying storage driver and log driver used.
docker inspect [OPTIONS] NAME|ID [NAME|ID…] Options:
• --format, -f: format the output using the given Go template
• --size, -s: display total file sizes if the type is container
• --type: return JSON for a specified type

What is Docker Swarm?

A group of machines running Docker Engine can be clustered and maintained as a single system, with resources shared by the containers; the Docker Swarm manager schedules a Docker container on any of the machines in the cluster according to resource availability.
docker swarm init can be used to initialize a swarm cluster, and docker swarm join run on a client with the manager IP joins that node into the swarm cluster.

What are Docker volumes and what sort of volume should be used to achieve persistent storage?

Docker volumes are filesystem mount points created by the user for a container, and a volume can be used by multiple containers. There are different sorts of volume mounts available: empty dir, host (bind) mounts, AWS EBS-backed volumes, Azure volumes, Google Cloud, or even NFS and CIFS filesystems. A volume should be mounted to one of these external drives to achieve persistent storage, because files inside a container live only as long as the container is present; if the container is deleted, the data is lost.

How to version control Docker images?

Docker images can be version controlled using tags: you can assign a tag to any image using the docker tag <image-id> command. If you push to a Docker registry without tagging, the default tag latest is assigned; even if an image tagged latest is already present, latest is reassigned to the most recently pushed image.

 

 

What is difference between docker image and docker container?

Docker image is a read-only template that contains the instructions for a container to start.
Docker container is a runnable instance of a docker image.

What is Application Containerization?

It is a process of OS Level virtualization technique used to deploy the application without launching the entire VM for each application where multiple isolated applications or services can access the same Host and run on the same OS.

What is the syntax for building docker image?

docker build -f <Dockerfile> -t imagename:version .
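A minimal Dockerfile to build with that command; the base image and application are assumptions:

FROM python:3.9-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt    # install the app's dependencies
CMD ["python", "app.py"]               # default command when a container starts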

How do you run a docker image?

docker run -dt --restart=always -p <hostport>:<containerport> -h <hostname> -v <hostvolume>:<containervolume> imagename:version

How to log into a container?

docker exec -it <container-name> /bin/bash

Git

What does the commit object contain?

Commit object contain the following components:
It contains a set of files, representing the state of the project at a given point in time, and references to parent commit objects.
An SHA-1 name: a 40-character string that uniquely identifies the commit object (also called the hash).

Explain the difference between git pull and git fetch?

Git pull command basically pulls any new changes or commits from a branch from your central repository and updates your target branch in your local repository.
Git fetch is also used for the same purpose, but it is slightly different from git pull. When you trigger a git fetch, it pulls all new commits from the desired branch and stores them in a new branch in your local repository. If we want to reflect these changes in the target branch, git fetch must be followed by a git merge. The target branch will only be updated after merging the fetched branch. Just to make it easy for us, remember the equation below:
Git pull = git fetch + git merge

How do we know in Git if a branch has already been merged into master?

git branch --merged
The above command lists the branches that have been merged into the current branch.
git branch --no-merged
This command lists the branches that have not been merged.

What is ‘Staging Area’ or ‘Index’ in GIT?

Before committing a file, it must be formatted and reviewed in an intermediate area known as ‘Staging Area’ or ‘Indexing Area’. #git add

What is Git Stash?

Let’s say you’ve been working on part of your project, things are in a messy state and you want to switch branches for some time to work on something else. The problem is, you don’t want to commit your half-done work just so you can get back to this point later. The answer to this issue is Git stash.
Git Stashing takes your working directory that is, your modified tracked files and staged changes and saves it on a stack of unfinished changes that you can reapply at any time.
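A typical sequence looks roughly like this (branch name is an example):

git stash            # save the half-done work
git checkout hotfix  # switch branches and do the other work
git checkout -       # come back to the original branch
git stash pop        # reapply the stashed changes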

What is Git stash drop?

Git ‘stash drop’ command is basically used to remove the stashed item. It will basically remove the last added stash item by default, and it can also remove a specific item if you include it as an argument.
I have provided an example below:
If you want to remove a particular stash item from the list of stashed items you can use the below commands:
git stash list: it will display the list of stashed items as follows:
stash@{0}: WIP on master: 049d080 added the index file
stash@{1}: WIP on master: c265351 Revert “added files”
stash@{2}: WIP on master: 13d80a5 added number to log
git stash drop stash@{1}: it will remove that particular item from the stash list.

What is the function of ‘git config’?

Git uses our username to associate commits with an identity. The git config command can be used to change our Git configuration, including your username.
Suppose you want to give a username and email id to associate commit with an identity so that you can know who has made a commit. For that I will use:
git config --global user.name "Your Name": This command will add your username.
git config --global user.email "Your E-mail Address": This command will add your email id.

How can you create a repository in Git?

To create a repository, you must create a directory for the project if it does not exist, then run command “git init”. By running this command .git directory will be created inside the project directory.

What language is used in Git?

Git is written in the C language, and since it is written in C it is very fast and reduces the overhead of runtimes.

What is SubGit?

SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository and uses both Subversion and Git if you like.

How can you clone a Git repository via Jenkins?

First, we must enter the e-mail and user name for your Jenkins system, then switch into your job directory and execute the “git config” command.

What are the advantages of using Git?

1. Data redundancy and replication
2. High availability
3. Only one .git directory per repository
4. Superior disk utilization and network performance
5. Collaboration friendly
6. Git can be used for any sort of project.

What is git add?

It adds the file changes to the staging area

What is git commit? 

Records the changes from the staging area (index) into the local repository as a new commit on HEAD

What is git push?

Sends the changes to the remote repository

What is git checkout?

Switch branch or restore working files

What is git branch?

Creates a branch

What is git fetch?

Fetch the latest history from the remote server and updates the local repo

What is git merge?

Joins two or more branches together

What is git pull?

Fetch from and integrate with another repository or a local branch (git fetch + git merge)

What is git rebase?

Process of moving or combining a sequence of commits to a new base commit

What is git revert?

To revert a commit that has already been published and made public

What is git clone?

Clones the git repository and creates a working copy in the local machine

How can I modify the commit message in git?

I have to use following command and enter the required message.
git commit --amend

How you handle the merge conflicts in git

Follow the steps
1. Create Pull request
2. Modify according to the requirement by sitting with developers
3. Commit the correct file to the branch
4. Merge the current branch with master branch.

What is Git command to send the modifications to the master branch of your remote repository

Use the command “git push origin master”

NOSQL

What are the benefits of NoSQL database on RDBMS?

Benefits:
1. ETL overhead is very low
2. Support for semi-structured text is provided
3. Changes over time are handled
4. Key-value store functionality
5. The ability to scale horizontally
6. Many data structures are provided
7. Vendors may be selected

Maven

What is Maven?

Maven is a DevOps tool used for building Java applications which helps the developer through the entire process of a software project. Using Maven, you can compile the source code, perform functional and unit testing, and upload packages to remote repositories.

Numpy

What is Numpy

There are many packages in Python, and NumPy (Numerical Python) is one of them. It is useful for scientific computing, containing a powerful n-dimensional array object, and it provides tools to integrate with C, C++ and so on. NumPy is a package library for Python adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions: linear algebra, statistics, polynomials, sorting and searching, financial functions, etc. In simple words, NumPy arrays are an optimized alternative to Python lists.

Why is python numpy better than lists?

Python numpy arrays should be considered instead of a list because they are fast, consume less memory and convenient with lots of functionality.

Describe the map function in Python?

The Map function executes the function given as the first argument on all the elements of the iterable given as the second argument.

How to generate an array of ‘100’ random numbers sampled from a standard normal distribution using Numpy

import numpy as np
a = np.random.randn(100)
print(type(a))
print(a)
 

will create 100 random numbers generated from standard normal
distribution with mean 0 and standard deviation 1.


How to count the occurrence of each value in a numpy array?

Use numpy.bincount()
>>> arr = numpy.array([0, 5, 5, 0, 2, 4, 3, 0, 0, 5, 4, 1, 9, 9])
>>> numpy.bincount(arr)
The argument to bincount() must consist of booleans or non-negative integers. Negative integers are invalid.

Output: [4 1 1 1 2 3 0 0 0 2]

Does Numpy Support Nan?

nan, short for “not a number”, is a special floating point value defined by the IEEE-754
specification. Python numpy supports nan but the definition of nan is more system
dependent and some systems don’t have an all round support for it like older cray and vax
computers.

What does ravel() function in numpy do? 

It returns a contiguous flattened one-dimensional array, i.e. it flattens a multi-dimensional array into 1-D.

How to remove from one array those items that exist in another? 

>>> a = np.array([5, 4, 3, 2, 1])
>>> b = np.array([4, 8, 9, 10, 1])
# From 'a' remove all of 'b'
>>> np.setdiff1d(a,b)
# Output:
>>> array([2, 3, 5])

How to reverse a numpy array in the most efficient way?

>>> import numpy as np
>>> arr = np.array([9, 10, 1, 2, 0])
>>> reverse_arr = arr[::-1]

How to calculate percentiles when using numpy?

>>> import numpy as np
>>> arr = np.array([11, 22, 33, 44 ,55 ,66, 77])
>>> perc = np.percentile(arr, 40) #Returns the 40th percentile
>>> print(perc)

Output:  37.400000000000006

What Is The Difference Between Numpy And Scipy?

NumPy would contain nothing but the array data type and the most basic operations:
indexing, sorting, reshaping, basic element wise functions, et cetera. All numerical code
would reside in SciPy. SciPy contains more fully-featured versions of the linear algebra
modules, as well as many other numerical algorithms.

What Is The Preferred Way To Check For An Empty (zero Element) Array?

For a numpy array, use the size attribute. The size attribute is helpful for determining the
length of numpy array:
>>> arr = numpy.zeros((1,0))
>>> arr.size

What Is The Difference Between Matrices And Arrays?

Matrices can only be two-dimensional, whereas arrays can have any number of
 dimensions

How can you find the indices of an array where a condition is true?

Given an array arr, the condition arr > 3 returns a boolean array; np.nonzero(arr > 3) (or np.where) then gives the indices where the condition is true, since False is interpreted as 0 in Python and NumPy.
>>> import numpy as np
>>> arr = np.array([[9,8,7],[6,5,4],[3,2,1]])
>>> arr > 3
>>> array([[True, True, True], [ True, True, True], [False, False, False]], dtype=bool)

How to find the maximum and minimum value of a given flattened array?

>>> import numpy as np
>>> a = np.arange(4).reshape((2,2))
>>> max_val = np.amax(a)
>>> min_val = np.amin(a)

Write a NumPy program to calculate the difference between the maximum and the minimum values of a given array along the second axis. 

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.ptp(arr, 1)

Find median of a numpy flattened array

>>> import numpy as np
>>> arr = np.arange(16).reshape((4, 4))
>>> res = np.median(arr)

Write a NumPy program to compute the mean, standard deviation, and variance of a given array along the second axis

>>> import numpy as np
>>> x = np.arange(16)
>>> mean = np.mean(x)
>>> std = np.std(x)
>>> var = np.var(x)

Calculate covariance matrix between two numpy arrays

>>> import numpy as np
>>> x = np.array([2, 1, 0])
>>> y = np.array([2, 3, 3])
>>> cov_arr = np.cov(x, y)

Compute  product-moment correlation coefficients of two given numpy arrays

>>> import numpy as np
>>> x = np.array([0, 1, 3])
>>> y = np.array([2, 4, 5])
>>> cross_corr = np.corrcoef(x, y)

Develop a numpy program to compute the histogram of nums against the bins

>>> import numpy as np
>>> nums = np.array([0.5, 0.7, 1.0, 1.2, 1.3, 2.1])
>>> bins = np.array([0, 1, 2, 3])
>>> np.histogram(nums, bins)

Get the powers of an array values element-wise

>>> import numpy as np
>>> x = np.arange(7)
>>> np.power(x, 3)

Write a NumPy program to get true division of the element-wise array inputs

>>> import numpy as np
>>> x = np.arange(10)
>>> np.true_divide(x, 3)

Pandas

What is a series in pandas?

A Series is defined as a one-dimensional array that is capable of storing various data types. The row labels of the series are called the index. By using a ‘series’ method, we can easily convert the list, tuple, and dictionary into series. A Series cannot contain multiple columns.

What features make Pandas such a reliable option to store tabular data?

Memory Efficient, Data Alignment, Reshaping, Merge and join and Time Series.

What is re-indexing in pandas?

Reindexing is used to conform DataFrame to a new index with optional filling logic. It places NA/NaN in that location where the values are not present in the previous index. It returns a new object unless the new index is produced as equivalent to the current one, and the value of copy becomes False. It is used to change the index of the rows and columns of the DataFrame.

How will you create a series from dict in Pandas?

A Series is defined as a one-dimensional array that is capable of storing various data
types.

import pandas as pd
info = {'x' : 0., 'y' : 1., 'z' : 2.}
a = pd.Series(info)

How can we create a copy of the series in Pandas?

Use the pandas.Series.copy method on a Series instance:
import pandas as pd
ds = pd.Series([2, 4, 6])
ds_copy = ds.copy(deep=True)

 

What is groupby in Pandas?

GroupBy is used to split the data into groups. It groups the data based on some criteria. Grouping also provides a mapping of labels to the group names. It has a lot of variations that can be defined with the parameters and makes the task of splitting the data quick and
easy.

What is vectorization in Pandas?

Vectorization is the process of running operations on the entire array. This is done to
reduce the amount of iteration performed by the functions. Pandas have a number of vectorized functions like aggregations, and string functions that are optimized to operate
specifically on series and DataFrames. So it is preferred to use the vectorized pandas functions to execute the operations quickly.

Different types of Data Structures in Pandas

Pandas provide two data structures, which are supported by the pandas library, Series,
and DataFrames. Both of these data structures are built on top of the NumPy.

What Is Time Series In pandas

A time series is an ordered sequence of data which basically represents how some quantity changes over time. pandas contains extensive capabilities and features for working with time series data for all domains.

How to convert pandas dataframe to numpy array?

The function to_numpy() is used to convert the DataFrame to a NumPy array.
DataFrame.to_numpy(self, dtype=None, copy=False)
The dtype parameter defines the data type to pass to the array and the copy ensures the
returned value is not a view on another array.

Write a Pandas program to get the first 5 rows of a given DataFrame

>>> import pandas as pd
>>> exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas'],}
>>> labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
>>> df = pd.DataFrame(exam_data , index=labels)
>>> df.iloc[:5]

Develop a Pandas program to create and display a one-dimensional array-like object containing an array of data. 

>>> import pandas as pd
>>> pd.Series([2, 4, 6, 8, 10])

Write a Python program to convert a Pandas Series to a Python list and check its type.

>>> import pandas as pd
>>> ds = pd.Series([2, 4, 6, 8, 10])
>>> type(ds)
>>> ds.tolist()
>>> type(ds.tolist())

Develop a Pandas program to add, subtract, multiple and divide two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 9])
>>> sum = ds1 + ds2
>>> sub = ds1 - ds2
>>> mul = ds1 * ds2
>>> div = ds1 / ds2

Develop a Pandas program to compare the elements of the two Pandas Series.

>>> import pandas as pd
>>> ds1 = pd.Series([2, 4, 6, 8, 10])
>>> ds2 = pd.Series([1, 3, 5, 7, 10])
>>> ds1 == ds2
>>> ds1 > ds2
>>> ds1 < ds2

Develop a Pandas program to change the data type of given a column or a Series.

>>> import pandas as pd
>>> s1 = pd.Series(['100', '200', 'python', '300.12', '400'])
>>> s2 = pd.to_numeric(s1, errors='coerce')
>>> s2

Write a Pandas program to convert Series of lists to one Series

>>> import pandas as pd
>>> s = pd.Series([ ['Red', 'Black'], ['Red', 'Green', 'White'] , ['Yellow']])
>>> s = s.apply(pd.Series).stack().reset_index(drop=True)

Write a Pandas program to create a subset of a given series based on value and condition

>>> import pandas as pd
>>> s = pd.Series([0, 1,2,3,4,5,6,7,8,9,10])
>>> n = 6
>>> new_s = s[s < n]
>>> new_s

Develop a Pandas code to alter the order of index in a given series

>>> import pandas as pd
>>> s = pd.Series(data = [1,2,3,4,5], index = ['A', 'B', 'C', 'D', 'E'])
>>> s.reindex(index = ['B', 'A', 'C', 'D', 'E'])

Write a Pandas code to get the items of a given series not present in another given series.

>>> import pandas as pd
>>> sr1 = pd.Series([1, 2, 3, 4, 5])
>>> sr2 = pd.Series([2, 4, 6, 8, 10])
>>> result = sr1[~sr1.isin(sr2)]
>>> result

What is the difference between the two data series df['Name'] and df.loc[:, 'Name']?

First one is a view of the original dataframe and second one is a copy of the original dataframe.

Write a Pandas program to display the most frequent value in a given series and replace everything else as “replaced” in the series.

>>> import pandas as pd
>>> import numpy as np
>>> np.random.RandomState(100)
>>> num_series = pd.Series(np.random.randint(1, 5, [15]))
>>> num_series[~num_series.isin(num_series.value_counts().index[:1])] = 'replaced'

Write a Pandas program to find the positions of numbers that are multiples of 5 of a given series.

>>> import pandas as pd
>>> import numpy as np
>>> num_series = pd.Series(np.random.randint(1, 10, 9))
>>> result = np.argwhere(num_series % 5==0)

How will you add a column to a pandas DataFrame?

# importing the pandas library
>>> import pandas as pd
>>> info = {'one' : pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e']),
'two' : pd.Series([1, 2, 3, 4, 5, 6], index=['a', 'b', 'c', 'd', 'e', 'f'])}
>>> info = pd.DataFrame(info)
# Add a new column to an existing DataFrame object
>>> info['three'] = pd.Series([20, 40, 60], index=['a', 'b', 'c'])

How to iterate over a Pandas DataFrame?

You can iterate over the rows of the DataFrame by using for loop in combination with an iterrows() call on the DataFrame.
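For example, a small sketch (the DataFrame contents are made up):

>>> import pandas as pd
>>> df = pd.DataFrame({'name': ['Anastasia', 'Dima'], 'score': [12.5, 9.0]})
>>> for index, row in df.iterrows():
...     print(index, row['name'], row['score'])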

Python

What type of language is python? Programming or scripting?

Python is capable of scripting, but in general sense, it is considered as a general-purpose
programming language.

Is python case sensitive?

Yes, python is a case sensitive language.

What is a lambda function in python?

An anonymous function is known as a lambda function. This function can have any
number of parameters but can have just one statement.
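For example, a hypothetical two-argument lambda:

>>> add = lambda x, y: x + y
>>> add(2, 3)
5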

What is the difference between range and xrange in python?

xrange and range are exactly the same in terms of functionality. The only difference is that range returns a Python list object and xrange returns an xrange object (this applies to Python 2; in Python 3, range behaves like xrange).

What are docstrings in python?

Docstrings are not actually comments, but they are documentation strings. These
docstrings are within triple quotes. They are not assigned to any variable and therefore,
at times, serve the purpose of comments as well.

Whenever Python exits, why isn’t all the memory deallocated?

Whenever Python exits, especially those Python modules which are having circular
references to other objects or the objects that are referenced from the global namespaces are not always de-allocated or freed. It is impossible to de-allocate those portions of
memory that are reserved by the C library. On exit, because of having its own efficient
clean up mechanism, Python would try to de-allocate/destroy every other object.

What does this mean: *args, **kwargs? And why would we use it?

We use *args when we aren’t sure how many arguments are going to be passed to a function, or if we want to pass a stored list or tuple of arguments to a function. **kwargs is used when we don’t know how many keyword arguments will be passed to a function, or it can be used to pass the values of a dictionary as keyword arguments.
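A small sketch of both in one function:

>>> def demo(*args, **kwargs):
...     print(args, kwargs)
>>> demo(1, 2, key='value')
(1, 2) {'key': 'value'}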

What is the difference between deep and shallow copy?

A shallow copy creates a new object, but it copies only references to the objects contained in the original, so nested objects are shared between the original and the copy.
A deep copy creates a new object and recursively copies the objects found in the original, so the copy is fully independent of the original and changes to one do not affect the other.

Define encapsulation in Python?

Encapsulation means binding the code and the data together. A Python class is an example of encapsulation.

Does python make use of access specifiers?

Python does not deprive access to an instance variable or function. Python lays down the concept of prefixing the name of the variable, function or method with a single or double underscore to imitate the behavior of protected and private access specifiers.

What are the generators in Python?

Generators are a way of implementing iterators. A generator function is a normal function except that it contains yield expression in the function definition making it a generator function.
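A minimal generator sketch:

>>> def count_up_to(n):
...     i = 1
...     while i <= n:
...         yield i
...         i += 1
>>> list(count_up_to(3))
[1, 2, 3]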

Write a Python script to Python to find palindrome of a sequence

a = input("enter sequence")
b = a[::-1]
if a == b:
    print("palindrome")
else:
    print("not palindrome")

How will you remove the duplicate elements from the given list?

The set is another type available in Python. It doesn’t allow duplicates and provides some useful functions to perform set operations like union, difference etc.
>>> list(set(a))

Does Python allow arguments Pass by Value or Pass by Reference?

Neither the arguments are Pass by Value nor does Python supports Pass by reference.
Instead, they are Pass by assignment. The parameter which you pass is originally a reference to the object not the reference to a fixed memory location. But the reference is
passed by value. Additionally, some data types like strings and tuples are immutable whereas others are mutable.

What is slicing in Python?

Slicing in Python is a mechanism to select a range of items from Sequence types like
strings, list, tuple, etc.

Why is the “pass” keyword used in Python?

The “pass” keyword is a no-operation statement in Python. It signals that no action is required. It works as a placeholder in compound statements which are intentionally left blank.

What are decorators in Python?

Decorators in Python are essentially functions that add functionality to an existing function in Python without changing the structure of the function itself. They are represented by the @decorator_name in Python and are called in bottom-up fashion
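A minimal sketch of a decorator that logs calls to the function it wraps:

>>> def log_call(func):
...     def wrapper(*args, **kwargs):
...         print('calling', func.__name__)
...         return func(*args, **kwargs)
...     return wrapper
>>> @log_call
... def greet():
...     print('hello')
>>> greet()
calling greet
hello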

What is the key difference between lists and tuples in python?

The key difference between the two is that while lists are mutable, tuples on the other hand are immutable objects.

What is self in Python?

Self is a keyword in Python used to define an instance or an object of a class. In Python, it is explicitly used as the first parameter, unlike in Java where it is optional. It helps in distinguishing between the methods and attributes of a class from its local variables.

What is PYTHONPATH in Python?

PYTHONPATH is an environment variable which you can set to add additional directories where Python will look for modules and packages. This is especially useful in maintaining Python libraries that you do not wish to install in the global default location.

What is the difference between .py and .pyc files?

.py files contain the source code of a program. Whereas, .pyc file contains the bytecode of your program. We get bytecode after compilation of .py file (source code). .pyc files are not created for all the files that you run. It is only created for the files that you import.

What is namespace in Python?

In Python, every name introduced has a place where it lives and can be hooked for. This is known as namespace. It is like a box where a variable name is mapped to the object placed. Whenever the variable is searched out, this box will be searched, to get the corresponding object.

What is pickling and unpickling?

Pickle module accepts any Python object and converts it into a string representation and dumps it into a file by using the dump function, this process is called pickling. While the process of retrieving original Python objects from the stored string representation is called unpickling.

How is Python interpreted?

Python language is an interpreted language. The Python program runs directly from the source code. It converts the source code that is written by the programmer into an intermediate language, which is again translated into machine language that has to be executed.

Jupyter Notebook

What is the main use of a Jupyter notebook?

Jupyter Notebook is an open-source web application that allows us to create and share codes and documents. It provides an environment, where you can document your code, run it, look at the outcome, visualize data and see the results without leaving the environment.

How do I increase the cell width of the Jupyter/ipython notebook in my browser?

>>> from IPython.core.display import display, HTML
>>> display(HTML("<style>.container { width:100% !important; }</style>"))

How do I convert an IPython Notebook into a Python file via command line?

>> jupyter nbconvert --to script [YOUR_NOTEBOOK].ipynb

How to measure execution time in a jupyter notebook?

>> %%time is an inbuilt magic command

How to run a jupyter notebook from the command line?

>> jupyter nbconvert --to notebook --execute nb.ipynb

How to make inline plots larger in jupyter notebooks?

Use figure size.
>>> fig = plt.figure(figsize=(18, 16), dpi=80, facecolor='w', edgecolor='k')

How to display multiple images in a jupyter notebook?

>>> for ima in images:
...     plt.figure()
...     plt.imshow(ima)

Why is the Jupyter notebook interactive code and data exploration friendly?

The ipywidgets package provides many common user interface controls for exploring code and data interactively.

What is the default formatting option in jupyter notebook?

Default formatting option is markdown

What are kernel wrappers in jupyter?

Jupyter brings a lightweight interface for kernel languages that can be wrapped in Python.
Wrapper kernels can implement optional methods, notably for code completion and code inspection.

What are the advantages of custom magic commands?

Create IPython extensions with custom magic commands to make interactive computing even easier. Many third-party extensions and magic commands exist, for example, the %%cython magic that allows one to write Cython code directly in a notebook.

Is the jupyter architecture language dependent?

No. It is language independent

Which tools allow jupyter notebooks to easily convert to pdf and html?

Nbconvert converts it to pdf and html while Nbviewer renders the notebooks on the web platforms.

What is a major disadvantage of a Jupyter notebook?

It is very hard to run long asynchronous tasks. Less Secure.

In which domain is the jupyter notebook widely used?

It is mainly used for data analysis and machine learning related tasks.

What are alternatives to jupyter notebook?

PyCharm interact, VS Code Python Interactive etc.

Where can you make configuration changes to the jupyter notebook?

In the config file located at ~/.ipython/profile_default/ipython_config.py

Which magic command is used to run python code from jupyter notebook?

%run can execute python code from .py files

How to pass variables across the notebooks in Jupyter?

The %store command lets you pass variables between two different notebooks.
>>> data = 'this is the string I want to pass to different notebook'
>>> %store data
# Stored 'data' (str)
# In new notebook
>>> %store -r data
>>> print(data)

Export the contents of a cell/Show the contents of an external script

Using the %%writefile magic saves the contents of that cell to an external file. %pycat does the opposite and shows you (in a popup) the syntax highlighted contents of an external file.

What inbuilt tool we use for debugging python code in a jupyter notebook?

Jupyter has its own interface for The Python Debugger (pdb). This makes it possible to go inside the function and investigate what happens there.

How to make high resolution plots in a jupyter notebook?

>> %config InlineBackend.figure_format = 'retina'

How can one use latex in a jupyter notebook?

When you write LaTeX in a Markdown cell, it will be rendered as a formula using MathJax.
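
For example, putting the following in a Markdown cell renders as a centered equation:

$$ \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2} $$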

What is a jupyter lab?

JupyterLab is the next-generation user interface for the classic Jupyter Notebook. Users can drag and drop cells, arrange their code workspace, and see live previews. It is still in an early stage of development.

What is the biggest limitation for a Jupyter notebook?

Code versioning, management, and debugging do not scale well in the current Jupyter Notebook.

Cloud Computing

Which are the different layers that define cloud architecture?

Below mentioned are the different layers that are used by cloud architecture:
● Cluster Controller
● SC or Storage Controller
● NC or Node Controller
● CLC or Cloud Controller
● Walrus

Explain Cloud Service Models?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Desktop as a service (DaaS)

What are Hybrid clouds?

Hybrid clouds are made up of both public clouds and private clouds. They are often preferred over either cloud alone because they apply the most robust approach to implementing a cloud architecture.
A hybrid cloud has the features and performance of both private and public clouds. One important feature is that a cloud can be created by one organization while control of it is given to another organization.

Explain Platform as a Service (Paas)?

It is also a layer in the cloud architecture. Platform as a Service provides complete virtualization of the infrastructure layer, making it appear as a single server that is invisible to the outside world.

What is the difference between cloud computing and mobile cloud computing?

Mobile cloud computing and cloud computing share the same underlying concept: applications and data live in the cloud rather than on the device. In mobile cloud computing, those applications run on cloud servers and are accessed from a mobile device, which gives the user the right to access and manage the storage remotely.

What are the security aspects provided with the cloud?

There are three types of cloud computing security:
● Identity Management: authorizes the application services.
● Access Control: permissions are granted so that users can control the access of other users entering the cloud environment.
● Authentication and Authorization: allows only authenticated and authorized users to access the data and applications.

What are system integrators in cloud computing?

System integrators emerged onto the scene in 2006. System integration is the practice of bringing together the components of a system into a whole and making sure that the system performs smoothly.
A person or company that specializes in system integration is called a system integrator.

What is the usage of utility computing?

Utility computing, or The Computer Utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges for specific usage rather than a flat rate.
Utility computing is a plug-in managed by an organization, which decides what type of services have to be deployed from the cloud. It allows users to pay only for what they use.

What are some large cloud providers and databases?

Following are the most used large cloud providers and databases:
– Google BigTable
– Amazon SimpleDB
– Cloud-based SQL

Explain the difference between cloud and traditional data centers.

In a traditional data center, the major drawback is the expenditure. A traditional data center is comparatively expensive due to heating, hardware, and software issues, so not only is the initial cost higher, but the maintenance cost is also a problem.
The cloud, by contrast, can be scaled up when demand increases, and the maintenance issues of running your own data center are not faced in cloud computing.

What is hypervisor in Cloud Computing?

It is a virtual machine monitor that logically manages resources for virtual machines: it allocates, partitions, isolates, or reconfigures them as directed by the virtualization program.
A hardware hypervisor allows multiple guest operating systems to run on a single host system at the same time.

Define what MultiCloud is?

Multicloud computing may be defined as the deliberate use of the same type of cloud services from multiple public cloud providers.

What is a multi-cloud strategy?

The way most organizations adopt the cloud is that they typically start with one provider. They then continue down that path and eventually begin to get a little concerned about being too dependent on one vendor. So they will start entertaining the use of another provider or at least allowing people to use another provider.
They may even use a functionality-based approach. For example, they may use Amazon as their primary cloud infrastructure provider, but they may decide to use Google for analytics, machine learning, and big data. So this type of multi-cloud strategy is driven by sourcing or procurement (and perhaps on specific capabilities), but it doesn’t focus on anything in terms of technology and architecture.

What is meant by Edge Computing, and how is it related to the cloud?

Unlike cloud computing, edge computing is all about the physical location and issues related to latency. Cloud and edge are complementary concepts combining the strengths of a centralized system with the advantages of distributed operations at the physical location where things and people connect.

What are the disadvantages of the SaaS cloud computing layer?

1) Security
Data is stored in the cloud, so security may be an issue for some users; cloud deployment is not inherently more secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility that there may be greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications whose demand response time is in milliseconds.
3) Total Dependency on Internet
Without an internet connection, most SaaS applications are not usable.
4) Switching between SaaS vendors is difficult
Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the other SaaS product.

What is IaaS in Cloud Computing?

IaaS, i.e. Infrastructure as a Service, is also known as Hardware as a Service. In this type of model, an organization's IT infrastructure, such as servers, processing, storage, virtual machines and other resources, is provided as a service. Customers can access the resources easily over the internet using an on-demand, pay-as-you-go model.

Explain what is the use of “EUCALYPTUS” in cloud computing?

EUCALYPTUS is an open-source software infrastructure for cloud computing. It is used to add clusters to a cloud computing platform. With the help of EUCALYPTUS, public, private, and hybrid clouds can be built. An organization can produce its own data centers and allow many other organizations to use their functionality.
When you add a software stack, such as an operating system and applications, on top of the service, the model shifts toward Software as a Service; Microsoft's Windows Azure Platform is often presented this way.

Name the most refined and restrictive service model?

The most refined and restrictive service model is SaaS. When the service requires the consumer to use an entire hardware/software/application stack supplied by the provider, it is using the most refined and restrictive service model.

Which kinds of virtualization are characteristic of cloud computing?

Storage, Application, and CPU virtualization. To deliver these characteristics, resources must be highly configurable and flexible.

What Are Main Features Of Cloud Services?

Some important features of the cloud service are given as follows:
• Accessing and managing the commercial software.
• Centralizing the activities of management of software in the Web environment.
• Developing applications that are capable of managing several clients.
• Centralizing the updating of software, which eliminates the need to download upgrades.

What Are The Advantages Of Cloud Services?

Some of the advantages of cloud service are given as follows:
• Helps utilize investment in the corporate sector and is therefore cost saving.
• Helps in developing scalable and robust applications. Previously, scaling took months, but now it takes far less time.
• Helps in saving time in terms of deployment and maintenance.

Mention The Basic Components Of A Server Computer In Cloud Computing?

The hardware components of a server computer in cloud computing largely match those used in less expensive client computers, although server computers are usually built from higher-grade components. Basic components include the motherboard, memory, processor, network connection, hard drives, video, and power supply.

What are the advantages of auto-scaling?

Following are the advantages of autoscaling
● Offers fault tolerance
● Better availability
● Better cost management

Azure Cloud

Which Services Are Provided By The Windows Azure Operating System?

Windows Azure provides three core services which are given as follows:
• Compute
• Storage
• Management

AWS Cloud

Explain what S3 is?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is "pay as you go."
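
As an illustration, here is a minimal boto3 sketch of storing and retrieving an object (the bucket and key names are made up, and AWS credentials are assumed to be configured):

import boto3

s3 = boto3.client('s3')

# upload an object, then read it back
s3.put_object(Bucket='my-example-bucket', Key='hello.txt', Body=b'hello S3')
obj = s3.get_object(Bucket='my-example-bucket', Key='hello.txt')
print(obj['Body'].read().decode())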

What is AMI?

AMI stands for Amazon Machine Image. It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch instances from as many different AMIs as you need.

Mention what the relationship between an instance and AMI is?

From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and you can interact with it as you would with any computer.

How many buckets can you create in AWS by default?

By default, you can create up to 100 buckets in each of your AWS accounts.

Explain can you vertically scale an Amazon instance? How?

Yes, you can vertically scale an Amazon instance. To do so (see the boto3 sketch below):
● Spin up a new, larger instance than the one you are currently running
● Stop your live instance and detach its root volume
● Note the unique device ID and attach that root volume to your new server
● Start the new instance again
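
A hedged boto3 sketch of those steps (the instance and volume IDs are placeholders; in practice you would also confirm the device name your AMI expects):

import boto3

ec2 = boto3.client('ec2')

OLD_INSTANCE = 'i-0123456789abcdef0'   # placeholder IDs
NEW_INSTANCE = 'i-0fedcba9876543210'
ROOT_VOLUME = 'vol-0123456789abcdef0'

# stop the old instance and free its root volume
ec2.stop_instances(InstanceIds=[OLD_INSTANCE])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[OLD_INSTANCE])
ec2.detach_volume(VolumeId=ROOT_VOLUME, InstanceId=OLD_INSTANCE)
ec2.get_waiter('volume_available').wait(VolumeIds=[ROOT_VOLUME])

# attach the root volume to the larger (stopped) instance and start it
ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=NEW_INSTANCE, Device='/dev/xvda')
ec2.start_instances(InstanceIds=[NEW_INSTANCE])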

Explain what T2 instances are?

T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by the workload.

In VPC with private and public subnets, database servers should ideally be launched into which subnet?

With private and public subnets in VPC, database servers should ideally launch into private subnets.

Mention what the security best practices for Amazon EC2 are?

For Amazon EC2 security best practices, follow these steps (a sketch of one of them follows the list):
● Use AWS Identity and Access Management (IAM) to control access to your AWS resources
● Restrict access by allowing only trusted hosts or networks to access ports on your instance
● Review the rules in your security groups regularly
● Only open up the permissions that you require
● Disable password-based logins for instances launched from your AMI
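
For example, the "trusted hosts or networks" rule can be expressed as a security-group ingress rule; a minimal boto3 sketch (the group ID and CIDR are placeholders):

import boto3

ec2 = boto3.client('ec2')

# allow SSH (port 22) only from a trusted network range
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '203.0.113.0/24', 'Description': 'trusted office network'}],
    }],
)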

Is the property of broadcast or multicast supported by Amazon VPC?

No, Amazon VPC does not currently provide support for broadcast or multicast.

How many Elastic IPs does AWS allow you to create?

Five VPC Elastic IP addresses are allowed per region for each AWS account.

Explain default storage class in S3

The default storage class is S3 Standard, intended for frequently accessed data.

What are the Roles in AWS?

Roles are used to provide permissions to entities that you trust within your AWS account.
Roles are very similar to users; however, with roles you do not need to create a username and password to work with the resources.

What are the edge locations?

An edge location is the place where content is cached. When a user tries to access content, it is automatically served from the nearest edge location.

Explain snowball?

Snowball is a data transport option. It uses secure physical appliances to move large amounts of data into and out of AWS. With the help of Snowball, you can transfer a massive amount of data from one place to another, which helps you reduce networking costs.

What is a redshift?

Redshift is a big data warehouse product. It is a fast, powerful, fully managed data warehouse service in the cloud.

What is meant by subnet?

A subnet is one of the smaller chunks that a large IP address range is divided into.

Can you establish a Peering connection to a VPC in a different region?

Yes, we can establish a peering connection to a VPC in a different region. It is called inter-region VPC peering connection.
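
A minimal boto3 sketch of requesting such a connection (the VPC IDs and regions are placeholders; the peer's owner still has to accept the request):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# request a peering connection from a local VPC to a VPC in another region
resp = ec2.create_vpc_peering_connection(
    VpcId='vpc-0123456789abcdef0',        # requester VPC in us-east-1
    PeerVpcId='vpc-0fedcba9876543210',    # accepter VPC
    PeerRegion='eu-west-1',               # accepter's region
)
print(resp['VpcPeeringConnection']['VpcPeeringConnectionId'])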

What is SQS?

Simple Queue Service, also known as SQS, is a distributed message queuing service that acts as a mediator between two components or endpoints.
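
A short boto3 sketch of the producer/consumer pattern SQS mediates (the queue name is a placeholder):

import boto3

sqs = boto3.client('sqs')

# create a queue, send a message, then receive and delete it
queue_url = sqs.create_queue(QueueName='example-queue')['QueueUrl']
sqs.send_message(QueueUrl=queue_url, MessageBody='hello from the producer')

msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for m in msgs.get('Messages', []):
    print(m['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m['ReceiptHandle'])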

How many subnets can you have per VPC?

You can have 200 subnets per VPC.

What is Amazon EMR?

EMR (Elastic MapReduce) is a managed cluster platform that makes it easy to run big data frameworks such as Apache Hadoop and Apache Spark on Amazon Web Services in order to investigate large amounts of data. You can also prepare data for analytics and business intelligence workloads using Apache Hive and other relevant open-source projects.

What is boot time taken for the instance stored backed AMI?

The boot time for an Amazon instance store-backed AMI is less than 5 minutes.

Do you need an internet gateway to use peering connections?

No, an internet gateway is not needed for VPC peering connections; traffic between peered VPCs stays on the AWS network. An internet gateway is only required when instances need to reach the public internet.

How to connect an EBS volume to multiple instances?

You cannot attach an EBS volume to multiple instances at the same time; a volume attaches to a single instance, although you can attach multiple EBS volumes to a single instance.

What are the different types of Load Balancer in AWS services?

Three types of Load balancer are:
1. Application Load Balancer
2. Classic Load Balancer
3. Network Load Balancer

In which situation you will select provisioned IOPS over standard RDS storage?

You should select provisioned IOPS storage over standard RDS storage when you run I/O-intensive workloads, such as database workloads that need consistent, low-latency performance.

What are the important features of Amazon cloud search?

Important features of Amazon CloudSearch are:
● Boolean searches
● Prefix searches
● Range searches
● Full-text search
● Autocomplete suggestions

Google Cloud Platform

What are the main advantages of using Google Cloud Platform?

Google Cloud Platform is a medium that provides its users access to the best cloud services and features. It is gaining popularity among cloud professionals as well as users for the advantages it offers.
Here are the main advantages of using Google Cloud Platform over others:
● GCP offers much better pricing deals as compared to other cloud service providers
● Google Cloud servers allow you to work from anywhere and have access to your information and data
● For hosting cloud services, GCP has overall better performance and service
● Google Cloud is very fast in providing updates about servers and security in a better and more efficient manner
● The security level of Google Cloud Platform is exemplary; the cloud platform and networks are secured and encrypted with various security measures
If you are going for a Google Cloud interview, you should prepare yourself with enough knowledge of the Google Cloud Platform.

Why should you opt for Google Cloud Hosting?

The reason for opting for Google Cloud Hosting is the advantages it offers. Here are the advantages of choosing Google Cloud Hosting:
● Availability of better pricing plans
● Benefits of live migration of virtual machines
● Enhanced performance and execution
● Commitment to constant development and expansion
● The private network provides efficiency and maximum uptime
● Strong control and security of the cloud platform
● Inbuilt redundant backups ensure data integrity and reliability

What are the libraries and tools for cloud storage on GCP?

At the core level, the XML API and JSON API are available for cloud storage on Google Cloud Platform. Along with these, Google provides the following options to interact with cloud storage:
● Google Cloud Platform Console, which performs basic operations on objects and buckets
● Cloud Storage client libraries, which provide programming support for various languages, including Java, Ruby, and Python
● The gsutil command-line tool, which provides a command-line interface for cloud storage

There are also many third-party libraries and tools, such as the Boto library.
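
As a small illustration, the Cloud Storage client library for Python can be used like this (a sketch; the bucket name is a placeholder and credentials are assumed to be configured):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-example-bucket')

# upload a string as an object, then read it back
blob = bucket.blob('hello.txt')
blob.upload_from_string('hello from Cloud Storage')
print(blob.download_as_text())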

What do you know about Google Compute Engine?

Google Compute Engine is the basic compute component of the Google Cloud Platform.
Google Compute Engine is an IaaS product that offers self-managed and flexible virtual machines hosted on Google's infrastructure. It includes Windows- and Linux-based virtual machines running on KVM, with local and durable (persistent) storage options.
It also includes a REST-based API for control and configuration purposes. Google Compute Engine integrates with GCP technologies such as Google App Engine, Google Cloud Storage, and Google BigQuery in order to extend its computational ability and thus enable more sophisticated and complex applications.

How are the Google Compute Engine and Google App Engine related?

Google Compute Engine and Google App Engine are complementary to each other. Google Compute Engine is the IaaS product, whereas Google App Engine is a PaaS product of Google.
Google App Engine is generally used to run web applications, mobile backends, and line-of-business applications. If you want to keep the underlying infrastructure more in your control, then Compute Engine is a perfect choice. For instance, you can use Compute Engine to implement customized business logic or when you need to run your own storage system.

 

References

Steve Nouri

https://www.edureka.co

https://www.kausalvikash.in

https://www.wisdomjobs.com

https://blog.edugrad.com

https://stackoverflow.com

http://www.ezdev.org

https://www.techbeamers.com

https://www.w3resource.com

https://www.javatpoint.com

https://analyticsindiamag.com

Online Interview Questions

https://www.geeksforgeeks.org

https://www.springpeople.com

https://atraininghub.com

https://www.interviewcake.com

https://www.tutorialspoint.com

https://programmingwithmosh.com

https://www.interviewbit.com

https://www.guru99.com

https://hub.packtpub.com

https://www.dataquest.io

https://www.infoworld.com

10 Commandments of Options Trading/Strategies


Option strategies are the simultaneous, and often mixed, buying or selling of one or more options that differ in one or more of the options’ variables. Call options, simply known as calls, give the buyer a right to buy a particular stock at that option’s strike price. Conversely, put options, simply known as puts, give the buyer the right to sell a particular stock at the option’s strike price. This is often done to gain exposure to a specific type of opportunity or risk while eliminating other risks as part of a trading strategy. A very straightforward strategy might simply be the buying or selling of a single option; however, option strategies often refer to a combination of simultaneous buying and or selling of options.

Options strategies allow traders to profit from movements in the underlying assets based on market sentiment (i.e., bullish, bearish or neutral). In the case of neutral strategies, they can be further classified into those that are bullish on volatility, measured by the lowercase Greek letter sigma (σ), and those that are bearish on volatility. Traders can also profit off time decay, measured by the uppercase Greek letter theta (Θ), when the stock market has low volatility. The option positions used can be long and/or short positions in calls and puts.

Below are the 10 Commandments of Options Trading:

  • Thou shall always take 100% daily gains or 200% all time gains.
  • Do not fall into temptation and buy during the first 30 minutes of market open. (Selling positions is still permitted)
  • Thou shall not buy calls on green days.
  • Thou shall not buy puts on red days.
  • Avoid greed and do not buy consecutive options on 1 company.
  • Give thyself at least 3 weeks' time to play the option.
  • End your suffering and sell if down 50% all time on an option play.
  • Avoid gluttony and do not day trade options. (Swing trades allowed)
  • Be fruitful, multiply earnings and sell covered calls if holding any.
  • Celebrate and binge drink after big gains (or losses)
  • Off topic, but relevant: you absolutely need to be doing a 401k or IRA as well as investing in crypto. 401ks and IRAs offer fantastic tax advantages that straight investing does not, and if you have an employer who matches, you are leaving money on the table by not taking advantage of that; it's foolish. Crypto is great and should definitely be in your portfolio, but it should not be your whole portfolio.
    Sources:
    1- WallStreetBets
    2- Wikipedia

How crypto could change the world and Why Cryptocurrency was invented in the first place.

People used to pay each other in gold and silver. Difficult to transport. Difficult to divide.

Paper money was invented. A claim to gold in a bank vault. Easier to transport and divide.

Banks gave out more paper money than they had gold in the vault. They ran “fractional reserves”. A real money maker. But every now and then, banks collapsed because of runs on the bank.

Central banking was invented. Central banks would be lenders of last resort. Runs on the bank were thus mitigated by banks guaranteeing each other’s deposits through a central bank. The risk of a bank run was not lowered. Its frequency was diminished and its impact was increased. After all, banks remained basically insolvent in this fractional reserve scheme.

Banks would still get in trouble. But now, if one bank got in sufficient trouble, they would all be in trouble at the same time. Governments would have to step in to save them.

All ties between the financial system and gold were severed in 1971 when Nixon decided that the USD would no longer be exchangeable for a fixed amount of gold. This exacerbated the problem, because there was now effectively no limit anymore on the amount of paper money that banks could create.

From this moment on, all money was created as credit. Money ceased to be supported by an asset. When you take out a loan, money is created and lent to you. Banks expect this freshly minted money to be returned to them with interest. Sure, banks need to keep adequate reserves. But these reserves basically consist of the same credit-based money. And reserves are much lower than the loans they make.

This led to an explosion in the money supply. The Federal Reserve stopped reporting M3 in 2006. But the ECB currently reports a yearly increase in the supply of the euro of about 5%.

This leads to a yearly increase in prices. The price increase is somewhat lower than the increase in the money supply. This is because of increased productivity. Society gets better at producing stuff cheaper all the time. So, in absence of money creation you would expect prices to drop every year. That they don’t is the effect of money creation.

What remains is an inflation rate in the 2% range.

Banks have discovered that they can siphon off all the productivity increase + 2% every year, without people complaining too much. They accomplish this currently by increasing the money supply by 5% per year, getting this money returned to them at an interest.

Apart from this insidious tax on society, banks take society hostage every couple of years. In case of a financial crisis, banks need bailouts or the system will collapse.

Apart from these problems, banks and governments are now striving to do away with cash. This would mean that no two free men would be able to exchange money without intermediation by a bank. If you believe that to transact with others is a fundamental right, this should scare you.

The absence of sound money was at the root of the problem. We were force-fed paper money because there were no good alternatives. Gold and silver remain difficult to use.

When a private currency backed by precious metals (the Liberty Dollar) was launched, the initiative was shut down because it undermined the U.S. currency system. Apparently, a currency alternative could only thrive if "nobody" launched it and there was no central point of failure.

What was needed was a peer-to-peer electronic cash system. This was what Satoshi Nakamoto described in 2008. It was a response to all the problems described above. That is why he labeled the genesis block with the text: "03/Jan/2009 Chancellor on brink of second bailout for banks." Bitcoin was meant to be an alternative to our current financial system.

So, if you find yourself religiously checking some cryptocurrency’s price, or bogged down in discussions about the “one true bitcoin”, or constantly asking what currency to buy, please at least remember that we have bigger fish to fry.

We are here to fix the financial system.

Given how early we are in the Rogers Adoption Curve for crypto, I would like to take a moment so we can imagine what this technological revolution, which I consider the next huge step for humankind, could bring. I will emphasize some socioeconomic implications of decentralization, but I'm mostly interested in listening to, and debating, your inputs.

Blockchain and Crypto Currency are here to change the world forever.

The implications of decentralization

As you may know one of the core proposals of blockchain is decentralization, and with it we can optimize so many processes that this alone could be the revolution we are talking about. By eliminating intermediaries, we can save on the cost they add to the supply chain ensuring those that create the value, keep it. Or we can simply save on fees.

To quote the man himself:

Whereas most technologies tend to automate workers on the periphery doing menial tasks, blockchains automate away the center. Instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets the taxi drivers work with the customer directly. – Vitalik Buterin.

To put it simply, imagine that you replace Binance (a centralized company) with a robot. A robot that you have programmed so well, whose code you publicly audit, and that is so safe you can trust it with billions of dollars in liquidity pools, so it proceeds to host and operate the trading platform by itself. In case you didn't know, this is already a reality! Many people here trade on those platforms on a daily basis.

But this goes beyond replacing centralized exchanges with automated market makers, Airbnb with a blockchain DApp that connects landlords and customers, or even banks with complex smart contracts that allow you to borrow, save, tokenize physical assets, and so on. This goes way beyond.

Here is where I start to fantasize about the future. Think about replacing capital itself; think about getting rid of corporations. Let's dream of a world with mass adoption of DAOs.

With DeFi, we may no longer need a company like Nestlé…

And especially not their investors. Of course, you will still need the people administrating, planning, monitoring, generating new ideas that adapt to their context, and creating innovative solutions for a complex world only humans can comprehend. But the figure of shareholders and CEOs who steal all the value that workers create and leave them with a tiny fraction of it can disappear. This could be the basis of a once-in-a-century transformation.

Just as an example: Nestlé's coffee growers in Colombia keep less than 10% of the final sale price, and barely make a living on it, so they are actually abandoning the rural areas.

With Blockchain, DeFi and Smart Contracts, people like you and me can collectively fund such an operation, and then agree upon specific terms like wages by direct democracy, voting with our crypto holdings. Then we would proceed to allocate funds, hire “developers” which would ultimately be regular office jobs that keep the organization functioning. Once in operation we would frequently vote on decisions and results, which would ultimately keep the highest level of accountability for people working in the organization. This is already happening by the way, this is how some blockchain projects work today. We just haven’t applied it to industrial and physical supply chains yet.

Let’s go back to our project to replace Nestle. Imagine that an organization’s main goal is not to maximize profits for shareholders and bonuses for CEOs anymore. Instead, it’s the interest of regular people and the company’s collaborators that drive its actions.

Most likely, you and I will want to consolidate an efficient and effective supply chain that is sustainable and keeps the dignity and wellbeing of its collaborators as a guiding principle. We are no longer at their mercy on issues like climate change; we can now take immediate action against it, or stop endangering and hoarding water supplies in classic Nestlé fashion.

Also, we are making profits, so we are redistributing capital and improving our quality of life, which will be most noticeable in the most vulnerable communities, usually those that extract, harvest, or mine raw materials.

This is what could happen with the blockchain decentralization of business. And you could apply it to pretty much anything, though initially it could be for businesses with low labor and capital intensity.

I'll give you another example. I work for a solar power multinational company. If you don't know it, solar energy is essentially a financial product; most people working in these companies don't care about the world, it's simply that solar is a very safe and lucrative hustle, and all investors care about is having a nice return on investment (ROI). As of now, my company works exclusively for large-scale corporate clients or the state itself, given that's where the nice ROIs are, since they give you the projects that allow you to place large amounts of capital at once. This means, as of today, we blatantly ignore the regular people who seek our help and funding to power their farms and/or houses with solar energy. They're not that profitable, my boss tells me. This is shitty, and I've thought of quitting several times.

But back to the point. Now, imagine once again, we get rid of the institutional investors. Now you and me create Reddit Solar Co, a DAO. Our only purpose is to facilitate access to electricity to those without it, and to advance in the urban implementation of renewable energy. We help the world, make dividends that are automatically distributed by the DAO, and also our own Crypto is rising in value.

And this is not the best.

Let’s not forget of synergies.

So, we just created a DAO that manufactures and distributes food globally, right? Or maybe Reddit Solar Co. As an organization born on the blockchain, we won't have to adapt to the state-of-the-art innovations of the crypto world like an old steam locomotive attempting to bolt a warp drive on top of itself. We were born in space.

From the beginning, our Ethereum based DAO could adopt VeChain’s solution for supply chains, Cardano will help us to give an integral solution to the unbanked communities that provide our raw material, they now have IDs, access to DeFi and education. The land deeds and legal documents that relate to our enterprise are certified by LTO Network, we move money internationally with XRP or Stellar, and don’t worry, we use Polkadot to ensure proper blockchain interoperability.

Too complex for you? Don't worry, you don't even have to know or care about this; leave that to others. Maybe you're into finance. Maybe sales is your thing and there's a little Michael Scott in you. Or you're into social work and want to supervise our community engagement at the start of the supply chain. Just go do your thing! You don't necessarily have to be involved in all of this.

All you know is you do your job and receive your crypto salary.

Just as computers and the internet changed the world forever, and not only had economic implications but also changed our culture, routines, work lives and ways to interact with each other, crypto will. We are just so early; that all we can do for now is dream.

You’re having too much hope in humanity dude…

Sure, I may be making some optimistic assumptions about the motivations of humans. I may be saying that we will use this technology for good and that we care about each other, and that's one way to look at it. But we could also argue in favor of this from a skeptical perspective: even if you don't care about the collective wellbeing of your community, it's in your interest to live in a safer environment, right? Ergo, you want to reduce poverty. It's also in your interest to stop global warming so organized human life can continue to exist, or to make sure you and your children will have water and food in 50 years. That's why you will want to use technology for good even if you only care about yourself. Also, let's not forget the powerful incentive of profits. Crypto has the clear potential to achieve all of this.

Most of the current generation of crypto projects will be ready and operating within the next 3 years, so all we will need by then is the will to use this technology for good, and the vision to change the world.

This is just the beginning, we will be killing industries but giving birth to others we could have never imagined before.

Cons of Crypto:
A coin called “Chia” is gobbling up 1,125,000 TB storage per day. Just to farm this token that no one seems to use. This takes resource wastage to a whole new level.

Chia is a coin that works on a proof of time space consensus. I.e. to farm this coin, one must allot dedicated hard drives and allot the space (known as plots), and get rewarded for it. Sounds good on paper, and one could even be tempted to think they may put that spare 500 GB space left and earn some passive income on it.

Except, this one already requires industrial grade storage space, just to farm a token that has almost zero adoption anywhere.

As you can see from this coin’s explorer, the storage is growing by almost 1000 PiB per day, in the last few days.

https://www.chiaexplorer.com/charts/netspace

1 PiB = 1125.9 TB.

So a growth of 1000 PiB per day => almost 1125000 TB of storage per day is added onto this network, just to mine these coins. This equates to 1.1 million 1 TB drives added per day just to support farming on this network!
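
The unit conversion behind those numbers, as a quick Python check:

# 1 PiB = 2**50 bytes; 1 TB = 10**12 bytes
pib_in_tb = 2**50 / 10**12
print(pib_in_tb)           # ~1125.9 TB per PiB
print(1000 * pib_in_tb)    # ~1.126 million TB added per day at 1000 PiB/day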

Pros of Crypto:
– People in Hong Kong Use The Crypto and Blockchain To Fight Against Media Censorship
Reference

Data indicates that 76% of Bitcoin investors are still in profit

Bitcoin Pro Arguments:

  • Network effect and staying power
    BTC is the first virtual currency to solve the double-spending issue. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin undoubtedly has a ‘brand’. It has perhaps the most substantial name recognition of any existing crypto asset and is basically synonymous with ‘cryptocurrency’ to the lay public.
  • Despite near constant proclamations of its demise, Bitcoin has not died. One could argue that – as the progenitor of cryptocurrencies – its longevity and continued profitability is itself an investment thesis.
  • As the number of public addresses, daily active users (DAU), and large holders/long term holders continue to trend upwards, it becomes harder and harder to ‘put the genie back in the bottle’:
  • Bitcoin’s valuation is well described by the most fundamental factor intrinsic to its network: the number of addresses that hold BTC. Applying Metcalfe’s law, the total value of Bitcoin’s network is well explained, with an R squared of 93.8%, simply by the square of its user base, n.
  • Store of value to hedge inflation
  • Over its lifetime, narratives of Bitcoin’s value have gone through several shifts, from the original cypherpunk vision in the white paper of p2p ‘e-cash’ to today’s ‘digital gold’ narrative.
  • One theme underlying both of these points, however, is a reaction to or distrust in the current financial system. This was true during the financial crisis of 2008 (see the genesis block message) and is still relevant today with unprecedented levels of monetary and fiscal stimulus being pursued by governments worldwide. Government deficits and central bank money printing may lead to inflation and thus drive investors towards assets like gold or Bitcoin to preserve their wealth.
  • This notion that BTC is a store of value to hedge inflation has certainly caught on in the last few years – not just from institutional or hedge fund investors, but from companies like MicroStrategy, Square and Tesla adding BTC to their balance sheets.
  • Like gold, BTC is scarce – only 21M will ever exist. It is estimated that 3M-3.7M BTC have been lost forever/will never enter circulating supply again.. One estimate is that 14.5M BTC are essentially illiquid.
  • To take one example, Grayscale’s BTC trust – which has no redemption process and thus effectively takes BTC out of circulation – alone holds over 600k BTC.
  • Like gold, BTC is also divisible, interchangeable and durable. Unlike gold, however, BTC is a digital asset and is thus easier to purchase, move and store.
  • If the store of value narrative endures, Bitcoin may have significant upside in supplanting a share of gold’s use case (estimated to be a $10T asset class).
  • Development
  • One of the common counterarguments for Bitcoin is that it is a ‘dinosaur’ with little technological improvement or development (as compared to its more innovative successors).
  • Schisms in the dev community notwithstanding, Bitcoin remains an open-source project with global development communities and activity
  • Developments of note include:
  • Segregated Witness (SegWit): a protocol upgrade proposal that went live in August 2017. This protocol upgrade effectively increased the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second (TPS)
  • Lightning Network: is a second-layer micropayment solution for scalability
  • Taproot: an anticipated upgrade to increase privacy and improve upon other factors related to complex transactions
  • While other blockchains boast enterprise development, some companies are indeed building on Bitcoin. For example, Microsoft recently launched a Decentralized Identifier (DID) network (ION) on the Bitcoin mainnet
  • Ideological foundation for a potentially new financial system, without the old, decrepit, and corrupt banks and middle men.
  • The environmental argument is almost pointless, as Bitcoin is the most efficient way of transporting millions of dollars around the world in mere seconds. And I mean efficient in all ways: there is no other single asset in the world capable of transporting this amount of capital wealth with such a low environmental impact or financial cost. If not, try moving 4 million dollars of gold. Also, as BTC increases in value, this gets more and more efficient.
  • Innovation of the technology and the first mover advantage in capturing this new market’s value/future value. Btc will always be at the top as mainstream adoption continues relating Crypto=Bitcoin.
  • Ability to be bankless, with proven liquidity (thanks to Tesla) and with the best performing asset creation-to-date.
  • Inability of third parties to do anything about your BTC holdings without the seed phrase. Governments can hardly tax it if, as Michael Saylor put it: "I had a boating accident and forgot my seed phrase; I don't have access to my crypto anymore, so I can't be taxed." In a way, nobody but yourself can prove that you still have access to those funds, so can they truly be taxable?
  • The S2F model and updated S2F XA model. So far they have been scarily precise. Otherwise, Metcalfe’s law assures anyone that bitcoin may never go to 0, as the network is already strong enough to provide a certain degree of value.

Bitcoin CONS Arguments:

  • Bitcoin has been around way too long, and to the uneducated it is the face of the crypto world.
  • Bitcoin has no smart contracts.
  • Bitcoin is slow.
  • Bitcoin fees are expensive.
  • People see it as an investment, not a currency they can use and spend. In the end this is not defined as it’s supposed to be used, but only as store of value. It’s at the state of gold, not of a coin.
  • Bitcoin has become outdated, the only thing it’s useful for is investing, day to day transactions are useless.
  • Bitcoin's largest advantage, and in fact its greatest disadvantage, is that it is the oldest cryptocurrency. Since its launch, technology has evolved so much to become more energy- and time-efficient.
  • Bitcoin is like the grandpa of crypto and we should look at it as such. Admire it for its wisdom because it has taught us so much, but also acknowledge that each of its children are trying to make their own marks on the world.
  • Its huge environmental impact due to its proof-of-work concept. BTC has a carbon footprint like Singapore's, uses as much electrical energy as the Netherlands, and produces as much electronic waste as Luxembourg. This is a huge problem and needs to be acknowledged more widely.
  • It's slow: with an average transaction time of around 10 minutes, we are pretty far from instant transactions. This might not be a problem in all cases, but it is one when you would like to use it like a currency, as originally planned.
  • High transaction costs – not ETH-high, but too high
  • Bitcoin takes a lot of energy to mine and use. As of May 2021, a single Bitcoin transaction takes as much energy as 760,201 VISA credit card payments (source). To keep this in context, the world banking system uses about two times as much energy as the Bitcoin network (source)
  • Bitcoin is difficult to mine. GPUs and CPUs don’t have enough computing power to compete with other miners, meaning so-called Application-Specific Integrated Chips (ASICs) are required. These are expensive – generally in the range of $1000 to $6000, depending on how new the model is (source). This restricts Bitcoin’s mining pool to people and groups who have enough wealth to invest in ASICs, which threatens the goal of keeping cryptocurrency decentralized.
  • Bitcoin transactions can take a long time to be confirmed. The average time for a transaction to be confirmed once is 10 minutes (source), but for a payment to be absolutely final, it needs to be included in multiple blocks to ensure consensus in the mining pool. This takes even longer, sometimes up to one hour (source, for 6 confirmations).
  • Bitcoin transactions require expensive mining fees. At the moment, the average fee for a single transaction is $14.35, making Bitcoin unsuitable for day to day use (source).
  • Bitcoin lacks many features available in other coins, including smart contracts (programs run on and enforced by the blockchain, see here), anonymity (source), and CPU mining (allowing anyone with a CPU to mine, thus making the network more democratic and less susceptible to being taken over by large groups).

Crypto is definitely a good way to make money. However, you might end up finding the tech interesting. I know that I sure did, and having a sound understanding of your investment will make a big difference in your ability to hodl. It doesn’t have to be much, just a few YouTube videos.

Strategies when it comes to cryptocurrencies
The HODL’er: you buy and basically you never sell. It’s kind of the holy grail of strategies when it comes to crypto according to this sub. Buy and forget and check back 10 years later. You’re a millionaire, Harry! No stress and no maintenance. You can even buy more over time and continue stacking your fat holdings. Do this if you believe in crypto long term

The Goal Setter: set a goal and sell when you reach that goal. Maybe it’s 3x and I’m out. Or maybe it’s make enough for student loans and I’m out. Or maybe it’s $1MM and sell half. Can be anything. Stress depends on your goal.

The Active Trader: Buy high and sell low

The Swing Trader: Some people are good at trading – they usually wait for those days where the whole market bleeds 20-30% in a day then they buy and wait for the bounce and they sell. Rinse and repeat. But they also risk missing out on the rocket jumps. But they also minimize the risk of being in the market when there’s a crash. In the end they might be able to increase their total holdings but for most beginners they lose rather than win. High stress and high maintenance.

The Cycle Trader: you DCA in during the bear market when everything has lost 80-90% of its ATH (alternatively, a year before the Bitcoin halving). Then you slowly sell off everything approximately a year after crypto starts trending up and enters a bull market. So this method has worked well for many people – they don’t necessarily time the top right but they continue to increase their holdings over several cycles. This might be the smart move if you have discipline. The risk is that history no longer repeats itself. It has worked the past 2 cycles but it’s not guaranteed it’ll work again. Medium stress, low maintenance

The Arbitrager: usually they have algos do the trading for them. They minimize risk and just arbitrage the price differences between exchanges. They might not care about crypto and just want to make money. They miss out on the bull run but also miss out on the bear market. Low stress, medium maintenance.

The Moon Chaser: 1000x or bust. Forget $10K eth or $100K btc, they want the next shiba or safe moon. They buy coins with market caps in the millions and hope for the pump to sell. This is like the lottery ticket buyers of crypto. High stress, high maintenance, smooth brain

The correct mentality for investing in the crypto market is thinking in YEARS not MONTHS.

Crypto: What to do in the bear market

HODL: don't sell at a loss if you believe in your coin long term.

Stake: staking is really important! I can't tell you enough; if we are in a bear market and you can stake for a few years, you can easily end up with 20-30% more coins than you have right now.

DCA: keep buying. The bear market is where you DCA; don't stop buying. Right now is where you can get coins cheap! Just don't stop DCAing because you are scared! Pick projects you believe in long term and keep buying at low prices!

Get rid of coins you don't believe in long term (shitcoins). Many won't survive the bear market.

Research coins for the next bull run!

Crypto Currency Market Cap Visualized during the Pandemic

Top 100 Cryptocurrencies by Market Cap

Data Source from https://coinmarketcap.com/

Sources:

1- Reddit

2- Reddit

3- https://research.binance.com/en/projects/bitcoin

4- NYDIG Power of Bitcoins Network Effect

5- The original Cypherphunk vision

6- Unlike Gold, BTC is a digital asset that is easy to move around

7-  https://coinmarketcap.com/historical/

Data Sciences – Top 400 Open Datasets – Data Visualization – Data Analytics – Big Data – Data Lakes

Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains.

In this blog, we are going to provide popular open source and public data sets, data visualization, data analytics and data lakes.

Researchers from IBM, MIT and Harvard Announced The Release Of DARPA “Common Sense AI” Dataset Along With Two Machine Learning Models At ICML 2021

Building machines that can make decisions based on common sense is no easy feat. A machine must be able to do more than merely find patterns in data; it also needs a way of interpreting the intentions and beliefs behind people’s choices.

At the 2021 International Conference on Machine Learning (ICML), Researchers from IBM, MIT, and Harvard University have come together to release a DARPA “Common Sense AI” dataset for benchmarking AI intuition. They are also releasing two machine learning models that represent different approaches to the problem that relies on testing techniques psychologists use to study infants’ behavior to accelerate the development of AI exhibiting common sense. 

Source – Summary – Paper – IBM Blog

100 million protein structures Dataset by DeepMind

DeepMind creates a 'transformative' map of human proteins drawn by AI. By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be "transformative for our understanding of how life works."

Here’s a good article about this topic

Google Dataset Search

Google Dataset Search

Malware traffic dataset

Comprises 1,914,081 records created from all malware-traffic-analysis.net PCAP files, from 2013 to 2021. The logs are generated using Suricata and Zeek.

Originator: https://twitter.com/ali_alwashali

Percent of “foreign-born” population in each US and EU state or country.

For the EU, "foreign-born" means being born outside of any of the EU countries. For the US, "foreign-born" means being born outside of any US state. 🇺🇸🇪🇺

Author: Here

Examples of “foreign-born” in this context:

  • Person born in Spain and living in France is NOT “foreign-born”

  • Person born in Turkey and living in France is “foreign-born”

  • Person born in Florida and living in Texas is NOT “foreign-born”

  • Person born in Mexico and living in Texas is “foreign-born”

  • Person born in Florida and living in France is “foreign-born”

  • Person born in France and living in Florida is “foreign-born”

Note: Poland, Ireland, Germany, Greece, Cyprus, Malta, and Portugal use Eurostat 2010 migration data, and Croatia has no data at all.

https://www.statista.com/statistics/312701/percentage-of-population-foreign-born-in-the-us-by-state/

https://ec.europa.eu/eurostat/statistics-explained/pdfscache/1275.pdf

https://ec.europa.eu/eurostat/documents/3433488/5579176/KS-SF-11-034-EN.PDF/63cebff3-f7ac-4ca6-ab33-4e8792c5f30c

Tools: MS Office

Source: Here

35% of “entry-level” jobs on LinkedIn require 3+ years of experience

Source: LinkedIn data  (see original post)

Tool: Photoshop from my colleague

Latest complete Netflix movie dataset

Created from 4 APIs. 11K+ rows and 30+ attributes of Netflix (Ratings, earnings, actors, language, availability, movie trailers, and many more)

Dataset on Kaggle.

Explore this dataset using FlixGem.com (this dataset is powering this webapp)

Dataset on Google Sheets.

Common Crawl

A corpus of web crawl data composed of over 50 billion web pages. The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata and text extractions.

AWS CLI Access (No AWS account required)

aws s3 ls s3://commoncrawl/ --no-sign-request

s3://commoncrawl/crawl-data/CC-MAIN-2021-17 – April 2021

 Dataset on protein prices

Data on Primary Commodity Prices are updated monthly based on the IMF’s Primary Commodity Price System.

Excel Database

 CPOST dataset on suicide attacks over four decades

The University of Chicago Project on Security and Threats presents the updated and expanded Database on Suicide Attacks (DSAT), which now links to Uppsala Conflict Data Program data on armed conflicts and includes a new dataset measuring the alliance and rivalry relationships among militant groups with connections to suicide attack groups. Access it here.

Credit Card Dataset – Survey of Consumer Finances (SCF) Combined Extract Data 1989-2019

 You can do a lot of aggregated analysis in a pretty straightforward way there.

Drone imagery with annotations for small object detection and tracking dataset

11 TB dataset of drone imagery with annotations for small object detection and tracking

Download and more information are available here

Dataset License: CDLA-Sharing-1.0

Helper scripts for accessing the dataset: DATASET.md

Dataset Exploration: Colab

NOAA High-Resolution Rapid Refresh (HRRR) Model

The HRRR is a NOAA real-time 3-km resolution, hourly updated, cloud-resolving, convection-allowing atmospheric model, initialized by 3km grids with 3km radar assimilation. Radar data is assimilated in the HRRR every 15 min over a 1-h period adding further detail to that provided by the hourly data assimilation from the 13km radar-enhanced Rapid Refresh.

Registry of Open Data on AWS

This registry exists to help people discover and share datasets that are available via AWS resources. Learn more about sharing data on AWS.

See all usage examples for datasets listed in this registry.

See datasets from Digital Earth Africa, Facebook Data for Good, NASA Space Act Agreement, NIH STRIDES, NOAA Big Data Program, Space Telescope Science Institute, and Amazon Sustainability Data Initiative.

Textbook Question Answering (TQA)

1,076 textbook lessons, 26,260 questions, 6229 images

Documentation: https://allenai.org/data/tqa

Download

Harmonized Cancer Datasets: Genomic Data Commons Data Portal

The GDC Data Portal is a robust data-driven platform that allows cancer researchers and bioinformaticians to search and download cancer data for analysis.


The Cancer Genome Atlas

The Cancer Genome Atlas (TCGA), a collaboration between the National Cancer Institute (NCI) and National Human Genome Research Institute (NHGRI), aims to generate comprehensive, multi-dimensional maps of the key genomic changes in major types and subtypes of cancer.

AWS CLI Access (No AWS account required)

aws s3 ls s3://tcga-2-open/ --no-sign-request

Therapeutically Applicable Research to Generate Effective Treatments (TARGET)

The Therapeutically Applicable Research to Generate Effective Treatments (TARGET) program applies a comprehensive genomic approach to determine molecular changes that drive childhood cancers. The goal of the program is to use data to guide the development of effective, less toxic therapies. TARGET is organized into a collaborative network of disease-specific project teams.  TARGET projects provide comprehensive molecular characterization to determine the genetic changes that drive the initiation and progression of childhood cancers. The dataset contains open Clinical Supplement, Biospecimen Supplement, RNA-Seq Gene Expression Quantification, miRNA-Seq Isoform Expression Quantification, miRNA-Seq miRNA Expression Quantification data from Genomic Data Commons (GDC), and open data from GDC Legacy Archive. Access it here.

Genome Aggregation Database (gnomAD)

The Genome Aggregation Database (gnomAD) is a resource developed by an international coalition of investigators that aggregates and harmonizes both exome and genome data from a wide range of large-scale human sequencing projects. The summary data provided here are released for the benefit of the wider scientific community without restriction on use. Downloads

SQuAD (Stanford Question Answering Dataset)

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. Access it here.
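
For reference, a minimal Python sketch for walking the SQuAD JSON layout (data → paragraphs → qas) might look like the following; the local file name is a placeholder for whichever train/dev split you download.

import json

# Placeholder file name for a locally downloaded SQuAD split
with open("dev-v2.0.json") as f:
    squad = json.load(f)

for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            # SQuAD 2.0 marks unanswerable questions with is_impossible
            if qa.get("is_impossible"):
                continue
            answers = [a["text"] for a in qa["answers"]]
            print(qa["question"], "->", answers[:1])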

PubMed Diabetes Dataset

The PubMed Diabetes dataset consists of 19,717 scientific publications from the PubMed database pertaining to diabetes, classified into one of three classes. The citation network consists of 44,338 links. Each publication in the dataset is described by a TF/IDF weighted word vector from a dictionary of 500 unique words. The README file in the dataset provides more details.

Download Link

Drug-Target Interaction Dataset

This dataset contains interactions between drugs and targets collected from DrugBank, KEGG Drug, DCDB, and Matador. It was originally collected by Perlman et al. It contains 315 drugs, 250 targets, 1,306 drug-target interactions, 5 types of drug-drug similarities, and 3 types of target-target similarities. Drug-drug similarities include Chemical-based, Ligand-based, Expression-based, Side-effect-based, and Annotation-based similarities. Target-target similarities include Sequence-based, Protein-protein interaction network-based, and Gene Ontology-based similarities. The original task on the dataset is to predict new interactions between drugs and targets based on different types of similarities in the network. Download link

Pharmacogenomics Datasets

PharmGKB data and knowledge are available as downloads. It is often critical to check with their curators at feedback@pharmgkb.org before embarking on a large project using these data, to be sure that the files and data they make available are being interpreted correctly. PharmGKB generally does NOT need to be a co-author on such analyses; they just want to make sure that there is a correct understanding of their data before lots of resources are spent.

Pancreatic Cancer Organoid Profiling

The dataset contains open RNA-Seq Gene Expression Quantification data and controlled WGS/WXS/RNA-Seq Aligned Reads, WXS Annotated Somatic Mutation, WXS Raw Somatic Mutation, and RNA-Seq Splice Junction Quantification. Documentation

AWS CLI Access (No AWS account required)

aws s3 ls s3://gdc-organoid-pancreatic-phs001611-2-open/ --no-sign-request

Africa Soil Information Service (AfSIS) Soil Chemistry

This dataset contains soil infrared spectral data and paired soil property reference measurements for georeferenced soil samples that were collected through the Africa Soil Information Service (AfSIS) project, which lasted from 2009 through 2018. Documentation

AWS CLI Access (No AWS account required)

aws s3 ls s3://afsis/ --no-sign-request

Dataset for Affective States in E-Environments

DAiSEE is the first multi-label video classification dataset, comprising 9,068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration “in the wild”. The dataset has four levels of labels – very low, low, high, and very high – for each of the affective states, which are crowd annotated and correlated with a gold standard annotation created by a team of expert psychologists. Download it here.

NatureServe Explorer Dataset

NatureServe Explorer provides conservation status, taxonomy, distribution, and life history information for more than 95,000 plants and animals in the United States and Canada, and more than 10,000 vegetation communities and ecological systems in the Western Hemisphere.

The data available through NatureServe Explorer represents data managed in the NatureServe Central Databases. These databases are dynamic, being continually enhanced and refined through the input of hundreds of natural heritage program scientists and other collaborators. NatureServe Explorer is updated from these central databases to reflect information from new field surveys, the latest taxonomic treatments and other scientific publications, and new conservation status assessments. Explore Data here

Flight Records in the US

Airline On-Time Performance and Causes of Flight Delays – On_Time Data.

This database contains scheduled and actual departure and arrival times and reasons for delay, as reported by certified U.S. air carriers that account for at least one percent of domestic scheduled passenger revenues. The data is collected by the Office of Airline Information, Bureau of Transportation Statistics (BTS).
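
As a rough sketch, a downloaded monthly extract can be explored with pandas; the file name and column names below (OP_UNIQUE_CARRIER, ARR_DELAY) are assumptions and may differ depending on the fields chosen when exporting from the BTS site.

import pandas as pd

# Hypothetical file name for a monthly extract downloaded from the BTS site;
# column names are assumptions and depend on the fields chosen at export time.
df = pd.read_csv("On_Time_Reporting_2021_01.csv", low_memory=False)

# Example: average arrival delay (in minutes) per carrier
print(df.groupby("OP_UNIQUE_CARRIER")["ARR_DELAY"].mean().sort_values())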

FlightAware.com has data but you need to pay for a full dataset.

The anyflights package supplies a set of functions to generate air travel data (and data packages!) similar to nycflights13. With a user-defined year and airport, the anyflights function will grab data on:

  • flights: all flights that departed a given airport in a given year and month
  • weather: hourly meteorological data for a given airport in a given year and month
  • airports: airport names, FAA codes, and locations
  • airlines: translation between two letter carrier (airline) codes and names
  • planes: construction information about each plane found in flights

Airline On-Time Statistics and Delay Causes

The U.S. Department of Transportation’s (DOT) Bureau of Transportation Statistics (BTS) tracks the on-time performance of domestic flights operated by large air carriers. Summary information on the number of on-time, delayed, canceled and diverted flights appears in DOT’s monthly Air Travel Consumer Report, published about 30 days after the month’s end, as well as in summary tables posted on this website. BTS began collecting details on the causes of flight delays in June 2003. Summary statistics and raw data are made available to the public at the time the Air Travel Consumer Report is released. Access it here

Worldwide flight data

Open flights: As of January 2017, the OpenFlights Airports Database contains over 10,000 airports, train stations and ferry terminals spanning the globe

Download: airports.dat (Airports only, high quality)

Download: airports-extended.dat (Airports, train stations and ferry terminals, including user contributions)

Bureau of Transportation:

Flightera.net seems to have a lot of good data for free. It has in-depth data on flights and doesn’t seem limited by date. I can’t speak on the validity of the data though.

flightradar24.com has lots of data, also historically, they might be willing to help you get it in a nice format.

2019 Crime statistics in the USA

Dataset with arrests in the US by race and by state. Download Excel here

Yahoo Answers DataSets

Yahoo Answers is shutting down in 2021. This is a fairly extensive Yahoo Answers dataset (300 MB gzipped) from 2015 with about 1.4M rows. It has the questions and all the answers, including the most insane, awful answers and the worst questions people put together. Download it here.

Another option here: According to the tracker, there are 77M done, 20M out(?), and 40M to go:

https://wiki.archiveteam.org/index.php/Yahoo!_Answers

History of America 1400-2021

Sources:

https://os-connect.com/pop/p2an.asp

https://ourworldindata.org/

http://www.ggdc.net/maddison/oriindex.htm

https://www.globalfirepower.com/countries-comparison.asp

Persian words phonetics dataset

This is a dataset of about 55K Persian words with their phonetics. Each word is in a line and separated from its phonetic by a tab. Download it here

Historical Air Quality Dataset

Air Quality Data Collected at Outdoor Monitors Across the US. This is a BigQuery Dataset. There are no files to download, but you can query it through Kernels using the BigQuery API. The AQS Data Mart is a database containing all of the information from AQS. It has every measured value the EPA has collected via the national ambient air monitoring program. It also includes the associated aggregate values calculated by EPA (8-hour, daily, annual, etc.). The AQS Data Mart is a copy of AQS made once per week and made accessible to the public through web-based applications. The intended users of the Data Mart are air quality data analysts in the regulatory, academic, and health research communities. It is intended for those who need to download large volumes of detailed technical data stored at EPA and does not provide any interactive analytical tools. It serves as the back-end database for several Agency interactive tools that could not fully function without it: AirData, AirCompare, The Remote Sensing Information Gateway, the Map Monitoring Sites KML page, etc.
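
A hedged example of querying it with the BigQuery Python client is shown below; the dataset, table, and column names (bigquery-public-data.epa_historical_air_quality.pm25_frm_daily_summary, state_name, arithmetic_mean, date_local) are assumptions about the public mirror, and you need your own Google Cloud project to run the query.

from google.cloud import bigquery

# Requires a Google Cloud project of your own; the public data itself is free to query.
client = bigquery.Client()

# Dataset/table/column names are assumptions about the public mirror of the AQS data
sql = """
    SELECT state_name, AVG(arithmetic_mean) AS avg_pm25
    FROM `bigquery-public-data.epa_historical_air_quality.pm25_frm_daily_summary`
    WHERE EXTRACT(YEAR FROM date_local) = 2019
    GROUP BY state_name
    ORDER BY avg_pm25 DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.state_name, round(row.avg_pm25, 2))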

Stack Exchange Dataset

https://data.stackexchange.com/

Awesome Public Datasets

This is a list of high-quality, topic-centric public data sources, collected and tidied from blogs, answers, and user responses. Most of the data sets listed below are free; however, some are not.

Agriculture Dataset

Biology Dataset

Climate and Weather Dataset

Complex Network Dataset

Computer Network Dataset

CyberSecurity Dataset

Data Challenges Dataset

Earth Science Dataset

Economics Dataset

Education Dataset

Energy Dataset

Entertainment Dataset

Finance Dataset

GIS Dataset

Government Dataset

Healthcare Dataset

Image Processing Dataset

Machine Learning Dataset

Museums Dataset

Natural Language Dataset

Neuroscience Dataset

Physics Dataset

Prostate Cancer Dataset

Psychology and Cognition Dataset

Public Domains Dataset

Search Engines Dataset

Social Networks Dataset

Social Sciences Dataset

Software Dataset

Sports Dataset

Time Series Dataset

Transportation Dataset

eSports Dataset

Complementary Collections

Categorized list of public datasets: Sindre Sorhus /awesome List

Platforms

  • Node.js – Async non-blocking event-driven JavaScript runtime built on Chrome’s V8 JavaScript engine.
  • Frontend Development
  • iOS – Mobile operating system for Apple phones and tablets.
  • Android – Mobile operating system developed by Google.
  • IoT & Hybrid Apps
  • Electron – Cross-platform native desktop apps using JavaScript/HTML/CSS.
  • Cordova – JavaScript API for hybrid apps.
  • React Native – JavaScript framework for writing natively rendering mobile apps for iOS and Android.
  • Xamarin – Mobile app development IDE, testing, and distribution.
  • Linux
    • Containers
    • eBPF – Virtual machine that allows you to write more efficient and powerful tracing and monitoring for Linux systems.
    • Arch-based Projects – Linux distributions and projects based on Arch Linux.
  • macOS – Operating system for Apple’s Mac computers.
  • watchOS – Operating system for the Apple Watch.
  • JVM
  • Salesforce
  • Amazon Web Services
  • Windows
  • IPFS – P2P hypermedia protocol.
  • Fuse – Mobile development tools.
  • Heroku – Cloud platform as a service.
  • Raspberry Pi – Credit card-sized computer aimed at teaching kids programming, but capable of a lot more.
  • Qt – Cross-platform GUI app framework.
  • WebExtensions – Cross-browser extension system.
  • RubyMotion – Write cross-platform native apps for iOS, Android, macOS, tvOS, and watchOS in Ruby.
  • Smart TV – Create apps for different TV platforms.
  • GNOME – Simple and distraction-free desktop environment for Linux.
  • KDE – A free software community dedicated to creating an open and user-friendly computing experience.
  • .NET
    • Core
    • Roslyn – Open-source compilers and code analysis APIs for C# and VB.NET languages.
  • Amazon Alexa – Virtual home assistant.
  • DigitalOcean – Cloud computing platform designed for developers.
  • Flutter – Google’s mobile SDK for building native iOS and Android apps from a single codebase written in Dart.
  • Home Assistant – Open source home automation that puts local control and privacy first.
  • IBM Cloud – Cloud platform for developers and companies.
  • Firebase – App development platform built on Google Cloud Platform.
  • Robot Operating System 2.0 – Set of software libraries and tools that help you build robot apps.
  • Adafruit IO – Visualize and store data from any device.
  • Cloudflare – CDN, DNS, DDoS protection, and security for your site.
  • Actions on Google – Developer platform for Google Assistant.
  • ESP – Low-cost microcontrollers with WiFi and broad IoT applications.
  • Deno – A secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
  • DOS – Operating system for x86-based personal computers that was popular during the 1980s and early 1990s.
  • Nix – Package manager for Linux and other Unix systems that makes package management reliable and reproducible.

Programming Languages

  • JavaScript
  • Swift – Apple’s compiled programming language that is secure, modern, programmer-friendly, and fast.
  • Python – General-purpose programming language designed for readability.
    • Asyncio – Asynchronous I/O in Python 3.
    • Scientific Audio – Scientific research in audio/music.
    • CircuitPython – A version of Python for microcontrollers.
    • Data Science – Data analysis and machine learning.
    • Typing – Optional static typing for Python.
    • MicroPython – A lean and efficient implementation of Python 3 for microcontrollers.
  • Rust
  • Haskell
  • PureScript
  • Go
  • Scala
    • Scala Native – Optimizing ahead-of-time compiler for Scala based on LLVM.
  • Ruby
  • Clojure
  • ClojureScript
  • Elixir
  • Elm
  • Erlang
  • Julia – High-level dynamic programming language designed to address the needs of high-performance numerical analysis and computational science.
  • Lua
  • C
  • C/C++ – General-purpose language with a bias toward system programming and embedded, resource-constrained software.
  • R – Functional programming language and environment for statistical computing and graphics.
  • D
  • Common Lisp – Powerful dynamic multiparadigm language that facilitates iterative and interactive development.
  • Perl
  • Groovy
  • Dart
  • Java – Popular secure object-oriented language designed for flexibility to “write once, run anywhere”.
  • Kotlin
  • OCaml
  • ColdFusion
  • Fortran
  • PHP – Server-side scripting language.
  • Pascal
  • AutoHotkey
  • AutoIt
  • Crystal
  • Frege – Haskell for the JVM.
  • CMake – Build, test, and package software.
  • ActionScript 3 – Object-oriented language targeting Adobe AIR.
  • Eta – Functional programming language for the JVM.
  • Idris – General purpose pure functional programming language with dependent types influenced by Haskell and ML.
  • Ada/SPARK – Modern programming language designed for large, long-lived apps where reliability and efficiency are essential.
  • Q# – Domain-specific programming language used for expressing quantum algorithms.
  • Imba – Programming language inspired by Ruby and Python and compiles to performant JavaScript.
  • Vala – Programming language designed to take full advantage of the GLib and GNOME ecosystems, while preserving the speed of C code.
  • Coq – Formal language and environment for programming and specification which facilitates interactive development of machine-checked proofs.
  • V – Simple, fast, safe, compiled language for developing maintainable software.

Front-End Development

Back-End Development

  • Flask – Python framework.
  • Docker
  • Vagrant – Automation virtual machine environment.
  • Pyramid – Python framework.
  • Play1 Framework
  • CakePHP – PHP framework.
  • Symfony – PHP framework.
  • Laravel – PHP framework.
    • Education
    • TALL Stack – Full-stack development solution featuring libraries built by the Laravel community.
  • Rails – Web app framework for Ruby.
    • Gems – Packages.
  • Phalcon – PHP framework.
  • Useful .htaccess Snippets
  • nginx – Web server.
  • Dropwizard – Java framework.
  • Kubernetes – Open-source platform that automates Linux container operations.
  • Lumen – PHP micro-framework.
  • Serverless Framework – Serverless computing and serverless architectures.
  • Apache Wicket – Java web app framework.
  • Vert.x – Toolkit for building reactive apps on the JVM.
  • Terraform – Tool for building, changing, and versioning infrastructure.
  • Vapor – Server-side development in Swift.
  • Dash – Python web app framework.
  • FastAPI – Python web app framework.
  • CDK – Open-source software development framework for defining cloud infrastructure in code.
  • IAM – User accounts, authentication and authorization.
  • Chalice – Python framework for serverless app development on AWS Lambda.

Computer Science

Big Data

  • Big Data
  • Public Datasets
  • Hadoop – Framework for distributed storage and processing of very large data sets.
  • Data Engineering
  • Streaming
  • Apache Spark – Unified engine for large-scale data processing.
  • Qlik – Business intelligence platform for data visualization, analytics, and reporting apps.
  • Splunk – Platform for searching, monitoring, and analyzing structured and unstructured machine-generated big data in real-time.

Theory

Books

Editors

Gaming

Development Environment

Entertainment

Databases

  • Database
  • MySQL
  • SQLAlchemy
  • InfluxDB
  • Neo4j
  • MongoDB – NoSQL database.
  • RethinkDB
  • TinkerPop – Graph computing framework.
  • PostgreSQL – Object-relational database.
  • CouchDB – Document-oriented NoSQL database.
  • HBase – Distributed, scalable, big data store.
  • NoSQL Guides – Help on using non-relational, distributed, open-source, and horizontally scalable databases.
  • Contexture – Abstracts queries/filters and results/aggregations from different backing data stores like ElasticSearch and MongoDB.
  • Database Tools – Everything that makes working with databases easier.
  • Grakn – Logical database to organize large and complex networks of data as one body of knowledge.

Media

Learn

Security

Content Management Systems

  • Umbraco
  • Refinery CMS – Ruby on Rails CMS.
  • Wagtail – Django CMS focused on flexibility and user experience.
  • Textpattern – Lightweight PHP-based CMS.
  • Drupal – Extensible PHP-based CMS.
  • Craft CMS – Content-first CMS.
  • Sitecore – .NET digital marketing platform that combines CMS with tools for managing multiple websites.
  • Silverstripe CMS – PHP MVC framework that serves as a classic or headless CMS.

Hardware

Business

Work

Networking

Decentralized Systems

  • Bitcoin – Bitcoin services and tools for software developers.
  • Ripple – Open source distributed settlement network.
  • Non-Financial Blockchain – Non-financial blockchain applications.
  • Mastodon – Open source decentralized microblogging network.
  • Ethereum – Distributed computing platform for smart contract development.
  • Blockchain AI – Blockchain projects for artificial intelligence and machine learning.
  • EOSIO – A decentralized operating system supporting industrial-scale apps.
  • Corda – Open source blockchain platform designed for business.
  • Waves – Open source blockchain platform and development toolset for Web 3.0 apps and decentralized solutions.
  • Substrate – Framework for writing scalable, upgradeable blockchains in Rust.

Higher Education

  • Computational Neuroscience – A multidisciplinary science which uses computational approaches to study the nervous system.
  • Digital History – Computer-aided scientific investigation of history.
  • Scientific Writing – Distraction-free scientific writing with Markdown, reStructuredText and Jupyter notebooks.

Events

Testing

  • Testing – Software testing.
  • Visual Regression Testing – Ensures changes did not break the functionality or style.
  • Selenium – Open-source browser automation framework and ecosystem.
  • Appium – Test automation tool for apps.
  • TAP – Test Anything Protocol.
  • JMeter – Load testing and performance measurement tool.
  • k6 – Open-source, developer-centric performance monitoring and load testing solution.
  • Playwright – Node.js library to automate Chromium, Firefox and WebKit with a single API.
  • Quality Assurance Roadmap – How to start & build a career in software testing.

Miscellaneous

Related

US Department of Education CRDC Dataset

The US Department of Ed has a dataset called the CRDC that collects data from all the public schools in the US and has demographic, academic, financial and all sorts of other fun data points. They also have corollary datasets that use the same identifier, an expansion pack if you will. It comes out every 2-3 years. Access it here

Nasa Dataset: sequencing data from bacteria before and after being taken to space

NASA has some sequencing data from bacteria before and after being taken to space, to look at genetic differences caused by lack of gravity, radiation and others. Very fun if you want to try your hand at some bio data science. Access it here.

All Trump’s twitter insults from 2015 to 2021 in CSV.

Extracted from the NYT story: here

Data is plural

Data is Plural is a really good newsletter published by Jeremy Singer-Vine. The datasets are very random, but super interesting. Access it here.

Global terrorism database

 Huge list of terrorism incidents from inside the US and abroad. Each entry has date and location of the incident, motivations, whether people or property were lost, the size of the attack, type of attack, etc. Access it here

Terrorist Attacks Dataset: This dataset consists of 1293 terrorist attacks each assigned one of 6 labels indicating the type of the attack. Each attack is described by a 0/1-valued vector of attributes whose entries indicate the absence/presence of a feature. There are a total of 106 distinct features. The files in the dataset can be used to create two distinct graphs. The README file in the dataset provides more details. Download Link:

Terrorists: This dataset contains information about terrorists and their relationships. This dataset was designed for classification experiments aimed at classifying the relationships among terrorists. The dataset contains 851 relationships, each described by a 0/1-valued vector of attributes where each entry indicates the absence/presence of a feature. There are a total of 1224 distinct features. Each relationship can be assigned one or more labels out of a maximum of four labels making this dataset suitable for multi-label classification tasks. The README file provides more details. Download Link

The dolphin social network

This network dataset is in the category of Social Networks. A social network of bottlenose dolphins. The dataset contains a list of all of links, where a link represents frequent associations between dolphins. Access it here

Dataset of 200,000 jokes

There are about 208,000 jokes in this database, scraped from three sources.

Access it here:

The Million Song Dataset

The Million Song Dataset is a freely-available collection of audio features and metadata for a million contemporary popular music tracks.

Its purposes are:

  • To encourage research on algorithms that scale to commercial sizes
  • To provide a reference dataset for evaluating research
  • As a shortcut alternative to creating a large dataset with APIs (e.g. The Echo Nest’s)
  • To help new researchers get started in the MIR field

Cornell University’s eBird dataset

Decades of observations of birds all around the world, truly an impressive way to leverage citizen science. Access it here.

UFO Report Dataset

NUFORC geolocated and time-standardized UFO reports covering close to a century of data; 80,000-plus reports. Access it here

CDC’s Trend Drug Data

The CDC has a public database called NAMCS/NHAMCS that allows you to trend drug data. It has a lot of other data points so it can be used for a variety of other reasons. Access it here.

Health and Retirement study: Public Survey data

A listing of publicly available biennial, off-year, and cross-year data products.

Example: COVID-19 Data

Year – Product
2020 – 2020 HRS COVID-19 Project

RAND HRS Data

HRS data products produced by the RAND Center for the Study of Aging.

Gateway Harmonized Data

HRS data products produced by the USC Program on Global Aging, Health, and Policy.

Contributed and Replication Data

Data products (unsupported by the HRS) provided by researchers sharing their work.

Restricted/Sensitive Data

Cognition Data

A summary of HRS cognition data, including the new Harmonized Cognition Assessment Protocol (HCAP).

Biomarker and Health Data

Sensitive health data files are available from the public data portal after a supplemental agreement is signed.

Restricted Data

HRS restricted data files require a detailed application process, and are available only through remote virtual desktop or encrypted physical media.

Administrative Linkages

Links HRS data with Medicare and Social Security.

Genetic Data

Genetic data products derived from 20,000 genotyped HRS respondents.

The Quick Draw Dataset

The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. Access it here.
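
As a rough sketch, the per-category simplified drawing files (newline-delimited JSON) can be read like this; the file name is hypothetical and the field names are assumptions based on the published format.

import json

# Hypothetical local copy of one per-category simplified drawings file (ndjson)
with open("full_simplified_cat.ndjson") as f:
    for line in f:
        record = json.loads(line)
        # "drawing" is a list of strokes; each stroke holds a list of x and y coordinates
        print(record["word"], record["countrycode"], len(record["drawing"]), "strokes")
        break  # just peek at the first drawing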

Air Quality Dataset

The AirNow API replaces the previous AirNow Gateway web services. It includes file outputs and RSS data feeds. AirNow Gateway users can use their existing login information to access the new AirNow API web pages and web services. Access to the AirNow API is generally available to the public, and new accounts can be acquired via the Log In page

UK Water Industry Chemical Investigations dataset

Search and extract the measurements from 600 Wastewater Treatment Sites owned and operated by UK Water Companies and part of the Chemical Investigations Programme (CIP2).

M3 and M4 Dataset Time Series Data

The 3003 time series of the M3-Competition.

The M4 competition, a continuation of the Makridakis Competitions for forecasting, was conducted in 2018. This competition includes the prediction of both Point Forecasts and Prediction Intervals.

Protein Data Bank (PDB)

Used by Google’s deep-learning program for determining the 3D shapes of proteins, which scientists say stands to transform biology. Access it here.

Dataset of Games

In computer science, Artificial Intelligence (AI) is intelligence demonstrated by machines. By definition, AI research is the study of “intelligent agents”: any device that perceives its environment and takes actions to achieve its goals (Russell et al., 2016).

Likewise, Data Mining (DM) is the process of discovering patterns in data sets involving methods of machine learning, statistics, and database systems; DM focuses on extracting information from datasets (Han, 2011).

This repository serves as a guide for anyone who wants to work with Artificial Intelligence or Data Mining applied in digital games! Here you will find a series of datasets, tools and materials available to build your application or dataset. Access it here.

DonorsChoose.org Application Screening DataSet

Help Predict whether teachers’ project proposals are accepted

Dataset of all the squirrels in Central Park

The Squirrel Census is a multimedia science, design, and storytelling project focusing on the Eastern gray (Sciurus carolinensis). They count squirrels and present their findings to the public.

Google BigQuery Public Datasets

BigQuery public datasets are made available without any restrictions to all Google Cloud users. Google pays for the storage of these datasets. You can use them to learn how to work with BigQuery or even build your application on top of them, exactly as we’re going to do.
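
A minimal exploration sketch with the BigQuery Python client, assuming you have a Google Cloud project of your own, is shown below; the `samples` dataset is used only as an example.

from google.cloud import bigquery

# Point the client at your own Google Cloud project; the shared datasets live in
# the bigquery-public-data project and can be browsed without copying anything.
client = bigquery.Client()

for dataset in list(client.list_datasets(project="bigquery-public-data"))[:10]:
    print(dataset.dataset_id)

# Drill into one dataset's tables ("samples" is just an example)
for table in client.list_tables("bigquery-public-data.samples"):
    print(table.table_id)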

IMDb Dataset

IMDb dataset importer – loads into a Marten DB document store. It imports the public datasets into a database, and provides repositories for querying. The total imported size is about 40 million rows, and 14 gigabytes on disk!
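
Independent of that importer, the raw public IMDb files are plain gzipped TSVs, so a quick look is possible with pandas; the URL and column names below are assumptions based on IMDb's public data page.

import pandas as pd

# URL and column names are assumptions based on IMDb's public data page
url = "https://datasets.imdbws.com/title.basics.tsv.gz"
titles = pd.read_csv(url, sep="\t", na_values="\\N", low_memory=False)

print(titles[["primaryTitle", "startYear", "genres"]].head())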

PHOnA: A Public Dataset of Measured Headphone Transfer Functions

A dataset of measured headphone transfer functions (HpTFs), the Princeton Headphone Open Archive (PHOnA), is presented. Extensive studies of HpTFs have been conducted for the past twenty years, each requiring a separate set of measurements, but this data has not yet been publicly shared. PHOnA aggregates HpTFs from different laboratories, including measurements for multiple different headphones, subjects, and repositionings of headphones for each subject. The dataset uses the spatially oriented format for acoustics (SOFA), and SOFA conventions are proposed for efficiently storing HpTFs. PHOnA is intended to provide a foundation for machine learning techniques applied to HpTF equalization. This shared data will allow optimization of equalization algorithms to provide more universal solutions to perceptually transparent headphone reproduction. Access it here.

Sports Data Set

Provide both basic and sabermetric statistics and resources for sports fans everywhere. Access here

Kaggle DataSets

Explore, analyze, and share quality data here

Coronavirus Datasets

Spreadsheets and Datasets:

Natural History Museum in London

The Natural History Museum in London has 80 million items (and counting!) in its collections, from the tiniest specks of stardust to the largest animal that ever lived – the blue whale. 

The Digital Collections Programme is a project to digitise these specimens and give the global scientific community access to unrivalled historical, geographic and taxonomic specimen data gathered in the last 250 years. Mobilising this data can facilitate research into some of the most pressing scientific and societal challenges.

Digitising involves creating a digital record of a specimen which can consist of all types of information such as images, and geographical and historical information about where and when a specimen was collected. The possibilities for digitisation are quite literally limitless – as technology evolves, so do possible uses and analyses of the collections. We are currently exploring how machine learning and automation can help us capture information from specimen images and their labels.

With such a wide variety of specimens, digitising looks different for every single collection. How we digitise a fly specimen on a microscope slide is very different to how we might digitise a bat in a spirit jar! We develop new workflows in response to the type of specimens we are dealing with. Sometimes we have to get really creative, and have even published on workflows which have involved using pieces of LEGO to hold specimens in place while we are imaging them.

Mobilising this data and making it open access is at the heart of the project. All of the specimen data is released on our Data Portal, and we also feed the data into international databases such as GBIF.

TSA Throughput Dataset (alternate source)

The TSA is publishing more and more data via its Freedom of Information Act (FOIA) Reading Room. This project on GitHub, https://github.com/mikelor/tsathroughput, contains the source for extracting the information from the .PDF files and converting it to JSON and CSV files.

The /data folder contains the source .PDFs going back to 2018 while the /data/raw/tsa/throughput folder contains .json files.

Data Planet

The largest repository of standardized and structured statistical data

https://statisticaldatasets.data-planet.com/

Chess datasets

3.5 Million Chess Games

ML Dataset to practice methods of regression

Center for Machine Learning and Intelligent Systems

585 Data Sets

 

ManyTypes4Py: A benchmark Python Dataset for Machine Learning-Based Type Inference

  • The dataset is gathered on Sep. 17th 2020 from GitHub.
  • It has more than 5.2K Python repositories and 4.2M type annotations.
  • Use it to train ML-based type inference models for Python
  • Access it here

Quadrature magnetoresistance in overdoped cuprates

Measurements of the normal (i.e. non-superconducting) state magnetoresistance (change in resistance with magnetic field) in several single crystalline samples of copper-oxide high-temperature superconductors. The measurements were performed predominantly at the High Field Magnet Laboratory (HFML) in Nijmegen, the Netherlands, and the Pulsed Magnetic Field Facility (LNCMI-T) in Toulouse, France. Complete Zip Download

The UMA-SAR Dataset: Multimodal data collection from a ground vehicle during outdoor disaster response training exercises

Collection of multimodal raw data captured from a manned all-terrain vehicle in the course of two realistic outdoor search and rescue (SAR) exercises for actual emergency responders conducted in Málaga (Spain) in 2018 and 2019: the UMA-SAR dataset. Full Dataset.

Child Mortality from Malaria

Child mortality numbers caused by malaria by country

Number of deaths of infants, neonatal, and children up to 4 years old caused by malaria by country from 2000 to 2015. Originator: World Health Organization

https://datarepository.wolframcloud.com/resources/Child-Mortality-Numbers-by-Malaria-2015

Quora Question Pairs at Data.world

The dataset  will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair. Access it here.
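
A minimal pandas sketch is shown below; the file name reflects the commonly distributed release of this dataset, and the column names (question1, question2, is_duplicate) may differ slightly on data.world.

import pandas as pd

# Hypothetical local copy of the question-pairs file
pairs = pd.read_csv("quora_duplicate_questions.tsv", sep="\t")

duplicates = pairs[pairs["is_duplicate"] == 1]
print(len(pairs), "pairs,", len(duplicates), "labeled as duplicates")
print(duplicates[["question1", "question2"]].head())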

MIMIC Critical Care Database

MIMIC is an openly available dataset developed by the MIT Lab for Computational Physiology, comprising deidentified health data associated with ~60,000 intensive care unit admissions. It includes demographics, vital signs, laboratory tests, medications, and more. Access it here.

Data.Gov: The home of the U.S. Government’s open data

Here you will find data, tools, and resources to conduct research, develop web and mobile applications, design data visualizations, and more. Search over 280,000 datasets.

Tidy Tuesday Dataset

TidyTuesday is built around open datasets that are found in the “wild” or submitted as Issues on our GitHub.

US Census Bureau: QuickFacts Dataset

QuickFacts provides statistics for all states and counties, and for cities and towns with a population of 5,000 or more.

Classical Abstract Art Dataset

Art that does not attempt to represent an accurate depiction of a visual reality but instead uses shapes, colours, forms and gestural marks to achieve its effect

5,000+ classical abstract artworks by real artists, with annotations. You can download them in very high resolution; however, you would have to crawl them first with this scraper.

Interactive map of indigenous people around the world

Native-Land.ca is a website run by the nonprofit organization Native Land Digital. Access it here.

Data Visualization: A Wordcloud for each of the Six Largest Religions and their Religious Texts (Judaism, Christianity, and Islam; Hinduism, Buddhism, and Sikhism)

Highest altitude humans have been each year since 1961

DataOhio

Over 200+ public datasets, including COVID data. Access it here.

Ohio Data, Ohio Insights. The DataOhio catalog is a single source for the most critical and relevant datasets from state agencies and entities.

https://data.ohio.gov/wps/portal/gov/data/view/view-all

National Household Travel Survey (US)

Conducted by the Federal Highway Administration (FHWA), the NHTS is the authoritative source on the travel behavior of the American public. It is the only source of national data that allows one to analyze trends in personal and household travel. It includes daily non-commercial travel by all modes, including characteristics of the people traveling, their household, and their vehicles. Access it here.

National Travel Survey (UK)

Statistics and data about the National Travel Survey, based on a household survey to monitor trends in personal travel.

The survey collects information on how, why, when and where people travel as well as factors affecting travel (e.g. car availability and driving license holding).

National Travel Survey data tables UK

National Travel Survey (NTS)[Canada]

Monthly Railway Carloadings: Interactive Dashboard

ENTUR: NeTEx or GTFS datasets [Norway]

NeTEx is the official format for public transport data in Norway and is the most complete in terms of available data. GTFS is a downstream format with only a limited subset of the total data, but we generate datasets for it anyway since GTFS can be easier to use and has a wider distribution among international public transport solutions. GTFS sets come in “extended” and “basic” versions. Access here.

The Swedish National Forest Inventory

A subset of the field data collected on temporary NFI plots can be downloaded in Excel format from this web site. The file includes a Read_me sheet and a sheet with field data from temporary plots on forest land collected from 2007 to 2019. Note that plots located on boundaries (for example boundaries between forest stands, or different land use classes) are not included in the dataset. The dataset is primarily intended to be used as reference data and validation data in remote sensing applications. It cannot be used to derive estimates of totals or mean values for a geographic area of any size. Download the dataset here

Large data sets from finance and economics applicable in related fields studying the human condition

World Bank Data: Countries Data | Topics Data | Indicators Data | Catalog

US Federal Statistics

Boards of Governors of the Federal Reserve: Data Download Program

CIA: The world Factbook provides basic intelligence on the history, people, government, economy, energy, geography, environment, communications, transportation, military, terrorism, and transnational issues for 266 world entities.

Human Development Report: United Nations Development Programme – Public Data Explorer

Consumer Price Index: The Consumer Price Index (CPI) is a measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services. Indexes are available for the U.S. and various geographic areas. Average price data for select utility, automotive fuel, and food items are also available.

Gapminder.org: Unveiling the beauty of statistics for a fact based world view Watch everyday life in hundreds of homes on all income levels across the world, to counteract the media’s skewed selection of images of other places.

Our world in Data: International Trade

Research and data to make progress against the world’s largest problems: 3139 charts across 297 topics, All free: open access and open source.

International Historical Statistics (by Brian Mitchell)

 
International Historical Statistics is a compendium of national and international socio-economic data from 1750 to 2010. Data are available in both Excel and PDF tabular formats. IHS is structured in three broad geographical divisions (Africa / Asia / Oceania; The Americas; Europe) and ten themes: Population and vital statistics; Labour force; Agriculture; Industry; External trade; Transport and communications; Finance; Commodity prices; Education; and National accounts. Access here

World Input-Output Database

World Input-Output Tables and underlying data, covering 43 countries, and a model for the rest of the world for the period 2000-2014. Data for 56 sectors are classified according to the International Standard Industrial Classification revision 4 (ISIC Rev. 4).

  • Data: Real and PPP-adjusted GDP in US millions of dollars, national accounts (household consumption, investment, government consumption, exports and imports), exchange rates and population figures.
  • Geographical coverage: Countries around the world
  • Time span: from 1950-2011 (version 8.1)
  • Available at: Online

Correlates of War Bilateral Trade

COW seeks to facilitate the collection, dissemination, and use of accurate and reliable quantitative data in international relations. Key principles of the project include a commitment to standard scientific principles of replication, data reliability, documentation, review, and the transparency of data collection procedures

  • Data: Total national trade and bilateral trade flows between states. Total imports and exports of each country in current US millions of dollars and bilateral flows in current US millions of dollars
  • Geographical coverage: Single countries around the world
  • Time span: from 1870-2009
  • Available at: Online here
  • This data set is hosted by Katherine Barbieri, University of South Carolina, and Omar Keshk, Ohio State University.

World Bank Open Data – World Development Indicators

Free and open access to global development data. Access it here.

World Trade Organization – WTO

The WTO provides quantitative information in relation to economic and trade policy issues. Its databases and publications provide access to data on trade flows, tariffs, non-tariff measures (NTMs) and trade in value added.

  • Data: Many series on tariffs and trade flows
  • Geographical coverage: Countries around the world
  • Time span: Since 1948 for some series
  • Available at: Online here

SMOKA Science Archive

The Subaru-Mitaka-Okayama-Kiso Archive holds about 15 TB of astronomical data from facilities run by the National Astronomical Observatory of Japan. All data becomes publicly available after an embargo period of 12-24 months (to give the original observers time to publish their papers).

Graph Datasets

Multi-Domain Sentiment Dataset

The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from many product types (domains). Some domains (books and dvds) have hundreds of thousands of reviews. Others (musical instruments) have only a few hundred. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed. Access it here.

A Global Database of Society

Supported by Google Jigsaw, the GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, themes, sources, emotions, counts, quotes, images and events driving our global society every second of every day, creating a free open platform for computing on the entire world.

The Yahoo News Feed: Ratings and Classification Data

Dataset is 1.5 TB compressed, 13.5 TB uncompressed

Yahoo! Music User Ratings of Musical Artists, version 1.0 (423 MB)

This dataset represents a snapshot of the Yahoo! Music community’s preferences for various musical artists. The dataset contains over ten million ratings of musical artists given by Yahoo! Music users over the course of a one month period sometime prior to March 2004. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms. The dataset may serve as a testbed for matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 423 MB.
 

Yahoo! Movies User Ratings and Descriptive Content Information, v.1.0 (23 MB)

This dataset contains a small sample of the Yahoo! Movies community’s preferences for various movies, rated on a scale from A+ to F. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset also contains a large amount of descriptive information about many movies released prior to November 2003, including cast, crew, synopsis, genre, average ratings, awards, etc. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms, including hybrid content and collaborative filtering algorithms. The dataset may serve as a testbed for relational learning and data mining algorithms as well as matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 23 MB.
 

Yahoo News Video dataset, version 1.0 (645MB)

The dataset is a collection of 964 hours (22K videos) of news broadcast videos that appeared on Yahoo news website’s properties, e.g., World News, US News, Sports, Finance, and a mobile application during August 2017. The videos were either part of an article or displayed standalone in a news property. Many of the videos served in this platform lack important metadata, such as an exhaustive list of topics associated with the video. We label each of the videos in the dataset using a collection of 336 tags based on a news taxonomy designed by in-house editors. In the taxonomy, the closer the tag is to the root, the more generic (topically) it is.
etc…

Other Datasets

More than 1 TB

  • The 1000 Genomes project makes 260 TB of human genome data available
  • The Internet Archive is making an 80 TB web crawl available for research 
  • The TREC conference made the ClueWeb09 [3] dataset available a few years back. You’ll have to sign an agreement and pay a nontrivial fee (up to $610) to cover the sneakernet data transfer. The data is about 5 TB compressed.
  • ClueWeb12  is now available, as are the Freebase annotations, FACC1 
  • CNetS at Indiana University makes a 2.5 TB click dataset available 
  • ICWSM made a large corpus of blog posts available for their 2011 conference. You’ll have to register (an actual form, not an online form), but it’s free. It’s about 2.1 TB compressed. The dataset consists of over 386 million blog posts, news articles, classifieds, forum posts and social media content between January 13th and February 14th. It spans events such as the Tunisian revolution and the Egyptian protests (see http://en.wikipedia.org/wiki/January_2011 for a more detailed list of events spanning the dataset’s time period). Access it here
  • The Yahoo News Feed dataset is 1.5 TB compressed, 13.5 TB uncompressed
  • The Proteome Commons makes several large datasets available. The largest, the Personal Genome Project, is 1.1 TB in size. There are several others over 100 GB in size.

More than 1 GB

  • The Reference Energy Disaggregation Data Set  has data on home energy use; it’s about 500 GB compressed.
  • The Tiny Images dataset  has 227 GB of image data and 57 GB of metadata.
  • The ImageNet dataset  is pretty big.
  • The MOBIO dataset  is about 135 GB of video and audio data
  • The Yahoo! Webscope program makes several 1 GB+ datasets available to academic researchers, including an 83 GB data set of Flickr image features and the dataset used for the 2011 KDD Cup, from Yahoo! Music, which is a bit over 1 GB.
  • Freebase makes regular data dumps available. The largest is their Quad dump, which is about 3.6 GB compressed.
  • Wikipedia made a dataset containing information about edits available for a recent Kaggle competition [6]. The training dataset is about 2.0 GB uncompressed.
  • The Research and Innovative Technology Administration (RITA) has made available a dataset about the on-time performance of domestic flights operated by large carriers. The ASA compressed this dataset and makes it available for download.
  • The wiki-links data made available by Google is about 1.75 GB total.
  • Google Research released a large 24GB n-gram data set back in 2006 based on processing 10^12 words of text and published counts of all sequences up to 5 words in length.

Power and Energy Consumption Open Datasets

These data are intended to be used by researchers and other professionals working in power and energy related areas and requiring data for design, development, test, and validation purposes. These data should not be used for commercial purposes.

The Million Playlist Dataset (Spotify)

A dataset and open-ended challenge for music recommendation research (RecSys Challenge 2018). Sampled from the over 4 billion public playlists on Spotify, this dataset of 1 million playlists consists of over 2 million unique tracks by nearly 300,000 artists, and represents the largest public dataset of music playlists in the world. Access it here
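
For orientation, the dataset ships as JSON slices of 1,000 playlists each; the sketch below assumes one such slice has been downloaded locally, and the field names follow the documented schema.

import json

# Hypothetical local copy of one slice of the Million Playlist Dataset
with open("mpd.slice.0-999.json") as f:
    mpd_slice = json.load(f)

for playlist in mpd_slice["playlists"][:3]:
    tracks = playlist["tracks"]
    print(playlist["name"], "-", len(tracks), "tracks")
    for track in tracks[:2]:
        print("  ", track["artist_name"], "-", track["track_name"])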

Regression Analysis Cheat Sheet

Hotel Reviews Dataset from Yelp

20k+ Hotel Reviews from Yelp for 5 Star Hotels in Las Vegas.

This dataset can be used for the following applications and more:

Analyzing trends, Sentiment Analysis / Opinion Mining, Competitor Analysis. Access it here.

A truncated version with 500 reviews is also available on Kaggle here

Motorcycle Crash data

1- Texas: Perform specific queries and analysis using Texas traffic crash data.

2- BTS: Motorcycle Rider Safety Data

3- National Transportation Safety Board: US Transportation Fatalities in 2019

4- Fatal single vehicle motorcycle crashes

5- Motorcycle crash causes and outcomes : pilot study

6- Motorcycle Crash Causation Study: Final Report

Download a collection of news articles relating to natural disasters over an eight-month period. Access it here.

World Population Data by Country and Age Group

1- WorldoMeter: Countries in the world by population (2021)

2- Worldometer: Current World Population Live

Investment-Related Dataset with both Qualitative and Quantitative Variables

1- Numer.ai:  Anonymized and feature normalized financial data which is interesting for machine learning applications. Download here

2- Snowflake Data Marketplace: Snowflake Data Marketplace gives data scientists, business intelligence and analytics professionals, and everyone who desires data-driven decision-making, access to more than 375 live and ready-to-query data sets from more than 125 third-party data providers and data service providers

3- Quandl: The premier source for financial, economic and alternative datasets, serving investment professionals.

National Obesity Monitor

The National Health and Nutrition Examination Survey (NHANES) is conducted every two years by the National Center for Health Statistics and funded by the Centers for Disease Control and Prevention. The survey measures obesity rates among people ages 2 and older. Find the latest national data and trends over time, including by age group, sex, and race. Data are available through 2017-2018, with the exception of obesity rates for children by race, which are available through 2015-2016. Access here

State of Childhood Obesity

The World’s Nations by Fertility Rate 2021


Total number of deaths due to Covid19 vis-à-vis Population in million


Google searches for different emotions during each hour of the day and night


Where do the world’s CO2 emissions come from? This map shows emissions during 2019. Darker areas indicate areas with higher emissions


Global Linguistic Diversity


Where in the world are the densest forests? Darker areas represent higher density of trees.


Likes and Dislikes per movie genre


Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset

NCEI first developed the Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset in the early 1990s. Subsequent iterations include version 2 in 1997, version 3 in May 2011, and version 4 in October 2018.

Are there any places where the climate is recently getting colder?

Python Cheat Sheet

Python Beginners Cheat Sheet

Data Sciences Cheat Sheet

Data Sciences Cheat Sheet

Pandas Cheat Sheet

Pandas Cheat Sheet

Electric power consumption (kWh per capita)

The World’s Most Eco-Friendly Countries

Alternate Source from Wikipedia : List of countries by carbon dioxide emissions per capita


Worldwide CO2 Emission

Alcohol-Impaired Driving Deaths by State & County [US]

Alcohol Impaired Driving by State

Alcohol Impaired Driving by County

% change in life expectancy from 2020 to 2021 across the globe


This is how life expectancy is calculated.

How Many Years Till the World’s Reserves Run Out of Oil?


Data Source Here: Note that these values can change with time based on the discovery of new reserves, and changes in annual production.

Which energy source has the least disadvantages?

How many People Did Nuclear Energy Kill?

Here’s a paper on the wind fatalities

https://www.ipcc.ch/site/assets/uploads/2018/02/07_figure_7.7-813×1024.png

Human development index (HDI) by world subdivisions


The Human Development Index (HDI) is a composite statistical index of life expectancy, education (mean years of schooling completed and expected years of schooling upon entering the education system), and per capita income indicators, which is used to rank countries into four tiers of human development.

Data source: Subnational Human Development Index website

US Streaming Services Market Share, 2020 vs 2021


Number of tweets deleted by month

Number of tweets deleted by month in 2020

Tweet Deleter

Football/Soccer Leagues with the fairest distributions of money have seen the most growth in long-term global interest.


How Much Does Your Favorite Fast Food Brand Spend on Ads?

Sources:

https://www.statista.com/statistics/286541/mcdonald-s-advertising-spending-worldwide/

https://www.statista.com/statistics/306676/ad-spend-subway-usa/

https://www.statista.com/statistics/308930/dominos-pizza-advertising-spending-usa/

https://www.statista.com/statistics/306690/ad-spend-wednys-usa/

https://www.statista.com/statistics/306694/ad-spend-burger-king-usa/

https://www.statista.com/statistics/1072559/advertising-expense-chick-fil-a/

https://www.statista.com/statistics/275195/starbucks-advertising-spending-in-the-us

Historical population count of Western Europe


Results from survey on how to best reduce your personal carbon footprint


Data from IpsosMori

Where does the world’s non-renewable energy come from? 


The data comes from the Global Power Plant Database. The Global Power Plant Database is a comprehensive, open source database of power plants around the world. It centralizes power plant data to make it easier to navigate, compare and draw insights for one’s own analysis. The database covers approximately 30,000 power plants from 164 countries and includes thermal plants (e.g. coal, gas, oil, nuclear, biomass, waste, geothermal) and renewables (e.g. hydro, wind, solar). Each power plant is geolocated and entries contain information on plant capacity, generation, ownership, and fuel type. It will be continuously updated as data becomes available.

Recorded Music Industry Revenues from 1997 to 2020


Source: https://www.riaa.com/

US Trade Surpluses and Deficits by Country (2020)

Facebook Monthly Active Users

Facebook data is based on end-of-year figures from 2004 to 2020.


Source: SeeMetrics.com

Heat map of the past 50,000 earthquakes pulled from USGS sorted by magnitude


Source:  USGS website

Where do the world’s methane (CH4)emissions come from?

Darker areas indicate areas with higher emissions.

Where do the world’s methane (CH4)emissions come from? Darker areas indicate areas with higher emissions. [OC] from dataisbeautiful

Source: Data comes from EDGARv5.0 website and Crippa et al. (2019)

Earth Surface Albedo (1950 to 2020)

Data Source: ECMWF ERA5

Wealth of Forbes’ Top 100 Billionaires vs All Households in Africa

Sources:
Forbes’ 35th Annual World’s Billionaires List
Credit Suisse Global Wealth Report 2020
United Nations World Population Prospects

Forbes Billionaires list

United nations world population prospects

Credit Suisse Global Wealth Report 2020

20 years of Apple sales in a minute

Source: Apple’s quarterly and annual financial filings with the SEC over the last 20 years

Source: Wikipedia

Racial Diversity of Each State (Based on US Census 2019 Estimates)


Computation:

Suppose your state is 60% orc, 30% undead, and 10% tauren. The chance that two randomly selected residents are of the same race is as follows:

  • 36% chance ((60%)²) of two orcs

  • 9% chance ((30%)²) of two undead

  • 1% chance ((10%)²) of two tauren

For a total of 46%. The diversity index is 100% minus that, or 54%.
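For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the diversity-index calculation described above, using the same hypothetical shares (an illustration of the formula, not the census computation itself):

# Minimal sketch: diversity index = 1 minus the probability that two
# randomly selected residents share the same race (hypothetical shares).
shares = {"orc": 0.60, "undead": 0.30, "tauren": 0.10}

p_same = sum(s ** 2 for s in shares.values())   # 0.36 + 0.09 + 0.01 = 0.46
diversity_index = 1 - p_same                    # 0.54, i.e. 54%

print(f"P(same race) = {p_same:.0%}, diversity index = {diversity_index:.0%}")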

Race and Ethnicity in the US

A curated, daily feed of newly published datasets in machine learning

Machine Learning: CIFAR-10 Dataset


The CIFAR-10 dataset consists of 60000 32×32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

Machine Learning: ImageNet

The ImageNet dataset contains 14,197,122 annotated images organized according to the WordNet hierarchy. Since 2010 the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images.

Machine Learning: The MNIST Database of Handwritten Digits

The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. Access it here.
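As a quick way to get started, here is a minimal sketch that loads MNIST (and, analogously, CIFAR-10) through the Keras dataset helpers; it assumes TensorFlow is installed and downloads the data on first use:

# Minimal sketch: load MNIST and CIFAR-10 via Keras dataset helpers.
# The shapes printed below match the counts quoted in the text.
from tensorflow.keras.datasets import mnist, cifar10

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, x_test.shape)      # (60000, 28, 28) (10000, 28, 28)

(c_train, c_labels), (c_test, c_test_labels) = cifar10.load_data()
print(c_train.shape, c_test.shape)      # (50000, 32, 32, 3) (10000, 32, 32, 3)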

The Massively Multilingual Image Dataset (MMID)

MMID is a large-scale, massively multilingual dataset of images paired with the words they represent, collected at the University of Pennsylvania. The dataset is doubly parallel: for each language, words are stored parallel to images that represent the word, and parallel to the word's translation into English (and corresponding images). Documentation.

AWS CLI Access (No AWS account required)

aws s3 ls s3://mmid-pds/ --no-sign-request
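If you prefer Python over the CLI, a minimal sketch of the same anonymous (unsigned) access with boto3 looks like this; the bucket name comes from the command above, everything else is illustrative:

# Minimal sketch: list the public MMID bucket anonymously with boto3,
# using an unsigned request (no AWS credentials required).
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="mmid-pds", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])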


Capitol insurrection arrests per million people by state


How have cryptocurrencies done during the Pandemic?


Data Source: Downloaded performance data on these cryptocurrencies from Investing.com which provides free historic data

Share of US Wealth by Generation


Source: US Federal Reserve

Top 100 Cryptocurrencies by Market Cap


Data Source from https://coinmarketcap.com/

 Crypto race: DOGE vs BTC, last 365 days


Data sources: Coindesk BTC, Coindesk DOGE

 Yearly Performance of TOP 100 cryptocurrencies

What if you bought $100 worth of X a year ago?

12,000 years of human population dynamics


Countries with a higher Human Development Index (HDI) than the European Union (EU)

HDI is calculated by the UN every year to measure a country’s development using average life expectancy, education level, and gross national income per capita (PPP). The EU has a collective HDI of 0.911.

Data Source: Here

Countries with a higher Human Development Index (HDI) than the United States (US)

Data source: Human Development Report 2020

Child marriage by country, by gender

Data on the percentage of children married before reaching adulthood (18 years).

Data source: The State of the World's Children 2019

 

Wars with greater than 25,000 deaths by year


Data Source : Wikipedia

Population Projection for China and India till 2050

This graphic shows India’s population overtaking China

Data Source: Here

Relative cumulative and per capita CO2 emissions 1751-2017

 


Data Source: https://ourworldindata.org

Formula 1 Cumulative Wins by Team (1950-2021)


Data Source : https://www.f1-fansite.com/f1-results/

Countries with the most nuclear warheads (linear scale)

An earlier version of this chart used a logarithmic scale, which many readers found confusing; this is the linear-scale version.

Data source: Wikipedia

Using machine learning methods to group NFL quarterbacks into archetypes


Data Source:

The author collected a series of rushing and passing statistics for NFL quarterbacks from 2015-2020 and applied a machine learning algorithm called clustering, which automatically sorts observations into groups based on shared characteristics using a mathematical "distance metric."

The idea was to use machine learning to identify NFL quarterback archetypes: to determine agnostically which quarterbacks were truly "mobile" quarterbacks, and which were "pocket passers" that relied more on passing. I used a number of metrics in my actual clustering analysis, but they can be effectively summarized across two dimensions, passing and rushing, which can in turn be roughly summarized by two metrics: passer rating and rushing yards per year. Plotting the quarterbacks along these dimensions, together with the groups chosen by the clustering methodology, shows how cleanly the methodology selected the groups.
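The author's actual pipeline is described in the linked blog post; purely as an illustration of the technique, here is a minimal scikit-learn sketch that clusters quarterbacks on the two summary metrics mentioned above, using made-up numbers:

# Minimal sketch (not the author's pipeline): k-means clustering on two
# summary metrics. The values below are invented purely for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: passer rating, rushing yards per year (hypothetical values)
X = np.array([
    [105.0, 120.0],   # pocket-passer profile
    [ 98.0, 150.0],
    [ 92.0, 600.0],   # mobile-quarterback profile
    [ 88.0, 550.0],
    [101.0, 480.0],
])

X_scaled = StandardScaler().fit_transform(X)   # put both metrics on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)                                   # cluster assignment per QB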

Read this blog article on the process for more information if you’re interested, or just check out this blog in general if you found this interesting!

Data: Collected from the ESPN API

2M rows of 1-min S&P bars (12 years of stock data) – 2008-2021

Intraday Stock Data (1 min) – S&P 500 – 2008-21: 12 years of 1 minute bars for data science / machine learning.

Granular stock bar data for research is difficult to find and expensive to buy. The author has compiled this library from a variety of sources and is making it available for free.

One compressed CSV file with 9 columns and 2.07 million rows worth of 1 minute SPY bars.  Access it here
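A minimal pandas sketch for loading a file like this is shown below; the file name, compression, and column layout are assumptions, so adjust them to match the actual download:

# Minimal sketch, assuming a gzip-compressed CSV named as below (the file
# name and the timestamp column are assumptions; check the real download).
import pandas as pd

bars = pd.read_csv(
    "SPY_1min_2008-2021.csv.gz",   # hypothetical file name
    compression="gzip",
    parse_dates=[0],               # assume the first column is a timestamp
)
print(bars.shape)                  # expect roughly (2_070_000, 9)
print(bars.head())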

A global database of COVID-19 vaccinations

Cumulative number of COVID-19 doses administered by country.
COVID-19 vaccine doses administered per 100 people versus gross domestic product per capita.
Timeline of innovation in the development of vaccines.

Datasets: A live version of the vaccination dataset and documentation are available in a public GitHub repository here. These data can be downloaded in CSV and JSON formats. PDF.

 A list of available datasets for machine learning in manufacturing

Industrial ML Datasets: a curated list of datasets publicly available for machine learning research in the area of manufacturing.

Predictive Maintenance and Condition Monitoring

Diesel Engine Faults Features (2020): signal features, 84 features, target variable C (4), 3,500 instances, synthetic data, MAT format. Link

Process Monitoring

High Storage System Anomaly Detection (2018): signal features, 20 features, target variable C (2), 91,000 instances, synthetic data, CSV format. Link

Predictive Quality and Quality Inspection

Casting Product Quality Inspection (2020): image features (300×300 and 512×512), target variable C (2), 7,348 instances, official train/test split, real data, JPG format. Link

Process Parameter Optimization

Laser Welding (2020): signal features, 13 features, 361 instances, real data, XLS format. Link

Data Analytics Certification Questions and Answers Dumps

Datasets needed for Crop Disease Identification using image processing

Here is a collection of datasets with images of leaves

and more generic image datasets that include plant leaves

http://visualgenome.org/

http://image-net.org/

Plant Phenotyping

One hundred plant species datasets

cvonline 

A Database of Leaf Images: Practice towards Plant Conservation with Plant Pathology

Survival Analysis datasets for machines


English alphabet organized by each letter’s note in ABC


Discover datasets hosted in thousands of repositories across the Web using datasetsearch.research.google.com

#dataset #search   @Google

Create, maintain, and contribute to a long-living dataset that will update itself automatically across projects.

Datasets should behave like git repositories.


Learn how to create, maintain, and contribute to a long-living dataset that will update itself automatically across projects, using git and DVC as versioning systems, and DAGsHub as a host for the datasets. 
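One way to consume such a versioned dataset from Python is DVC's API; a minimal sketch, with placeholder repository and file names, might look like this:

# Minimal sketch: read one file from a DVC-tracked dataset hosted in a Git
# repo (e.g. on DAGsHub). The repo URL, path, and revision are placeholders.
import dvc.api

data = dvc.api.read(
    path="data/dataset.csv",                       # hypothetical file path
    repo="https://dagshub.com/<user>/<project>",   # hypothetical repository
    rev="main",                                    # pin a specific revision
)
print(data[:200])   # first characters of the versioned file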

Human Rights Measurement Initiative Datasets


World Wide Energy Production by Source 1860 – 2019


Data source: https://ourworldindata.org/energy

 Project Sunroof – Solar Electricity Generation Potential by Census Tract/Postal Code

 Courtesy of Google’s Project Sunroof: This dataset essentially describes the rooftop solar potential for different regions, based on Google’s analysis of Google Maps data to find rooftops where solar would work, and aggregate those into region-wide statistics.

It comes in a couple of aggregation flavors – by census tract, where the region name is the census tract id, and by postal code, where the name is the postal code. Each also contains latitude/longitude bounding boxes and averages, so that you can download based on that, and you should be able to do custom larger aggregations using those, if you'd like.

Carbon emission arithmetic + hard v. soft science


Data sources: video from the data-driven documentary The Fallen of World War II. Here and Here

Most popular YouTuber in every country 2021

What Does 1GB of Mobile Data Cost in Every Country?


Key Concepts of Data Science

A large dataset aimed at teaching AI to code: it consists of some 14M code samples and about 500M lines of code in more than 55 different programming languages, from modern ones like C++, Java, Python, and Go to legacy languages like COBOL, Pascal, and FORTRAN.

GitHub repo:

Download page

NSRDB: National Solar Radiation Database

 Download instructions are here

Cheat Sheet for Machine Learning, Data Science.


Emigrants from the UK by Destination


Data source: Originally at the location marked on the Sankey Flow but is now here

Direct link to the spreadsheet used

US Rivers and Streams Dataset

Data source: https://hub.arcgis.com/

Data visualization


Bubble Chart that compares the GDP of the G20 Countries

Data source: https://databank.worldbank.org/home.aspx

Desktop OS Market Share 2003 – 2021


Data source: w3school

National Parks of North America


Data Source: DataBayou

 NPS.gov, Open.canada.ca, and sig.conanp.gob.mx 

Inflation of Bitcoin and DogeCoin vs. Federal Reserve target


Data source:

Percentage of women who experienced physical or sexual violence since the age of 15 in the EU


Data Source from The Guardian: 

The whole report –  Questionnaire

Canadian Interprovincial Migration


Some context  here

Data  scraped from StatsCan

Covid-19 Vaccination Doses Administered per 100 in the G20

Data source: https://ourworldindata.org/covid-vaccinations

What does per 100 mean?

When the whole country is double vaccinated, the value will be 200 doses per 100 population. At the moment the UK is at about 85, because ~70% of the population has had at least one dose and ~15% of the population (a subset of that 70%) have had two. Hence ~30% are currently unprotected – myself included until Sunday.
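A minimal sketch of that arithmetic, using the approximate UK shares quoted above:

# Minimal sketch of the "doses per 100 people" arithmetic described above,
# using the approximate shares quoted (assumptions, not exact figures).
population_share_one_dose  = 0.70   # at least one dose
population_share_two_doses = 0.15   # subset of the above with a second dose

doses_per_100 = (population_share_one_dose + population_share_two_doses) * 100
print(doses_per_100)   # ~85, matching the figure mentioned above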

Import/Export of Conventional Arms by Different Countries over past 2 decades

DataSource: SIPRI Arms Transfer Database

Aggregated disease comparison dataset

Data Source: Here and Here

According to the author of the source data: "For the 1918 Spanish Flu, the data was collected by knowing that the total counts were 500M cases and 50M deaths, and then taking a fraction of that per day based on the area of this graph image:" – the graph used is here:


Trending Google Searches by State Between 2018 and 2020

Data source: https://trends.google.com. Trending topics from 2010 to 2019 were taken from Google's annual Year in Search summaries.

The full, ~11 minute video covering the whole 2010s decade is available here at https://youtu.be/xm91jBeN4oo

Google Trends provides weekly relative search interest for every search term, along with the interest by state. Using these two datasets for each term, we’re able to calculate the relative search interest for every state for a particular week. Linear interpolation was used to calculate the daily search interest.
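A minimal pandas sketch of that interpolation step, with made-up weekly values, might look like this:

# Minimal sketch: upsample weekly relative search interest to daily values
# with linear interpolation. The numbers are invented for illustration.
import pandas as pd

weekly = pd.Series(
    [40, 55, 80, 60],
    index=pd.date_range("2020-01-05", periods=4, freq="W"),
)

daily = weekly.resample("D").interpolate(method="linear")
print(daily.head(10))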

Market capitalization in billions of dollars of the Top 20 Cryptocurrencies on 2021-05-20

Data source: CoinMarket from end of 2013 until present


Top Chess Players From 2000-2020

Data source: https://ratings.fide.com/

The y-axis shows world Elo ratings (called FIDE ratings).

Comparing Emissions Sources – How to Shrink your Carbon Footprint More Effectively


 Data sources: Here

Source article: Here

Oil and gas-fired power plants in the world

Data is from the Global Power Plant Database (World Resources Institute)

See map’s description here


Top 100 Reddit posts of all time


Source: r/all on Reddit

Tool used: https://www.meta-chart.com

Fastest routes on land (and sometimes, boat) between all 990 pairs of European capitals


Source: Reddit

From the author: I started with data on roads from naturalearth.com, which also includes some ferry lines. I then calculated the fastest routes (assuming a speed of 90 km/h on roads, and 35 km/h on boat) between each pair of 45 European capitals. The animation visualizes these routes, with brighter lines for roads that are more frequently “traveled”.

In reality these are of course not the most traveled roads, since people don’t go from all capitals to all other capitals in equal measure. But I thought it would be fun to visualize all the possible connections.

The model is also very simple, and does not take into account varying speed limits, road conditions, congestion, border checks and so on. It is just for fun!

In order to keep the file size manageable, the animation only shows every tenth frame.

Is Russia, Turkey or country X really part of Europe? That of course depends on the definition, but it was more fun to include them than to exclude them! The Vatican is however not included since it would just be the same as the Rome routes. And, unfortunately, Nicosia on Cyprus is not included due to an error on my behalf. It should be!
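Purely as an illustration of the routing idea (not the author's code), here is a minimal networkx sketch that weights edges by travel time, with different speeds for roads and ferries, and takes the fastest path between two capitals; the edges are toy values:

# Minimal sketch: shortest travel-time routing on a small toy graph.
# Edge weights are hours = length_km / speed, with different speeds for
# roads and ferries. The distances below are invented for illustration.
import networkx as nx

ROAD_KMH, FERRY_KMH = 90.0, 35.0

G = nx.Graph()
edges = [
    ("Paris", "Brussels", 300, "road"),
    ("Brussels", "Amsterdam", 200, "road"),
    ("Amsterdam", "Copenhagen", 780, "road"),
    ("Copenhagen", "Oslo", 600, "ferry"),
]
for a, b, km, kind in edges:
    speed = ROAD_KMH if kind == "road" else FERRY_KMH
    G.add_edge(a, b, hours=km / speed)

path = nx.shortest_path(G, "Paris", "Oslo", weight="hours")
hours = nx.shortest_path_length(G, "Paris", "Oslo", weight="hours")
print(path, round(hours, 1))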

Link to final still image in high resolution on my twitter

Pokemon Dataset

  1. Dataset of all 825 Pokemon (this includes Alolan Forms). It would be preferable if there are at least 100 images of each individual Pokemon.

https://github.com/veekun/pokedex: This is a Python library slash pile of data containing a whole lot of data scraped from Pokémon games. It’s the primary guts of veekun.

https://pokeapi.co/about

  2. This dataset comprises more than 800 Pokémon from up to 8 generations.

Using this dataset has been fun for me. I used it to create a mosaic of Pokémon using an image as reference. You can find it here and it's free to use: Couple Mosaic (powered by Pokémon)

Here is the data type information in the file:

  • Name: Pokémon name
  • Type: Type of Pokémon, e.g. Grass / Fire / Water
  • HP: Hit Points
  • Attack: Attack Points
  • Defense: Defense Points
  • Sp. Atk: Special Attack Points
  • Sp. Def: Special Defense Points
  • Speed: Speed Points
  • Total: Total Points
  • url: Pokémon web page
  • icon: Pokémon image

Data File: Pokemon-Data.csv
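A minimal pandas sketch for loading that file and peeking at the columns listed above (assuming the CSV is available locally):

# Minimal sketch: load Pokemon-Data.csv and look at a few of the columns
# described in the field list above.
import pandas as pd

pokemon = pd.read_csv("Pokemon-Data.csv")
print(pokemon[["Name", "Type", "HP", "Attack", "Defense", "Speed", "Total"]].head())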

30×30 m Worldwide High-Resolution Population and Demographics Data

ETL pipeline for Facebook's research project to provide detailed large-scale demographics data. It's broken down into roughly 30×30 m grid cells and provides info on groups by age and gender.

Population Density Overview

Data Source and API for access

Article about Dataset at Medium

Gridded global datasets for Gross Domestic Product and Human Development Index over 1990–2015

Rasterized GDP dataset – basically a heat map of global economic activity.

Gap-filled multiannual datasets in gridded form for Gross Domestic Product (GDP) and Human Development Index (HDI)

Data source here:

Decrease in worldwide infant mortality from 1950 to 2020


Data Sources: United Nations, CIA World Factbook, IndexMundi.

Data Collectors

Data Unblockers

Countries of the world sorted by those that have warmed the most in the last 10 years, showing temperatures from 1890 to 2020 


Data source: Gistemp temperature data

The GISS Surface Temperature Analysis ver. 4 (GISTEMP v4) is an estimate of global surface temperature change. Graphs and tables are updated around the middle of every month using current data files from NOAA GHCN v4 (meteorological stations) and ERSST v5 (ocean areas), combined as described in our publications Hansen et al. (2010) and Lenssen et al. (2019).

Climate change concern vs personal spend to reduce climate change


Data Source: Competitive Enterprise Institute (PDF)

 Less than 20 firms produce over a third of all carbon emissions

The Illusion of Choice in Consumer Brands


Buying a chocolate bar? There are seemingly hundreds to choose from, but it's just the illusion of choice. They pretty much all come from Mars, Nestlé, or Mondelēz (which owns Cadbury).

Source: Visual Capitalist

Yearly Software Sales on PlayStation Consoles since 1994


Some context for these numbers :

  • PS4 holds the record for being the console to have sold the most games in video game history (> 1.622B units)
    • Previous record holder was PS2 at 1.537B games sold
  • PS4 holds the record for having sold the most games in a single year (> 300M units in FY20)
  • FY20 marks the biggest yearly software sales in PlayStation ecosystem with more than 338M units
  • Since the PS5 release, Sony has combined PS4/PS5 software sales
  • In FY12, Sony combined PS2/PS3 and PSP/VITA software sales
  • Sony stopped disclosing software sales in FY13/14

Yearly Hardware Sales of PlayStation Consoles since 1994


Sony combined PS2/PS3 hardware sales in FY12 and combined PSP/VITA sales in FY12/13/14

Cybertruck vs F150 Lightning pre-orders, by time since debut


Source: Ford exec tweeting about preorder numbers this week

Top 100 Most Populous City Proper in the world


The city with 32 million is Chongqing; "Shan" is Shanghai, "Beijin" is Beijing, and "Guangzho" is Guangzhou (the labels are truncated).

Tax data for different countries

Dataset is here

What do Europeans feel most attached to – their region, their country, or Europe?


Data source: Builds on data from the 2021 European Quality of Government Index. You can read more about the survey and download the data here

Cost of 1gb mobile data in every country


Dataset: Visual Capitalist

Frequency of all digrams in 18 languages, diacritics included 


Dataset (according to author): Dictionaries are scattered on the internet and had to be borrowed from several sources: the Scrabble3d project, and Linux spellcheck dictionaries. The data can be found in the folder “Avec_diacritiques”.

Criteria for choosing a dictionary:
– No proper nouns
– “Official” source if available
– Inclusion of inflected forms
– Among two lists, the largest was fancied
– No or very rare abbreviations if possible- but hard to detect in unknown languages and across hundreds of thousands of words.

Mapped: The World’s Nuclear Reactor Landscape


Dataset: Visual Capitalist

Database of 999 chemicals based on liver-specific carcinogenicity

The author found this dataset in a more accessible format upon searching for the keyword “CDPB” (Carcinogenic Potency Database) in the National Library of Medicine Catalog. Check out this parent website for the data source and dataset description. The dataset referenced in OP’s post concerns liver specific carcinogens, which are marked by the “liv” keyword as described in the dataset description’s Tissue Codes section.

SMS Spam Collection Data Set

Download: Data Folder | Data Set Description

The SMS Spam Collection is a public set of labeled SMS messages that have been collected for mobile phone spam research.

Open Datasets for Autonomous Driving

  • A2D2 Dataset
  • ApolloScape Dataset
  • Argoverse Dataset
  • Berkeley DeepDrive Dataset
  • CityScapes Dataset
  • Comma2k19 Dataset
  • Google-Landmarks Dataset
  • KITTI Vision Benchmark Suite
  • LeddarTech PixSet Dataset
  • Level 5 Open Data
  • nuScenes Dataset
  • Oxford Radar RobotCar Dataset
  • PandaSet
  • Udacity Self Driving Car Dataset
  • Waymo Open Dataset

Open Dataset people are looking for [Help if you can]

  1. Looking for Dataset on the outcomes of abstinence-only sex education.
  2. Looking for Data set of horse race results / lottery results any results related to gambling [1, 2, 3]
  3. Looking for Football (Soccer) Penalties Dataset [1, 2]
  4. Looking for public datasets on baseball [1, 2, 3]
  5. Looking for Datasets on edge computing for AI bandwidth usage, latency, memory, CPU/GPU resource usage? [1 ,2 ]
  6. Dataset of employee attrition or turnover rate? [1, 2]
  7. Is there a Dataset for homophobic tweets? [1 ,2, 3, 4, ]
  8. Looking for a Machine condition Monitoring Dataset [1,2]
  9. Where to find data for credit risk analysis? [1, 2]
  10. Datasets on homicides anywhere in the world [1, 2]
  11. Looking for a dataset containing coronavirus self-test (if this is a thing globally) pictures for ML use
  12. Looking for Beam alignment 5G vehicular networks dataset
  13. Looking for tidy dataset for multivariate analysis (PCA, FA, canonical correlations, clustering)
  14. Indian all types of Fuel location datasets
  15. Curated social network datasets with summary statistics and background info
  16. Looking for textile crop disease datasets such as jute, flax, hemp
  17. Shopify App Store and Chrome Webstore Datasets
  18. Looking for dataset for university chatbot
  19. Collecting real life (dirty/ugly) datasets for data analysis
  20. In Need of Food Additive/Ingredient Definition Database
  21. Recent smart phone sensor Dataset – Android
  22. Cracked Mobile Screen Image Dataset for Detection
  23. Looking for Chiller fault data in a chiller plant
  24. Looking for dataset that contains the genetic sequences of native plasmids?
  25. Looking for a dataset containing fetus size measurements at various gestational ages.
  26. Looking for datasets about mental health since 2021
  27. Do you know where to find a dataset with Graphical User Interfaces defects of web applications? [1, 2, 3 ]
  28. Looking for most popular accounts on social medias like Twitter, Tik Tok, instagram, [1, 2, 3]
  29. GPS dataset of grocery stores
  30. What is the easiest way to bulk download all of the data from this epidemiology website? (~20,000 files)
  31. Looking for Dataset on Percentage of death by US state and Canadian province grouped by cause of death?
  32. Looking for Social engineering attack dataset in social media
  33. Steam Store Games (Clean dataset) by Nik Davis
  34. Dataset that lists all US major hospitals by county
  35. Another Data that list all US major hospitals by county
  36. Looking for open source data relating privacy behavior or related marketing sets about the trustworthiness of responders?
  37. Looking for a dataset that tracks median household income by country and year
  38. Dataset on the number of specific surgical procedures performed in the US (yearly)
  39. Looking for a dataset from reddit or twitter on top posts or tweets related to crypto currency
  40. Looking for Image and flora Dataset of All Known Plants, Trees and Shrubs
  41. US total fertility rates data one the state level
  42. Dataset of Net Worth of *World* Politicians
  43. Looking for water wells and borehole datasets
  44. Looking for Crop growth conditions dataset
  45. Dataset for translate machine JA-EG
  46. Looking for Electronic Health Record (EHR) record prices
  47. Looking for tax data for different countries
  48. Musicians Birthday Datasets and Associated groups
  49. Searching for dataset related to car dealerships [1]
  50. Looking for Credit Score Approval dataset
  51. Cyberbullying Dataset by demographics
  52. Datasets on financial trends for minors
  53. Data where I can find out about reading habits? [1, 2]
  54. Data sets for global technology adoption rates
  55. Looking for any and all cat / feline cancer datasets, for both detection and treatment
  56. ITSM dictionary/taxonomy datasets for topic modeling purposes
  57. Multistage Reliability Dataset
  58. Looking for dataset of ingredients for food[1]
  59. Looking for datasets with responses to psychological questionnaires[1,2,3]
  60. Data source for OEM automotive parts
  61. Looking for dataset about gene regulation
  62. Customer Segmentation Datasets (For LTV Models)
  63. Automobile dataset, years of ownership and repairs
  64. Historic Housing Prices Dataset for Individual Houses
  65. Looking for the data for all the tokens on the Uniswap graph
  66. Job applications emails datasets, either rejection, applications or interviews
  67. E-learning datasets for impact on e learning on school/university students
  68. Food delivery dataset (Uber Eats, Just Eat, …)
  69. Data Sets for NFL Quarterbacks since 1995
  70. Medicare Beneficiary Population Data
  71. Covid 19 infected Cancer Patients datasets
  72. Looking for  EV charging behavior dataset
  73. State park budget or expansionary spending dataset
  74.  Autonomous car driving deaths dataset
  75. FMCG Spending habits over the pandemic
  76. Looking for a Question Type Classification dataset
  77. 20 years of Manufacturer/Retail price of Men’s footwear
  78. Dataset of Global Technology Adoption Rates
  79. Looking For Real Meeting Transcripts Dataset
  80. Dataset For A Large Archive Of Lyrics  [1,2,3]
  81. Audio dataset with swearing words
  82. A global, georeferenced event dataset on electoral violence with lethal outcomes from 1989 to 2017. [1,]
  83. Looking for Jaundice Dataset for ML model
  84. Looking for social engineering attack detection dataset?
  85. Wound image datasets to train ML model [1]
  86. Seeking for resume and job post dataset
  87. Labelled dataset (sets of images or videos) of human emotions [1,2]
  88. Dataset of specialized phone call transcripts
  89. Looking for Emergency Response Plan Dataset for family Homes, condo buildings and Companies
  90. Looking for Birthday wishes datasets
  91. Desperately in need of national data for real estate [1,2,]
  92. NFL playoffs games stadium attendance dataset
  93. Datasets with original publication dates of novels [1,2]
  94. Annotated Documents with Images Data Dump
  95. Looking for  dataset for “Face Presentation Attack Detection”
  96. Electric vehicle range & performance dataset [1, 2]
  97. Dataset or API with valid postal codes for US, Mexico, and Canada with country, state/province, and city/town [1, 2, 3, 4, 5, 6]
  98. Looking for Data sources regarding Online courses dropout rate, preferably by countries [1,2 ]
  99. Are there dataset for language learning [1, 2]
  100. Corporate Real Estate Data [1,2, 3]
  101. Looking for simple clinical trials datasets [1, 2]
  102. CO2 Emissions By Aircraft (or Aircraft Type) – Climate Analysis Dataset [1,2, 3, 4]
  103. Player Session/playtime dataset from games [1,2]
  104. Data sets that support Data Science (Technology, AI etc) being beneficial to sustainability [1,2]
  105. Datasets of a grocery store [1,2]
  106. Looking for mri breast cancer annotation datasets [1,2]
  107. Looking for free exportable data sets of companies by industry [1,2]
  108. Datasets on Coffee Production/Consumption [1,2]
  109. Video gaming industry datasets – release year, genre, games, titles, global data  [1,2]
  110. Looking for mobile speaker recognition dataset [1,2]
  111. Public DMV vehicle registration data [1,2]
  112. Looking for historical news articles based on industry sector [1,2]
  113. Looking for Historical state wide Divorce dataset [1,2]
  114. Public Big Datasets – with In-Database Analytics [1,2]
  115. Dataset for detecting Apple products (object detection) [1,2]
  116. Help needed to get the American Hospital Association (AHA) datasets (AHA Annual Survey, AHA Financial Database, and AHA IT Survey datasets)  [1, 2]
  117. Looking for help Getting College Football Betting Data [1,2]
  118. 2012-2020 US presidential election results by state/city dataset [1,2, 3]
  119. Looking for datasets of models and images captured using iphone’s LIDAR? [1,2]
  120. Finding Datasets to Age Texts (Newspapers, Books, Anything works) [1, 2, 3]
  121. Looking for cost of living index of some type for US [1,2]
  122. Looking for dataset that recorded historical NFT prices and their price increases, as well as timestamps. [1,2]
  123. Looking for datasets on park boundaries across the country [1, 2, 3]
  124. Looking for medical multimodal datasets [1, 2, 3]
  125. Looking for Scraped Parler Data [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  126. Looking for Silicon Wafer Demand dataset [1, 2]
  127. Looking for a dataset with the values [Gender – Weight – Height – Health] [1, 2]
  128. Exam questions (mcqs and short answer) datasets? [1, 2]
  129. Canada Botanical Plants API/Database [1, 2, 3]
  130. Looking for a geospatial dataset of birds Migration path [1, 2, 3]
  131. WhatsApp messages dataset/archives [1, 2]
  132.  Dataset of GOOD probiotic microorganisms in the HUMAN gut [1, 2]
  133. Twitter competition to reduce bias in its image cropping [1,2]
  134. Dataset: US overseas military deployments, 1950–2020 [1,2]
  135. Dataset on human clicking on desktop [1,2]
  136. Covid-19 Cough Audio Classification Dataset [1, 2]
  137. 12,000+ known superconductors database [1, 2, 3]
  138. Looking for good dataset related to cyber security for prediction [1, 2]
  139. Where can I find face datasets to classify whether it is a real person or a picture of that person. For authentication purposes? [1,2]
  140. DataSet of Tokyo 2020 (2021) Olympics ( details about the Athletes, the countries they representing, details about events, coaches, genders participating in each event, etc.) [1, 2]
  141. What is your workflow for budget compute on datasets larger than 100GB? [1, 2, 3]
  142. Looking for a dataset that contains information about cryptocurrencies. [1, 2]

  143. Looking for a depression dataset [1,2, 3]

  144. Looking for chocolate consumer demographic data [1,2, 3]
  145. Looking for thorough dataset of housing price/tax history [1, 2, 3]
  146. Wallstreetbets data scraping from 01/01/2020 to 01/06/2021 [1, 2]
  147. Retinal Disease Classification Dataset [1, 2]
  148. 400,000 years of CO2 and global temperature data [1, 2, 3]
  149. Looking for datasets on neurodegenerative diseases [1, 2, 3]
  150. Dataset for Job Interviews (either Phone, Online, or Physical) [1,2 ,3]
  151. Firm Cyber Breach Dataset with Firm Identifiers [1, 2, 3]
  152. Wondering how Stock market and Crypto website get the Data from [1, 2, 3, 4, 5]
  153. Looking for a dataset with US tourist injuries, attacks, and/or fatalities when traveling abroad [1, 2, 3]
  154. Looking for Wildfires Database for all countries by year and month? The quantity of wildfires happening, the acreage, things like that, etc.. [1, 2, 3, ]
  155. Looking for a pill vs fake pill image dataset [1, 2, 3, 4, 5, 6, 7]

Cars for sale in Germany from 2011 to 2021

Dataset obtained by scraping AutoScout. In the file, you will find features describing 46,405 vehicles: mileage, make, model, fuel, gear, offer type, price, horsepower, registration year.

Dataset scraped from AutoScout24 with information about new and used cars.

Percentage of female students in higher education by subject area


The data was obtained from the UK government website here, so unfortunately there are some things I'm unaware of regarding data and methodology.

All the passes: A visualization of ~1 million passes from 890 matches played in major football/soccer leagues/cups

  •  Champions League 1999
  • FA Women’s Super League 2018
  • FIFA World Cup 2018, La Liga 2004 – 2020
  • NWSL 2018
  • Premier League 2003 – 2004
  • Women’s World Cup 2019

1million+ football/soccer passes visualization

Data Source: StatsBomb

Global "Urbanity" Dataset (using population mosaics, nighttime lights, and road networks)

In this project, the authors have designed a spatial model which is able to classify urbanity levels globally and with high granularity. As the target geographic support for the model, they selected the quadkey grid at level 15, which has cells of approximately 1×1 km at the equator.

Dataset:  Here 
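For readers unfamiliar with quadkeys: a level-15 quadkey is a 15-character string that identifies one tile of the Bing-style tile grid. Here is a minimal sketch (not the authors' code) that converts a latitude/longitude point to its level-15 quadkey:

# Minimal sketch: convert a lat/lon point to a level-15 quadkey,
# i.e. the ~1x1 km grid cell used by the dataset described above.
import math

def latlon_to_quadkey(lat, lon, level=15):
    lat = max(min(lat, 85.05112878), -85.05112878)   # Web Mercator limits
    x = (lon + 180.0) / 360.0
    sin_lat = math.sin(math.radians(lat))
    y = 0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)
    n = 1 << level                                    # tiles per axis
    tile_x = min(int(x * n), n - 1)
    tile_y = min(int(y * n), n - 1)
    digits = []
    for i in range(level, 0, -1):
        digit, mask = 0, 1 << (i - 1)
        if tile_x & mask:
            digit += 1
        if tile_y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)

print(latlon_to_quadkey(48.8566, 2.3522))   # 15-character quadkey for Paris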

Percentage of students with disabilities in higher education by subject area


The author obtained the data from the UK Government website, so unfortunately the methodology and how the data were collected are unknown.

The comparison to the general public is a great idea – according to the Government site, 6% of children, 16% of working-age adults and 45% of pension-age adults are disabled.

Dataset: here

Arrests for Hate Crimes in NYC by Category, 2017-2020


The Most Successful U.S. Sports Franchises


Data source: https://www.sports-reference.com/

Adult cognitive skills (PIAAC literacy and numeracy) by Percentile and by country

According to the author (https://www.reddit.com/user/newpua_bie/) , this animation depicts adult cognitive skills, as measured by the PIAAC study by OECD. Here, the numeracy and literacy skills have been combined into one. Each frame of the animation shows the xth percentile skill level of each individual country. Thus, you can see which countries have the highest and lowest scores among their bottom performers, median performers, and top performers. So for example, you can see that when the bottom 1st percentile of each country is ranked, Japan is at the top, Russia is second, etc. Looking at the 50th percentile (median) of each country, Japan is top, then Finland, etc.

The Programme for the International Assessment of Adult Competencies (PIAAC) is a study by the OECD that measures literacy, numeracy, and "problem-solving in technology-rich environments" skills for people ages 16 and up. For those of you who are familiar with the school-age children PISA study, this is essentially an adult version of it.

Dataset: PIAAC 
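A minimal pandas sketch of the percentile-by-country idea, with made-up scores and column names, might look like this:

# Minimal sketch: for each country, take the x-th percentile of a combined
# skill score and rank countries at that percentile. Values are invented.
import pandas as pd

df = pd.DataFrame({
    "country": ["Japan", "Japan", "Finland", "Finland", "Russia", "Russia"],
    "skill":   [310, 280, 305, 260, 295, 250],
})

for pct in (0.01, 0.50, 0.99):
    ranked = (df.groupby("country")["skill"]
                .quantile(pct)
                .sort_values(ascending=False))
    print(f"{int(pct * 100)}th percentile:\n{ranked}\n")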

G7 Corporate Tax rate 1980 – 2020


Dataset: Tax Foundation

Euro 2020 (played in 2021) Group Stage Predictions Based on a Bayesian Linear Item Response Model


Data Source: UEFA qualifying match data

The model was built in Stan and was inspired by Andrew Gelman's World Cup model shown here. These plots show posterior probabilities that the team on the y-axis will score more goals than the team on the x-axis. There is some redundancy of information here (because if I know P(England beats Scotland), then I know P(Scotland beats England)).

Data Source: Italian National Institute of Statistics (Istituto Nazionale di Statistica)

The 15 most shared musicians on Reddit


Data source: The authors made a dataset of YouTube and Spotify shares on Reddit. More info available here

Spam vs. Legitimate Email, Average Global Emails per Day


Data Source: Here. The author  computed the average per day over the June 3 – June 9, 2021 period.

spam vs legitimate email 2021

Falling Fertility, 1800–2016

Data source: Here (go to the “Babies per woman,” “Income,” and “Population” links on that page).

Europe Covid-19 waves


Data Source: Here

Who is going to win EURO 2020? Predicted probabilities pooled together from 18 sources


Data source: Here

Population Density of Canada 2020


DataSet:  Gathered from https://www.worldpop.org/project/

The length of each spike corresponds to population density: longer spikes indicate denser areas.

The portion of a country’s population that is fully vaccinated for COVID (as of June 2021) scales with GDP per capita.


Dataset of Chemical reaction equations

1-  https://chemequations.com/en/

2- Kaggle chemistry section 

3- Reaction datasets 

4- Chemistry datasets

5- BiomedCentral 

Maths datasets

1111 2222 3333 Equation Learning 

Datasets for Stata Structural Equation Modeling

Mathematics Dataset

SQL Queries Dataset 

SEDE (Stack Exchange Data Explorer) is a dataset comprising 12,023 complex and diverse SQL queries and their natural language titles and descriptions, written by real users of the Stack Exchange Data Explorer out of natural interaction. These pairs contain a variety of real-world challenges which were rarely reflected so far in any other semantic parsing dataset. Access it here

Countries of the world, ranked by population, with the 100 largest cities in the world marked

According to the author:

Each map size is proportional to population, so China takes up about 18-19% of the map space.

Countries with very far-flung territories, such as France (or the USA) will have their maps shrunk to fit all territories. So it is the size of the map rectangle that is proportional to population, not the colored area. Made in R, using data from naturalearthdata.com. Maps drawn with the tmap package, and placed in the image with the gridExtra package. Map colors from the wesanderson package.

Data source: The Economist

What businesses in different countries search for when they look for a marketing agency – “creative” or “SEO”?


Data source: Google Trends

More maps, charts and written analysis on this topic here

Is the economic gap between new and old EU countries closing?


Data source:  Eurostat

Interactive version so you can click on those circles here

Reddit r/wallstreetbets posts and comments in real-time

  • Posts

  • Comments

  • Beneath adds some useful features for shared data, like the ability to run SQL queries, sync changes in real-time, a Python integration, and monitoring. The monitoring is really useful as it lets you check out the write activity of the scraper (no surprise, WSB is most active when markets are open).
  • The scraper (which uses Async PRAW) is open source here

Global NO2 pollution data visualization June 2021

Data Source: SILAM

Shopify App Store Report: 2021

Data source: Marketplace Apps

The Chrome Webstore Report: 2021

Data source: Marketplace Apps

Percentage of Adults with HIV/AIDS in Africa


Dataset:  All the countries through the UN AIDS organization 

Recorded CDC deaths (2014 – June 16, 2021) from Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99)


Data Source: combined CDC weekly death counts 2014 – 2019 and CDC weekly death counts 2020-2021

What are the long term gains on cryptocurrencies?


Data Sources: investing.com and coingecko.com

The chart shows the average daily gain in $ if $100 were invested at a date on the x-axis. Total gain was divided by the number of days between the day of investing and June 13, 2021. Gains were calculated on average 30-day prices.

Time range: from March 28, 2013, till June 13, 2021
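A minimal sketch of that calculation, with made-up prices, might look like this:

# Minimal sketch: smooth prices with a 30-day average, compute the total
# gain on a $100 purchase, then divide by the number of days held until
# the end date. The prices below are invented for illustration.
import pandas as pd

prices = pd.Series(
    [10, 12, 15, 14, 20, 25, 30],
    index=pd.to_datetime(
        ["2021-01-01", "2021-02-01", "2021-03-01", "2021-04-01",
         "2021-05-01", "2021-06-01", "2021-06-13"]),
)

smoothed = prices.rolling("30D").mean()          # 30-day average price
buy_date, end_date = smoothed.index[0], smoothed.index[-1]
units = 100 / smoothed[buy_date]                 # $100 invested at buy date
total_gain = units * smoothed[end_date] - 100
avg_daily_gain = total_gain / (end_date - buy_date).days
print(round(avg_daily_gain, 2))                  # average $ gained per day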

Life Expectancy and Death Probability by Age and Gender


Data source: Here

Daily Coronavirus cases in Canada vs % of Population Vaccinated


Data Source: Cases Vaccines

Google Playstore Apps with 2.3million app data on Kaggle

The Google Play Store dataset is now available on Kaggle with double the data: 2.3 million Android applications, plus a new attribute stating the date and time each record was scraped.

Dataset: Get it here or here

African languages dataset

There are 3,000 or more tribes in Africa, and within those there are sub-tribes.

1- Introduction to African Languages (Harvard)

2- Languages of the world at Ethnologue

3- Britannica: Nilo-Saharan Laguages

4- Britannica: Khoisan Languages

Daily Temperature of Major Cities Dataset

Daily average temperature values recorded in major cities of the world.

 The dataset is available as separate txt files for each city here. The data is available for research and non-commercial purposes only

 Do stricter gun laws reduce firearms homicides?


Data Sources: Guns to Carry, EFSGV, CDC

According to the author: Looking at non-suicide firearms deaths by state (2019), and then grouping by the Guns to Carry rating (1-5 stars), it seems that stricter gun laws are correlated with fewer firearms homicides. Guns to Carry rates states based on “Gun friendliness” with 1 star being least friendly (California, for example), and 5 stars being most friendly (Wyoming, for example). The ratings aren’t perfect but they include considerations like: Permit required, Registration, Open carry, and Background checks to come up with a rating.

The numbers at the bottom are the average non-suicide deaths calculated within the rating group. Each bar shows the number for the individual state.

Interesting that DC is through the roof despite having strict laws. On the flip side, Maine is very friendly towards gun owners and has a very low homicide rate, despite having the highest ratio of suicides to homicides.

Obviously, lots of things to consider and this is merely a correlation at a basic level. This is a topic that interested me so I figured I’d share my findings. Not attempting to make a policy statement or anything.

Relative frequency of words in economics textbooks vs their frequency in mainstream English (the Google Books corpus)


Author

Data Source: Data for word frequency in the Google corpus is from the 2019 Ngram dataset. For details about how to work with this data, see Working With Google Ngrams: A Data-Wrangling Tale.

Data for word frequency in econ textbooks was compiled by myself by scraping words from 43 undergraduate economics textbooks. For details see Deconstructing Econospeak.

Hours per day spent on mobile devices by US adults


Author: nava_7777

Data Source: from eMarketer, as quoted by Jon Erlichman

Purpose according to the author: raw textual numbers (like in the original tweet) are hard to compare, particularly the acceleration or deceleration of a trend. I did this for myself, but maybe it is useful to somebody.

Environmental Impact of Coffee Brewing Methods


Author: Coffee_Medley

Data Source: 1 2 3

More according to the author:

  • Measurements and calculations of NG and Electricity used to heat four cups of distilled water by Coffee Medley (6/14/2021)

  • Average coffee bag and pod weight by Coffee Medley (6/14/2021)

Murders in major U.S. Cities: 2019 vs. 2020


Author: datacanbeuseful

Data source: NPR

New Harvard Data (Accidentally) Reveal How Lockdowns Crushed the Working Class While Leaving Elites Unscathed

Data source: Harvard

Support for same-sex marriage by religious group


Data source: PEW

More: Summary of religiously (un)affiliated people’s views on homosexuality, broken down into 18 countries

Daily chance of dying for Americans


Author: NortherSugarLoaf

Data source: SSA Actuarial Data

Processing: Yearly probability of death is converted to the daily probability and expressed in micromorts. Plotted versus age in years.

Micromort: a unit of risk equal to a one-in-a-million chance of death.
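A minimal sketch of the conversion, using the 5.8% yearly figure quoted in the author's notes for 80-year-old males (and assuming the risk is spread evenly across the year):

# Minimal sketch: turn a yearly death probability into a daily probability
# and express it in micromorts (1 micromort = one-in-a-million chance).
yearly_p = 0.058                                   # ~5.8% at age 80 (male)

daily_p = 1 - (1 - yearly_p) ** (1 / 365)          # assume identical, independent days
micromorts_per_day = daily_p * 1_000_000
print(round(micromorts_per_day))                   # roughly 164 micromorts per day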

According to the author,

A few things to notice: It's dangerous to be a newborn. The same mortality rates are reached again only in the fifties. However, mortality drops after birth very quickly, and the safest age is about ten years old. After a mortality jump in puberty (especially high for boys), mortality increases mostly exponentially with age. Every thirty years of life increases the chance of dying about ten times. At 80, the chance of dying in a year is about 5.8% for males and 4.3% for females. This mortality difference holds for all ages. The largest disparity is at about twenty-three years old, when males die at a rate about 2.7 times higher than females.

This data is from before COVID.

Here is the same graph but in linear Y axis scale

Here is the male to female mortality ratio

Mapping Global Carbon Emission Intensity (Dec 2020)


Data Source: Copernicus Atmosphere Monitoring Service (CAMS)

Religions with the most Adherents from 1945 – 2010


Data source: Zeev Maoz and Errol A. Henderson. 2013. “The World Religion Dataset, 1945-2010: Logic, Estimates, and Trends.” International Interactions, 39: 265-291.

IPO Returns 2000-2020


Data from: iposcoop.com
From the author u/nobjos: The full article on the above analysis can be found here
I have a sub, r/market_sentiment, where I do a comprehensive deep-dive on one investment strategy/topic every week! Some of the author's popular articles are:
a. Performance of Jim Cramer’s stock picks
b. Performance of buy and sell recommendations made by financial analysts in the last decade
c. Benchmarking performance of Motley Fool against the S&P 500
Funko IPO is considered to have the worst first-day return for an IPO in the last two decades.
Out of the top 10 list, only 3 Investment banks had below-average returns.
On average, IPOs did make money for the investor. But the amount is significantly different if you got allocated the IPO at offer price vs you get the IPO at market price.
Baidu.com made a whopping 354% on its listing day. Another interesting observation is that 6 out of 10 companies in the list were listed in 2000 (just before the dot-com crash).

Total number of streams per artist vs. number of Top 200 hits (Spotify Top 200 since 2017)


Author: blairfix

Data is from the Spotify Top 200 and covers the period from Jan. 1, 2017 to Jun. 9, 2021. You can download my dataset here.

For every artist that appears in the Top 200, I add up their total streams (for all songs) and the total number of songs in the dataset.
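A minimal pandas sketch of that aggregation; the file and column names are assumptions about the downloadable dataset:

# Minimal sketch: total streams and number of charting songs per artist.
# File name and column names are assumptions; adjust to the real dataset.
import pandas as pd

top200 = pd.read_csv("spotify_top200.csv")
per_artist = (top200.groupby("artist")
                    .agg(total_streams=("streams", "sum"),
                         n_songs=("song", "nunique"))
                    .sort_values("total_streams", ascending=False))
print(per_artist.head(15))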

For a commentary on the data, see The Half Life of a Spotify Hit.

Number of Miss Americas by U.S. State


Data Source: Wikipedia

The World’s Nuclear Warheads

Author: academiadvice

Data Source: Federation of American Scientists – https://fas.org/issues/nuclear-weapons/status-world-nuclear-forces/

Tools: Excel, Datawrapper, https://coolors.co/

Check out the FAS site for notes and caveats about their estimates. Governments don’t just print this stuff on their websites. These are evidence-based estimates of tightly-guarded national secrets.

Of particular note – Here’s what the FAS says about North Korea: “After six nuclear tests, including two of 10-20 kilotons and one of more than 150 kilotons, we estimate that North Korea might have produced sufficient fissile material for roughly 40-50 warheads. The number of assembled warheads is unknown, but lower. While we estimate North Korea might have a small number of assembled warheads for medium-range missiles, we have not yet seen evidence that it has developed a functioning warhead that can be delivered at ICBM range.”

The population of Las Vegas over time

Data Source: Wikipedia

 The Alpha to Omega of Wikipedia

Author: feldesque

Data Source: The wikipediatrend package in R

Code published here

Glacial-interglacial cycles over the past 450,000 years

Source:  https://geology.utah.gov/

Global temperature change from 1850-2020

Worth noting that the glacial-interglacial cycles are largely driven by changes in the amount of solar radiation reaching Earth due to variations in its orbit.

Top Companies Contributing to Open Source – 2011/2021

Source and links

The author used several sources for this video and article. The first, for the video, is GitHub Archive & CodersRank. For the analysis of the OSCI index data, the author used https://opensourceindex.io/.

Crime Rates in the US: 1960-2021

Data sources: Here and Here

The 2021 figures are straight projections and must be taken with a grain of salt. However, the assumption of a continued rise in the murder rate is not a bad one based on recent news reports, such as: here

In a property crime, a victim’s property is stolen or destroyed, without the use or threat of force against the victim. Property crimes include burglary and theft as well as vandalism and arson.

A network visualization of privacy research (83k nodes, 462k edges)

Author: FvDijk

This image was generated for my research mapping the privacy research field. The visual combines a network visualisation with manually added labels.

The data was gathered from Scopus, a high-quality academic publication database, and the visualisation was created with Gephi. The initial dataset held ~120k publications and over 3 million references, from which we selected only the papers and references in the field.

The labels were assigned by manually identifying clusters and two independent raters assigning names from a random sample of publications, with a 94% match between raters.
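For readers who want the general shape of that filtering step, the following is a rough networkx sketch (not the author's published scripts); the file names and column names are assumptions.

import networkx as nx
import pandas as pd

# Hypothetical Scopus exports: one row per publication, one row per reference
papers = pd.read_csv("papers.csv")      # columns: paper_id, in_field (bool)
refs = pd.read_csv("references.csv")    # columns: citing_id, cited_id

in_field = set(papers.loc[papers["in_field"], "paper_id"])

G = nx.DiGraph()
G.add_nodes_from(in_field)
# Keep only citation edges where both endpoints are in-field publications
G.add_edges_from(
    (row.citing_id, row.cited_id)
    for row in refs.itertuples(index=False)
    if row.citing_id in in_field and row.cited_id in in_field
)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
nx.write_gexf(G, "privacy_network.gexf")  # the graph can then be laid out in Gephi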

The scripts used are available on Github

The full paper can be found on the author’s website:

 

GDP (at purchasing power parity) per capita in international dollars

Author:  Simaniac

Data source: IMF

Phone Call Anxiety dataset for Millennials and Gen Z

Author: /u/CynicalScyntist

This is a randomized experiment the author conducted with 450 people on Amazon MTurk. Each person was randomly assigned to one of three writing activities, in which they either (a) described their phone, (b) described what they’d do if they received a call from someone they know, or (c) described what they’d do if they received a call from an unknown number. Pictures of an iPhone with a corresponding call screen were displayed above the text box (blank, “Incoming Call,” or “Unknown”). Participants then rated their anxiety on a 1-4 scale.
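A minimal sketch (not the author's analysis code) of how the three conditions might be compared once the responses are loaded; the file name and column names are assumptions.

import pandas as pd
from scipy import stats

# Hypothetical export of the MTurk responses
df = pd.read_csv("phone_anxiety_mturk.csv")
# Assumed columns: "condition" in {"control", "known_caller", "unknown_caller"}
# and "anxiety" rated on a 1-4 scale

summary = df.groupby("condition")["anxiety"].agg(["mean", "std", "count"])
print(summary)

# One common way to test for a difference between the three groups: one-way ANOVA
groups = [g["anxiety"].to_numpy() for _, g in df.groupby("condition")]
print(stats.f_oneway(*groups))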

Dataset: Here

Source Article

Hate Crime Statistics in New York State 2019-2021

Djamgatech: Multilingual and Platform Independent Cloud Certification and Education App for AWS, Azure, Google Cloud

Djamgatech: AI Driven Continuing Education and Certification Preparation Platform

The Cloud Education Certification App is an EduFlix App for AWS, Azure, Google Cloud Certification Prep [Android, iOS]

Technology is changing and moving towards the cloud. The cloud will power most businesses in the coming years, yet it is not taught in schools. How do we ensure that our kids, our youth, and we ourselves are best prepared for this challenge?

Building mobile educational apps that work offline and on any device can help greatly in that sense.

The ability to tap a button, learn cloud fundamentals, and take quizzes is a great opportunity to help our children and youth boost their job prospects and be more productive at work.

The App covers the following certifications:
AWS Cloud Practitioner Exam Prep CCP CLF-C01, Azure Fundamentals AZ-900 Exam Prep, AWS Certified Solutions Architect Associate SAA-C02 Exam Prep, AWS Certified Developer Associate DVA-C01 Exam Prep, Azure Administrator AZ-104 Exam Prep, Google Associate Cloud Engineer Exam Prep, Data Analytics for AWS DAS-C01, Machine Learning for AWS and Google, AWS Certified Security – Specialty (SCS-C01), AWS Certified Machine Learning – Specialty (MLS-C01), Google Cloud Professional Machine Learning Engineer and more… [Android, iOS]


The App covers the following cloud categories:

AWS Technology, AWS Security and Compliance, AWS Cloud Concepts, AWS Billing and Pricing , AWS Design High Performing Architectures, AWS Design Cost Optimized Architectures, AWS Specify Secure Applications And Architectures, AWS Design Resilient Architecture, Development With AWS, AWS Deployment, AWS Security, AWS Monitoring, AWS Troubleshooting, AWS Refactoring, Azure Pricing and Support, Azure Cloud Concepts , Azure Identity, governance, and compliance, Azure Services , Implement and Manage Azure Storage, Deploy and Manage Azure Compute Resources, Configure and Manage Azure Networking Services, Monitor and Backup Azure Resources, GCP Plan and configure a cloud solution, GCP Deploy and implement a cloud solution, GCP Ensure successful operation of a cloud solution, GCP Configure access and security, GCP Setting up a cloud solution environment, AWS Incident Response, AWS Logging and Monitoring, AWS Infrastructure Security, AWS Identity and Access Management, AWS Data Protection, AWS Data Engineering, AWS Exploratory Data Analysis, AWS Modeling, AWS Machine Learning Implementation and Operations, GCP Frame ML problems, GCP Architect ML solutions, GCP Prepare and process data, GCP Develop ML models, GCP Automate & orchestrate ML pipelines, GCP Monitor, optimize, and maintain ML solutions, etc.. [Android, iOS]

Cloud Education and Certification

The App covers the following cloud services, frameworks, and technologies:

AWS: VPC, S3, DynamoDB, EC2, ECS, Lambda, API Gateway, CloudWatch, CloudTrail, CodePipeline, CodeDeploy, TCO Calculator, SES, EBS, ELB, AWS Auto Scaling, RDS, Aurora, Route 53, Amazon CodeGuru, Amazon Braket, AWS Billing and Pricing, Simple Monthly Calculator, cost calculator, EC2 pricing on-demand, IAM, AWS Pricing, Pay As You Go, No Upfront Cost, Cost Explorer, AWS Organizations, Consolidated billing, Instance Scheduler, on-demand instances, Reserved Instances, Spot Instances, CloudFront, WorkSpaces, S3 storage classes, Regions, Availability Zones, Placement Groups, Amazon Lightsail, Redshift, EC2 G4ad instances, DaaS, PaaS, IaaS, SaaS, NaaS, Machine Learning, Key Pairs, AWS CloudFormation, Amazon Macie, Amazon Textract, Glacier Deep Archive, 99.999999999% durability, AWS CodeStar, Amazon Neptune, S3 Bucket, EMR, SNS, Desktop as a Service, Amazon EC2 for Mac, Aurora PostgreSQL, Kubernetes, Containers, Cluster.

Azure: Virtual Machines, Azure App Services, Azure Container Instances (ACI), Azure Kubernetes Service (AKS), and Windows Virtual Desktop, Virtual Networks, VPN Gateway, Virtual Network peering, and ExpressRoute, Container (Blob) Storage, Disk Storage, File Storage, and storage tiers, Cosmos DB, Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, and SQL Managed Instance, Azure Marketplace, Azure consumption-based mode, management groups, resources and RG, Geographic distribution concepts such as Azure regions, region pairs, and AZ Internet of Things (IoT) Hub, IoT Central, and Azure Sphere, Azure Synapse Analytics, HDInsight, and Azure Databricks, Azure Machine Learning, Cognitive Services and Azure Bot Service, Serverless computing solutions that include Azure Functions and Logic Apps, Azure DevOps, GitHub, GitHub Actions, and Azure DevTest Labs, Azure Mobile, Azure Advisor, Azure Resource Manager (ARM) templates, Azure Security, Privacy and Workloads, General security and network security, Azure security features, Azure Security Centre, policy compliance, security alerts, secure score, and resource hygiene, Key Vault, Azure Sentinel, Azure Dedicated Hosts, Concept of defense in depth, NSG, Azure Firewall, Azure DDoS protection, Identity, governance, Conditional Access, Multi-Factor Authentication (MFA), and Single Sign-On (SSO),Azure Services, Core Azure architectural components, Management Groups, Azure Resource Manager,

Google Cloud Platform: Compute Engine, App Engine, BigQuery, Bigtable, Pub/Sub, flow logs, CORS, CLI, pod, Firebase, Cloud Run, Cloud Firestore, Cloud CDN, Cloud Storage, Persistent Disk, Kubernetes engine, Container registry, Cloud Load Balancing, Cloud Dataflow, gsutils, Cloud SQL,


Cloud Education Certification: Eduflix App for Cloud Education and Certification (AWS, Azure, Google Cloud) [Android, iOS]

Features:
- Practice exams
- 1000+ Q&A, updated frequently
- 3+ practice exams per certification
- Scorecard / scoreboard to track your progress
- Quizzes with score tracking, progress bar, and countdown timer
- Scoreboard visible only after completing the quiz
- FAQs for the most popular cloud services
- Cheat sheets
- Flashcards
- Works offline

Note and disclaimer: We are not affiliated with AWS, Microsoft Azure, or Google. The questions are put together based on the certification study guides and materials available online. The questions in this app should help you pass the exam, but that is not guaranteed. We are not responsible for any exam you do not pass.

Important: To succeed on the real exam, do not memorize the answers in this app. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents provided with the answers.