Data Sciences – Top 300 Open Datasets – Data Visualization – Data Analytics – Big Data – Data Lakes

Data Sciences - Data Analytics

Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains.

In this blog, we list popular open-source and public datasets, along with resources for data visualization, data analytics, and data lakes.

Latest complete Netflix movie dataset

Created from 4 APIs. 11K+ rows and 30+ attributes of Netflix titles (ratings, earnings, actors, language, availability, movie trailers, and more).

Dataset on Kaggle.

Explore this dataset using FlixGem.com (this dataset powers the webapp)

Dataset on Google Sheets.

Common Crawl

A corpus of web crawl data composed of over 50 billion web pages. The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata and text extractions.

AWS CLI Access (No AWS account required)

aws s3 ls s3://commoncrawl/ --no-sign-request

s3://commoncrawl/crawl-data/CC-MAIN-2021-17 – April 2021
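
If you prefer Python to the AWS CLI, here is a minimal boto3 sketch (using boto3 is our assumption; the bucket and prefix come from the listing above) that performs the same anonymous listing:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client, the boto3 equivalent of --no-sign-request
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(
    Bucket="commoncrawl",
    Prefix="crawl-data/CC-MAIN-2021-17/",
    MaxKeys=10,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])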

Dataset on protein prices

Data on Primary Commodity Prices are updated monthly based on the IMF’s Primary Commodity Price System.

Excel Database

CPOST dataset on suicide attacks over four decades

The University of Chicago Project on Security and Threats presents the updated and expanded Database on Suicide Attacks (DSAT), which now links to Uppsala Conflict Data Program data on armed conflicts and includes a new dataset measuring the alliance and rivalry relationships among militant groups with connections to suicide attack groups. Access it here.

Credit Card Dataset – Survey of Consumer Finances (SCF) Combined Extract Data 1989-2019

You can do a lot of aggregated analysis there in a pretty straightforward way.

Drone imagery with annotations for small object detection and tracking dataset

11 TB dataset of drone imagery with annotations for small object detection and tracking

Download and more information are available here

Dataset License: CDLA-Sharing-1.0

Helper scripts for accessing the dataset: DATASET.md

Dataset Exploration: Colab

NOAA High-Resolution Rapid Refresh (HRRR) Model

The HRRR is a NOAA real-time 3-km resolution, hourly updated, cloud-resolving, convection-allowing atmospheric model, initialized by 3-km grids with 3-km radar assimilation. Radar data are assimilated in the HRRR every 15 minutes over a 1-hour period, adding further detail to that provided by the hourly data assimilation from the 13-km radar-enhanced Rapid Refresh.

Registry of Open Data on AWS

This registry exists to help people discover and share datasets that are available via AWS resources. Learn more about sharing data on AWS.

See all usage examples for datasets listed in this registry.

See datasets from Digital Earth Africa, Facebook Data for Good, NASA Space Act Agreement, NIH STRIDES, NOAA Big Data Program, Space Telescope Science Institute, and the Amazon Sustainability Data Initiative.

Textbook Question Answering (TQA)

1,076 textbook lessons, 26,260 questions, 6,229 images

Documentation: https://allenai.org/data/tqa

Download

Harmonized Cancer Datasets: Genomic Data Commons Data Portal

The GDC Data Portal is a robust data-driven platform that allows cancer researchers and bioinformaticians to search and download cancer data for analysis.

Genomic Data Commons Data Portal

The Cancer Genome Atlas

The Cancer Genome Atlas (TCGA), a collaboration between the National Cancer Institute (NCI) and National Human Genome Research Institute (NHGRI), aims to generate comprehensive, multi-dimensional maps of the key genomic changes in major types and subtypes of cancer.

AWS CLI Access (No AWS account required)

aws s3 ls s3://tcga-2-open/ --no-sign-request

Therapeutically Applicable Research to Generate Effective Treatments (TARGET)

The Therapeutically Applicable Research to Generate Effective Treatments (TARGET) program applies a comprehensive genomic approach to determine molecular changes that drive childhood cancers. The goal of the program is to use data to guide the development of effective, less toxic therapies. TARGET is organized into a collaborative network of disease-specific project teams.  TARGET projects provide comprehensive molecular characterization to determine the genetic changes that drive the initiation and progression of childhood cancers. The dataset contains open Clinical Supplement, Biospecimen Supplement, RNA-Seq Gene Expression Quantification, miRNA-Seq Isoform Expression Quantification, miRNA-Seq miRNA Expression Quantification data from Genomic Data Commons (GDC), and open data from GDC Legacy Archive. Access it here.

Genome Aggregation Database (gnomAD)

The Genome Aggregation Database (gnomAD) is a resource developed by an international coalition of investigators that aggregates and harmonizes both exome and genome data from a wide range of large-scale human sequencing projects. The summary data provided here are released for the benefit of the wider scientific community without restriction on use. Downloads

SQuAD (Stanford Question Answering Dataset)

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. Access it here.
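For quick exploration, one convenient route (our suggestion, not something the SQuAD site requires; the official site offers direct JSON downloads too) is the Hugging Face datasets library, where "squad_v2" is the variant with unanswerable questions:

from datasets import load_dataset  # pip install datasets

squad = load_dataset("squad_v2")  # use "squad" for SQuAD 1.1
sample = squad["train"][0]
print(sample["question"])
print(sample["answers"])  # unanswerable questions have an empty answers["text"] list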

PubMed Diabetes Dataset

The PubMed Diabetes dataset consists of 19,717 scientific publications from the PubMed database pertaining to diabetes, classified into one of three classes. The citation network consists of 44,338 links. Each publication in the dataset is described by a TF/IDF weighted word vector from a dictionary of 500 unique words. The README file in the dataset provides more details.

Download Link

Drug-Target Interaction Dataset

This dataset contains interactions between drugs and targets collected from DrugBank, KEGG Drug, DCDB, and Matador. It was originally collected by Perlman et al. It contains 315 drugs, 250 targets, 1,306 drug-target interactions, 5 types of drug-drug similarities, and 3 types of target-target similarities. Drug-drug similarities include Chemical-based, Ligand-based, Expression-based, Side-effect-based, and Annotation-based similarities. Target-target similarities include Sequence-based, Protein-protein interaction network-based, and Gene Ontology-based similarities. The original task on the dataset is to predict new interactions between drugs and targets based on different types of similarities in the network. Download link

Pharmacogenomics Datasets

PharmGKB data and knowledge are available as downloads. It is often critical to check with their curators at feedback@pharmgkb.org before embarking on a large project using these data, to be sure that the files and data they make available are being interpreted correctly. PharmGKB generally does NOT need to be a co-author on such analyses; they just want to make sure that there is a correct understanding of their data before lots of resources are spent.

Pancreatic Cancer Organoid Profiling

The dataset contains open RNA-Seq Gene Expression Quantification data and controlled WGS/WXS/RNA-Seq Aligned Reads, WXS Annotated Somatic Mutation, WXS Raw Somatic Mutation, and RNA-Seq Splice Junction Quantification. Documentation

AWS CLI Access (No AWS account required)

aws s3 ls s3://gdc-organoid-pancreatic-phs001611-2-open/ --no-sign-request

Africa Soil Information Service (AfSIS) Soil Chemistry

This dataset contains soil infrared spectral data and paired soil property reference measurements for georeferenced soil samples that were collected through the Africa Soil Information Service (AfSIS) project, which lasted from 2009 through 2018. Documentation

AWS CLI Access (No AWS account required)

aws s3 ls s3://afsis/ --no-sign-request

Dataset for Affective States in E-Environments

DAiSEE is the first multi-label video classification dataset, comprising 9,068 video snippets captured from 112 users, for recognizing the user affective states of boredom, confusion, engagement, and frustration “in the wild”. The dataset has four levels of labels (very low, low, high, and very high) for each of the affective states, which are crowd annotated and correlated with a gold standard annotation created using a team of expert psychologists. Download it here.

NatureServe Explorer Dataset

NatureServe Explorer provides conservation status, taxonomy, distribution, and life history information for more than 95,000 plants and animals in the United States and Canada, and more than 10,000 vegetation communities and ecological systems in the Western Hemisphere.

The data available through NatureServe Explorer represents data managed in the NatureServe Central Databases. These databases are dynamic, being continually enhanced and refined through the input of hundreds of natural heritage program scientists and other collaborators. NatureServe Explorer is updated from these central databases to reflect information from new field surveys, the latest taxonomic treatments and other scientific publications, and new conservation status assessments. Explore Data here

Flight Records in the US

Airline On-Time Performance and Causes of Flight Delays – On_Time Data.

This database contains scheduled and actual departure and arrival times, together with causes of delay, as reported by certified U.S. air carriers that account for at least one percent of domestic scheduled passenger revenues. The data is collected by the Office of Airline Information, Bureau of Transportation Statistics (BTS).

FlightAware.com has data but you need to pay for a full dataset.

The anyflights package supplies a set of functions to generate air travel data (and data packages!) similar to nycflights13. With a user-defined year and airport, the anyflights function will grab data on:

  • flights: all flights that departed a given airport in a given year and month
  • weather: hourly meteorological data for a given airport in a given year and month
  • airports: airport names, FAA codes, and locations
  • airlines: translation between two letter carrier (airline) codes and names
  • planes: construction information about each plane found in flights

Airline On-Time Statistics and Delay Causes

The U.S. Department of Transportation’s (DOT) Bureau of Transportation Statistics (BTS) tracks the on-time performance of domestic flights operated by large air carriers. Summary information on the number of on-time, delayed, canceled and diverted flights appears in DOT’s monthly Air Travel Consumer Report, published about 30 days after the month’s end, as well as in summary tables posted on this website. BTS began collecting details on the causes of flight delays in June 2003. Summary statistics and raw data are made available to the public at the time the Air Travel Consumer Report is released. Access it here

Worldwide flight data

OpenFlights: As of January 2017, the OpenFlights Airports Database contains over 10,000 airports, train stations, and ferry terminals spanning the globe.

Download: airports.dat (Airports only, high quality)

Download: airports-extended.dat (Airports, train stations and ferry terminals, including user contributions)

Bureau of Transportation:

Flightera.net seems to have a lot of good data for free. It has in-depth data on flights and doesn’t seem limited by date. I can’t speak to the validity of the data, though.

flightradar24.com has lots of data, including historical data; they might be willing to help you get it in a nice format.

2019 Crime statistics in the USA

Dataset of arrests in the US by race and by state. Download the Excel file here

Yahoo Answers DataSets

Yahoo Answers is shutting down in 2021. This is a fairly extensive Yahoo Answers dataset (300 MB gzipped) from 2015 with about 1.4M rows. It has the best question answers, and I mean all the answers, including the most insane awful answers and the worst questions people put together. Download it here.

Another option here: According to the tracker, there are 77M done, 20M out(?), and 40M to go:

https://wiki.archiveteam.org/index.php/Yahoo!_Answers

History of America 1400-2021

Sources:

https://os-connect.com/pop/p2an.asp

https://ourworldindata.org/

http://www.ggdc.net/maddison/oriindex.htm

https://www.globalfirepower.com/countries-comparison.asp

Persian words phonetics dataset

This is a dataset of about 55K Persian words with their phonetics. Each word is on its own line, separated from its phonetic by a tab. Download it here
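
Given that simple layout, a minimal Python sketch loads the file into a dictionary (the filename below is hypothetical; use whatever name the download ships with):

# Parse a "word<TAB>phonetic" file, one entry per line
phonetics = {}
with open("persian_phonetics.tsv", encoding="utf-8") as f:  # hypothetical filename
    for line in f:
        if not line.strip():
            continue  # skip blank lines
        word, phonetic = line.rstrip("\n").split("\t")
        phonetics[word] = phonetic
print(len(phonetics), "entries loaded")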

Historical Air Quality Dataset

Air Quality Data Collected at Outdoor Monitors Across the US. This is a BigQuery dataset: there are no files to download, but you can query it through Kernels using the BigQuery API.

The AQS Data Mart is a database containing all of the information from AQS. It has every measured value the EPA has collected via the national ambient air monitoring program, as well as the associated aggregate values calculated by EPA (8-hour, daily, annual, etc.). The AQS Data Mart is a copy of AQS made once per week and made accessible to the public through web-based applications.

The intended users of the Data Mart are air quality data analysts in the regulatory, academic, and health research communities. It is intended for those who need to download large volumes of detailed technical data stored at EPA and does not provide any interactive analytical tools. It serves as the back-end database for several Agency interactive tools that could not fully function without it: AirData, AirCompare, the Remote Sensing Information Gateway, the Map Monitoring Sites KML page, etc.
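
A short sketch of querying it from Python with the official BigQuery client library (the table and column names below are assumptions based on the public epa_historical_air_quality dataset; adjust them to the tables you actually need):

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # needs GCP credentials, or run inside a Kaggle kernel

# Average daily PM2.5 AQI by state for one year (assumed table and columns)
query = """
SELECT state_name, AVG(aqi) AS avg_aqi
FROM `bigquery-public-data.epa_historical_air_quality.pm25_frm_daily_summary`
WHERE EXTRACT(YEAR FROM date_local) = 2017
GROUP BY state_name
ORDER BY avg_aqi DESC
LIMIT 10
"""
for row in client.query(query).result():
    print(f"{row.state_name}: {row.avg_aqi:.1f}")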

Stack Exchange Dataset

https://data.stackexchange.com/

Awesome Public Datasets

This is a list of topic-centric public data sources of high quality, collected and tidied from blogs, answers, and user responses. Most of the datasets listed below are free; however, some are not.

Agriculture Dataset

Biology Dataset

Climate and Weather Dataset

Complex Network Dataset

Computer Network Dataset

CyberSecurity Dataset

Data Challenges Dataset

Earth Science Dataset

Economics Dataset

Education Dataset

Energy Dataset

Entertainment Dataset

Finance Dataset

GIS Dataset

Government Dataset

Healthcare Dataset

Image Processing Dataset

Machine Learning Dataset

Museums Dataset

Natural Language Dataset

Neuroscience Dataset

Physics Dataset

Prostate Cancer Dataset

Psychology and Cognition Dataset

Public Domains Dataset

Search Engines Dataset

Social Networks Dataset

Social Sciences Dataset

Software Dataset

Sports Dataset

Time Series Dataset

Transportation Dataset

eSports Dataset

Complementary Collections

Categorized list of public datasets: Sindre Sorhus /awesome List

Platforms

  • Node.js – Async non-blocking event-driven JavaScript runtime built on Chrome’s V8 JavaScript engine.
  • Frontend Development
  • iOS – Mobile operating system for Apple phones and tablets.
  • Android – Mobile operating system developed by Google.
  • IoT & Hybrid Apps
  • Electron – Cross-platform native desktop apps using JavaScript/HTML/CSS.
  • Cordova – JavaScript API for hybrid apps.
  • React Native – JavaScript framework for writing natively rendering mobile apps for iOS and Android.
  • Xamarin – Mobile app development IDE, testing, and distribution.
  • Linux
    • Containers
    • eBPF – Virtual machine that allows you to write more efficient and powerful tracing and monitoring for Linux systems.
    • Arch-based Projects – Linux distributions and projects based on Arch Linux.
  • macOS – Operating system for Apple’s Mac computers.
  • watchOS – Operating system for the Apple Watch.
  • JVM
  • Salesforce
  • Amazon Web Services
  • Windows
  • IPFS – P2P hypermedia protocol.
  • Fuse – Mobile development tools.
  • Heroku – Cloud platform as a service.
  • Raspberry Pi – Credit card-sized computer aimed at teaching kids programming, but capable of a lot more.
  • Qt – Cross-platform GUI app framework.
  • WebExtensions – Cross-browser extension system.
  • RubyMotion – Write cross-platform native apps for iOS, Android, macOS, tvOS, and watchOS in Ruby.
  • Smart TV – Create apps for different TV platforms.
  • GNOME – Simple and distraction-free desktop environment for Linux.
  • KDE – A free software community dedicated to creating an open and user-friendly computing experience.
  • .NET
    • Core
    • Roslyn – Open-source compilers and code analysis APIs for C# and VB.NET languages.
  • Amazon Alexa – Virtual home assistant.
  • DigitalOcean – Cloud computing platform designed for developers.
  • Flutter – Google’s mobile SDK for building native iOS and Android apps from a single codebase written in Dart.
  • Home Assistant – Open source home automation that puts local control and privacy first.
  • IBM Cloud – Cloud platform for developers and companies.
  • Firebase – App development platform built on Google Cloud Platform.
  • Robot Operating System 2.0 – Set of software libraries and tools that help you build robot apps.
  • Adafruit IO – Visualize and store data from any device.
  • Cloudflare – CDN, DNS, DDoS protection, and security for your site.
  • Actions on Google – Developer platform for Google Assistant.
  • ESP – Low-cost microcontrollers with WiFi and broad IoT applications.
  • Deno – A secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
  • DOS – Operating system for x86-based personal computers that was popular during the 1980s and early 1990s.
  • Nix – Package manager for Linux and other Unix systems that makes package management reliable and reproducible.

Programming Languages

  • JavaScript
  • Swift – Apple’s compiled programming language that is secure, modern, programmer-friendly, and fast.
  • Python – General-purpose programming language designed for readability.
    • Asyncio – Asynchronous I/O in Python 3.
    • Scientific Audio – Scientific research in audio/music.
    • CircuitPython – A version of Python for microcontrollers.
    • Data Science – Data analysis and machine learning.
    • Typing – Optional static typing for Python.
    • MicroPython – A lean and efficient implementation of Python 3 for microcontrollers.
  • Rust
  • Haskell
  • PureScript
  • Go
  • Scala
    • Scala Native – Optimizing ahead-of-time compiler for Scala based on LLVM.
  • Ruby
  • Clojure
  • ClojureScript
  • Elixir
  • Elm
  • Erlang
  • Julia – High-level dynamic programming language designed to address the needs of high-performance numerical analysis and computational science.
  • Lua
  • C
  • C/C++ – General-purpose language with a bias toward system programming and embedded, resource-constrained software.
  • R – Functional programming language and environment for statistical computing and graphics.
  • D
  • Common Lisp – Powerful dynamic multiparadigm language that facilitates iterative and interactive development.
  • Perl
  • Groovy
  • Dart
  • Java – Popular secure object-oriented language designed for flexibility to “write once, run anywhere”.
  • Kotlin
  • OCaml
  • ColdFusion
  • Fortran
  • PHP – Server-side scripting language.
  • Pascal
  • AutoHotkey
  • AutoIt
  • Crystal
  • Frege – Haskell for the JVM.
  • CMake – Build, test, and package software.
  • ActionScript 3 – Object-oriented language targeting Adobe AIR.
  • Eta – Functional programming language for the JVM.
  • Idris – General purpose pure functional programming language with dependent types influenced by Haskell and ML.
  • Ada/SPARK – Modern programming language designed for large, long-lived apps where reliability and efficiency are essential.
  • Q# – Domain-specific programming language used for expressing quantum algorithms.
  • Imba – Programming language inspired by Ruby and Python and compiles to performant JavaScript.
  • Vala – Programming language designed to take full advantage of the GLib and GNOME ecosystems, while preserving the speed of C code.
  • Coq – Formal language and environment for programming and specification which facilitates interactive development of machine-checked proofs.
  • V – Simple, fast, safe, compiled language for developing maintainable software.

Front-End Development

Back-End Development

  • Flask – Python framework.
  • Docker
  • Vagrant – Automation virtual machine environment.
  • Pyramid – Python framework.
  • Play1 Framework
  • CakePHP – PHP framework.
  • Symfony – PHP framework.
  • Laravel – PHP framework.
    • Education
    • TALL Stack – Full-stack development solution featuring libraries built by the Laravel community.
  • Rails – Web app framework for Ruby.
    • Gems – Packages.
  • Phalcon – PHP framework.
  • Useful .htaccess Snippets
  • nginx – Web server.
  • Dropwizard – Java framework.
  • Kubernetes – Open-source platform that automates Linux container operations.
  • Lumen – PHP micro-framework.
  • Serverless Framework – Serverless computing and serverless architectures.
  • Apache Wicket – Java web app framework.
  • Vert.x – Toolkit for building reactive apps on the JVM.
  • Terraform – Tool for building, changing, and versioning infrastructure.
  • Vapor – Server-side development in Swift.
  • Dash – Python web app framework.
  • FastAPI – Python web app framework.
  • CDK – Open-source software development framework for defining cloud infrastructure in code.
  • IAM – User accounts, authentication and authorization.
  • Chalice – Python framework for serverless app development on AWS Lambda.

Computer Science

Big Data

  • Big Data
  • Public Datasets
  • Hadoop – Framework for distributed storage and processing of very large data sets.
  • Data Engineering
  • Streaming
  • Apache Spark – Unified engine for large-scale data processing.
  • Qlik – Business intelligence platform for data visualization, analytics, and reporting apps.
  • Splunk – Platform for searching, monitoring, and analyzing structured and unstructured machine-generated big data in real-time.

Theory

Books

Editors

Gaming

Development Environment

Entertainment

Databases

  • Database
  • MySQL
  • SQLAlchemy
  • InfluxDB
  • Neo4j
  • MongoDB – NoSQL database.
  • RethinkDB
  • TinkerPop – Graph computing framework.
  • PostgreSQL – Object-relational database.
  • CouchDB – Document-oriented NoSQL database.
  • HBase – Distributed, scalable, big data store.
  • NoSQL Guides – Help on using non-relational, distributed, open-source, and horizontally scalable databases.
  • Contexture – Abstracts queries/filters and results/aggregations from different backing data stores like ElasticSearch and MongoDB.
  • Database Tools – Everything that makes working with databases easier.
  • Grakn – Logical database to organize large and complex networks of data as one body of knowledge.

Media

Learn

Security

Content Management Systems

  • Umbraco
  • Refinery CMS – Ruby on Rails CMS.
  • Wagtail – Django CMS focused on flexibility and user experience.
  • Textpattern – Lightweight PHP-based CMS.
  • Drupal – Extensible PHP-based CMS.
  • Craft CMS – Content-first CMS.
  • Sitecore – .NET digital marketing platform that combines CMS with tools for managing multiple websites.
  • Silverstripe CMS – PHP MVC framework that serves as a classic or headless CMS.

Hardware

Business

Work

Networking

Decentralized Systems

  • Bitcoin – Bitcoin services and tools for software developers.
  • Ripple – Open source distributed settlement network.
  • Non-Financial Blockchain – Non-financial blockchain applications.
  • Mastodon – Open source decentralized microblogging network.
  • Ethereum – Distributed computing platform for smart contract development.
  • Blockchain AI – Blockchain projects for artificial intelligence and machine learning.
  • EOSIO – A decentralized operating system supporting industrial-scale apps.
  • Corda – Open source blockchain platform designed for business.
  • Waves – Open source blockchain platform and development toolset for Web 3.0 apps and decentralized solutions.
  • Substrate – Framework for writing scalable, upgradeable blockchains in Rust.

Higher Education

  • Computational Neuroscience – A multidisciplinary science which uses computational approaches to study the nervous system.
  • Digital History – Computer-aided scientific investigation of history.
  • Scientific Writing – Distraction-free scientific writing with Markdown, reStructuredText and Jupyter notebooks.

Events

Testing

  • Testing – Software testing.
  • Visual Regression Testing – Ensures changes did not break the functionality or style.
  • Selenium – Open-source browser automation framework and ecosystem.
  • Appium – Test automation tool for apps.
  • TAP – Test Anything Protocol.
  • JMeter – Load testing and performance measurement tool.
  • k6 – Open-source, developer-centric performance monitoring and load testing solution.
  • Playwright – Node.js library to automate Chromium, Firefox and WebKit with a single API.
  • Quality Assurance Roadmap – How to start & build a career in software testing.

Miscellaneous

Related

US Department of Education CRDC Dataset

The US Department of Ed has a dataset called the CRDC that collects data from all the public schools in the US and has demographic, academic, financial and all sorts of other fun data points. They also have corollary datasets that use the same identifier—an expansion pack if you may. It comes out every 2-3 years. Access it here

Nasa Dataset: sequencing data from bacteria before and after being taken to space

NASA has some sequencing data from bacteria before and after being taken to space, to look at genetic differences caused by lack of gravity, radiation and others. Very fun if you want to try your hand at some bio data science. Access it here.

All Trump’s Twitter insults from 2015 to 2021 in CSV.

Extracted from the NYT story: here

Data is plural

Data is Plural is a really good newsletter published by Jeremy Singer-Vine. The datasets are very random, but super interesting. Access it here.

Global terrorism database

Huge list of terrorism incidents from inside the US and abroad. Each entry has the date and location of the incident, motivations, whether people or property were lost, the size of the attack, type of attack, etc. Access it here

Terrorist Attacks Dataset: This dataset consists of 1,293 terrorist attacks, each assigned one of 6 labels indicating the type of the attack. Each attack is described by a 0/1-valued vector of attributes whose entries indicate the absence/presence of a feature. There are a total of 106 distinct features. The files in the dataset can be used to create two distinct graphs. The README file in the dataset provides more details. Download Link

Terrorists: This dataset contains information about terrorists and their relationships. It was designed for classification experiments aimed at classifying the relationships among terrorists. The dataset contains 851 relationships, each described by a 0/1-valued vector of attributes where each entry indicates the absence/presence of a feature. There are a total of 1,224 distinct features. Each relationship can be assigned one or more labels out of a maximum of four labels, making this dataset suitable for multi-label classification tasks. The README file provides more details. Download Link

The dolphin social network

This network dataset is in the category of Social Networks: a social network of bottlenose dolphins. The dataset contains a list of all links, where a link represents frequent associations between dolphins. Access it here

Dataset of 200,000 jokes

There are about 208,000 jokes in this database, scraped from three sources.

Access it here:

The Million Song Dataset

The Million Song Dataset is a freely-available collection of audio features and metadata for a million contemporary popular music tracks.

Its purposes are:

  • To encourage research on algorithms that scale to commercial sizes
  • To provide a reference dataset for evaluating research
  • As a shortcut alternative to creating a large dataset with APIs (e.g. The Echo Nest’s)
  • To help new researchers get started in the MIR field

Cornell University’s eBird dataset

Decades of observations of birds all around the world, truly an impressive way to leverage citizen science. Access it here.

UFO Report Dataset

NUFORC geolocated and time-standardized UFO reports covering close to a century of data: 80,000+ reports. Access it here

CDC’s Trend Drug Data

The CDC has a public database called NAMCS/NHAMCS that allows you to trend drug data. It has a lot of other data points so it can be used for a variety of other reasons. Access it here.

Health and Retirement study: Public Survey data

A listing of publicly available biennial, off-year, and cross-year data products.

Example: COVID-19 Data

Year: 2020 – Product: 2020 HRS COVID-19 Project

RAND HRS Data

HRS data products produced by the RAND Center for the Study of Aging.

Gateway Harmonized Data

HRS data products produced by the USC Program on Global Aging, Health, and Policy.

Contributed and Replication Data

Data products (unsupported by the HRS) provided by researchers sharing their work.

Restricted/Sensitive Data

Cognition Data

A summary of HRS cognition data, including the new Harmonized Cognition Assessment Protocol (HCAP).

Biomarker and Health Data

Sensitive health data files are available from the public data portal after a supplemental agreement is signed.

Restricted Data

HRS restricted data files require a detailed application process, and are available only through remote virtual desktop or encrypted physical media.

Administrative Linkages

Links HRS data with Medicare and Social Security.

Genetic Data

Genetic data products derived from 20,000 genotyped HRS respondents.

The Quick Draw Dataset

The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. Access it here.
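The simplified drawings are also distributed as newline-delimited JSON files, one per category, so they can be streamed record by record. A minimal sketch (the local filename is hypothetical; the key names follow the dataset's documented record format):

import json

# Stream one Quick, Draw! .ndjson file: one JSON drawing record per line
with open("cat.ndjson", encoding="utf-8") as f:  # hypothetical local file
    for line in f:
        drawing = json.loads(line)
        # each record carries the prompt word, player country, and stroke vectors
        print(drawing["word"], drawing["countrycode"], len(drawing["drawing"]))
        break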

Air Quality Dataset

The AirNow API replaces the previous AirNow Gateway web services. It includes file outputs and RSS data feeds. AirNow Gateway users can use their existing login information to access the new AirNow API web pages and web services. Access to the AirNow API is generally available to the public, and new accounts can be acquired via the Log In page

UK Water Industry Chemical Investigations dataset

Search and extract the measurements from 600 Wastewater Treatment Sites owned and operated by UK Water Companies and part of the Chemical Investigations Programme (CIP2).

M3 and M4 Dataset Time Series Data

The 3003 time series of the M3-Competition.

The M4 competition is a continuation of the Makridakis Competitions for forecasting and was conducted in 2018. This competition includes the prediction of both point forecasts and prediction intervals.

Protein Data Bank (PDB)

Used by Google’s deep-learning program for determining the 3D shapes of proteins, which stands to transform biology, say scientists. Access it here.

Dataset of Games

In computer science, Artificial Intelligence (AI) is intelligence demonstrated by machines. AI research defines the field as the study of “intelligent agents”: any device that perceives its environment and takes actions to achieve its goals (Russell et al., 2016).

Data Mining (DM), in turn, is the process of discovering patterns in data sets involving methods from machine learning, statistics, and database systems; DM focuses on extracting information from datasets (Han, 2011).

This repository serves as a guide for anyone who wants to work with Artificial Intelligence or Data Mining applied to digital games! Here you will find a series of datasets, tools, and materials available to build your application or dataset. Access it here.

DonorsChoose.org Application Screening DataSet

Help predict whether teachers’ project proposals are accepted.

Dataset of all the squirrels in Central Park

The Squirrel Census is a multimedia science, design, and storytelling project focusing on the Eastern gray squirrel (Sciurus carolinensis). They count squirrels and present their findings to the public.

Google BigQuery Public Datasets

BigQuery public datasets are made available without any restrictions to all Google Cloud users, and Google pays for their storage. You can use them to learn how to work with BigQuery or even build your application on top of them.

IMDb Dataset

IMDb dataset importer – loads into a Marten DB document store. It imports the public datasets into a database, and provides repositories for querying. The total imported size is about 40 million rows, and 14 gigabytes on disk!

PHOnA: A Public Dataset of Measured Headphone Transfer Functions

A dataset of measured headphone transfer functions (HpTFs), the Princeton Headphone Open Archive (PHOnA), is presented. Extensive studies of HpTFs have been conducted for the past twenty years, each requiring a separate set of measurements, but this data has not yet been publicly shared. PHOnA aggregates HpTFs from different laboratories, including measurements for multiple different headphones, subjects, and repositionings of headphones for each subject. The dataset uses the spatially oriented format for acoustics (SOFA), and SOFA conventions are proposed for efficiently storing HpTFs. PHOnA is intended to provide a foundation for machine learning techniques applied to HpTF equalization. This shared data will allow optimization of equalization algorithms to provide more universal solutions to perceptually transparent headphone reproduction. Access it here.

Sports Data Set

Provide both basic and sabermetric statistics and resources for sports fans everywhere. Access here

Kaggle DataSets

Explore, analyze, and share quality data here

Coronavirus Datasets

Spreadsheets and Datasets:

Natural History Museum in London

The Natural History Museum in London has 80 million items (and counting!) in its collections, from the tiniest specks of stardust to the largest animal that ever lived – the blue whale. 

The Digital Collections Programme is a project to digitise these specimens and give the global scientific community access to unrivalled historical, geographic and taxonomic specimen data gathered in the last 250 years. Mobilising this data can facilitate research into some of the most pressing scientific and societal challenges.

Digitising involves creating a digital record of a specimen which can consist of all types of information such as images, and geographical and historical information about where and when a specimen was collected. The possibilities for digitisation are quite literally limitless – as technology evolves, so do possible uses and analyses of the collections. We are currently exploring how machine learning and automation can help us capture information from specimen images and their labels.

With such a wide variety of specimens, digitising looks different for every single collection. How we digitise a fly specimen on a microscope slide is very different to how we might digitise a bat in a spirit jar! We develop new workflows in response to the type of specimens we are dealing with. Sometimes we have to get really creative, and have even published on workflows which have involved using pieces of LEGO to hold specimens in place while we are imaging them.

Mobilising this data and making it open access is at the heart of the project. All of the specimen data is released on our Data Portal, and we also feed the data into international databases such as GBIF.

TSA Throughput Dataset (alternate source)

The TSA is publishing more and more data via its Freedom of Information Act (FOIA) Reading Room. This project on GitHub, https://github.com/mikelor/tsathroughput, contains the source code for extracting the information from the .PDF files and converting it to JSON and CSV files.

The /data folder contains the source .PDFs going back to 2018, while the /data/raw/tsa/throughput folder contains .json files.

Data Planet

The largest repository of standardized and structured statistical data

https://statisticaldatasets.data-planet.com/

Chess datasets

3.5 Million Chess Games

ML Datasets to practice regression methods

Center for Machine Learning and Intelligent Systems

585 Data Sets

 

ManyTypes4Py: A benchmark Python Dataset for Machine Learning-Based Type Inference

  • The dataset is gathered on Sep. 17th 2020 from GitHub.
  • It has more than 5.2K Python repositories and 4.2M type annotations.
  • Use it to train ML-based type inference models for Python
  • Access it here

Quadrature magnetoresistance in overdoped cuprates

Measurements of the normal (i.e. non-superconducting) state magnetoresistance (change in resistance with magnetic field) in several single crystalline samples of copper-oxide high-temperature superconductors. The measurements were performed predominantly at the High Field Magnet Laboratory (HFML) in Nijmegen, the Netherlands, and the Pulsed Magnetic Field Facility (LNCMI-T) in Toulouse, France. Complete Zip Download

The UMA-SAR Dataset: Multimodal data collection from a ground vehicle during outdoor disaster response training exercises

Collection of multimodal raw data captured from a manned all-terrain vehicle in the course of two realistic outdoor search and rescue (SAR) exercises for actual emergency responders conducted in Málaga (Spain) in 2018 and 2019: the UMA-SAR dataset. Full Dataset.

Child Mortality from Malaria

Child mortality numbers caused by malaria by country

Number of deaths of infants, neonatal, and children up to 4 years old caused by malaria by country from 2000 to 2015. Originator: World Health Organization

https://datarepository.wolframcloud.com/resources/Child-Mortality-Numbers-by-Malaria-2015

Quora Question Pairs at Data.world

The dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data: 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text of each question, and a binary value that indicates whether the line truly contains a duplicate pair. Access it here.
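
A minimal pandas sketch for a first look (the filename and column names follow the common TSV release of this data and are assumptions; check the file you actually download):

import pandas as pd

# Tab-separated release: id, qid1, qid2, question1, question2, is_duplicate
df = pd.read_csv("quora_duplicate_questions.tsv", sep="\t")
print(df[["question1", "question2", "is_duplicate"]].head())
print("duplicate rate:", df["is_duplicate"].mean())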

MIMIC Critical Care Database

MIMIC is an openly available dataset developed by the MIT Lab for Computational Physiology, comprising deidentified health data associated with ~60,000 intensive care unit admissions. It includes demographics, vital signs, laboratory tests, medications, and more. Access it here.

Data.Gov: The home of the U.S. Government’s open data

Here you will find data, tools, and resources to conduct research, develop web and mobile applications, design data visualizations, and more. Search over 280,000 datasets.

Tidy Tuesday Dataset

TidyTuesday is built around open datasets that are found in the “wild” or submitted as Issues on our GitHub.

US Census Bureau: QuickFacts Dataset

QuickFacts provides statistics for all states and counties, and for cities and towns with a population of 5,000 or more.

Classical Abstract Art Dataset

Art that does not attempt to represent an accurate depiction of visual reality, but instead uses shapes, colours, forms, and gestural marks to achieve its effect.

5,000+ classical abstract artworks by real artists, with annotations. You can download them in very high resolution; however, you would have to crawl them first with this scraper.

Interactive map of indigenous people around the world

Native-Land.ca is a website run by the nonprofit organization Native Land Digital. Access it here.

Data Visualization: A Wordcloud for each of the Six Largest Religions and their Religious Texts (Judaism, Christianity, and Islam; Hinduism, Buddhism, and Sikhism)

Highest altitude humans have been each year since 1961

DataOhio

Over 200 public datasets, including COVID data. Access it here.

Ohio Data, Ohio Insights. The DataOhio catalog is a single source for the most critical and relevant datasets from state agencies and entities.

https://data.ohio.gov/wps/portal/gov/data/view/view-all

National Household Travel Survey (US)

Conducted by the Federal Highway Administration (FHWA), the NHTS is the authoritative source on the travel behavior of the American public. It is the only source of national data that allows one to analyze trends in personal and household travel. It includes daily non-commercial travel by all modes, including characteristics of the people traveling, their household, and their vehicles. Access it here.

National Travel Survey (UK)

Statistics and data about the National Travel Survey, based on a household survey to monitor trends in personal travel.

The survey collects information on how, why, when and where people travel as well as factors affecting travel (e.g. car availability and driving license holding).

National Travel Survey data tables UK

National Travel Survey (NTS) [Canada]

Monthly Railway Carloadings: Interactive Dashboard

ENTUR: NeTEx or GTFS datasets [Norway]

NeTEx is the official format for public transport data in Norway and is the most complete in terms of available data. GTFS is a downstream format with only a limited subset of the total data, but we generate datasets for it anyway since GTFS can be easier to use and has a wider distribution among international public transport solutions. GTFS sets come in “extended” and “basic” versions. Access here.

The Swedish National Forest Inventory

A subset of the field data collected on temporary NFI plots can be downloaded in Excel format from this website. The file includes a Read_me sheet and a sheet with field data from temporary plots on forest land collected from 2007 to 2019. Note that plots located on boundaries (for example, boundaries between forest stands or different land use classes) are not included in the dataset. The dataset is primarily intended to be used as reference data and validation data in remote sensing applications. It cannot be used to derive estimates of totals or mean values for a geographic area of any size. Download the dataset here

Large data sets from finance and economics applicable in related fields studying the human condition

World Bank Data: Countries Data | Topics Data | Indicators Data | Catalog

US Federal Statistics

Boards of Governors of the Federal Reserve: Data Download Program

CIA: The world Factbook provides basic intelligence on the history, people, government, economy, energy, geography, environment, communications, transportation, military, terrorism, and transnational issues for 266 world entities.

Human Development Report: United Nations Development Programme – Public Data Explorer

Consumer Price Index: The Consumer Price Index (CPI) is a measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services. Indexes are available for the U.S. and various geographic areas. Average price data for select utility, automotive fuel, and food items are also available.

Gapminder.org: Unveiling the beauty of statistics for a fact based world view Watch everyday life in hundreds of homes on all income levels across the world, to counteract the media’s skewed selection of images of other places.

Our world in Data: International Trade

Research and data to make progress against the world’s largest problems: 3139 charts across 297 topics, All free: open access and open source.

International Historical Statistics (by Brian Mitchell)

 
International Historical Statistics is a compendium of national and international socio-economic data from 1750 to 2010. Data are available in both Excel and PDF tabular formats. IHS is structured in three broad geographical divisions: Africa / Asia / Oceania; The Americas; and Europe. The database is organized into ten categories: Population and vital statistics; Labour force; Agriculture; Industry; External trade; Transport and communications; Finance; Commodity prices; Education; and National accounts. Access here

World Input-Output Database

World Input-Output Tables and underlying data, covering 43 countries, plus a model for the rest of the world, for the period 2000-2014. Data for 56 sectors are classified according to the International Standard Industrial Classification revision 4 (ISIC Rev. 4).

  • Data: Real and PPP-adjusted GDP in US millions of dollars, national accounts (household consumption, investment, government consumption, exports and imports), exchange rates and population figures.
  • Geographical coverage: Countries around the world
  • Time span: from 1950-2011 (version 8.1)
  • Available at: Online

Correlates of War Bilateral Trade

COW seeks to facilitate the collection, dissemination, and use of accurate and reliable quantitative data in international relations. Key principles of the project include a commitment to standard scientific principles of replication, data reliability, documentation, review, and the transparency of data collection procedures

  • Data: Total national trade and bilateral trade flows between states. Total imports and exports of each country in current US millions of dollars and bilateral flows in current US millions of dollars
  • Geographical coverage: Single countries around the world
  • Time span: from 1870-2009
  • Available at: Online here
  • This data set is hosted by Katherine Barbieri, University of South Carolina, and Omar Keshk, Ohio State University.

World Bank Open Data – World Development Indicators

Free and open access to global development data. Access it here.

World Trade Organization – WTO

The WTO provides quantitative information in relation to economic and trade policy issues. Its data-bases and publications provide access to data on trade flows, tariffs, non-tariff measures (NTMs) and trade in value added.

  • Data: Many series on tariffs and trade flows
  • Geographical coverage: Countries around the world
  • Time span: Since 1948 for some series
  • Available at: Online here
WTO – World Trade Organization

SMOKA Science Archive

The Subaru-Mitaka-Okayama-Kiso Archive holds about 15 TB of astronomical data from facilities run by the National Astronomical Observatory of Japan. All data becomes publicly available after an embargo period of 12-24 months (to give the original observers time to publish their papers).

Graph Datasets

Multi-Domain Sentiment Dataset

The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from many product types (domains). Some domains (books and dvds) have hundreds of thousands of reviews. Others (musical instruments) have only a few hundred. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed. Access it here.
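Where binary labels are needed, a common convention with this dataset (a convention, not something the dataset enforces) is a simple threshold on the star rating:

# Sketch: map 1-5 star ratings to binary sentiment labels
def star_to_label(stars: int) -> int:
    # >3 stars => positive (1), otherwise negative (0);
    # 3-star reviews are often excluded as ambiguous
    return 1 if stars > 3 else 0

print([star_to_label(s) for s in (1, 2, 4, 5)])  # [0, 0, 1, 1]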

A Global Database of Society

Supported by Google Jigsaw, the GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, themes, sources, emotions, counts, quotes, images and events driving our global society every second of every day, creating a free open platform for computing on the entire world.

The Yahoo News Feed: Ratings and Classification Data

Dataset is 1.5 TB compressed, 13.5 TB uncompressed

Yahoo! Music User Ratings of Musical Artists, version 1.0 (423 MB)

This dataset represents a snapshot of the Yahoo! Music community’s preferences for various musical artists. The dataset contains over ten million ratings of musical artists given by Yahoo! Music users over the course of a one month period sometime prior to March 2004. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms. The dataset may serve as a testbed for matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 423 MB.
 

Yahoo! Movies User Ratings and Descriptive Content Information, v.1.0 (23 MB)

This dataset contains a small sample of the Yahoo! Movies community’s preferences for various movies, rated on a scale from A+ to F. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset also contains a large amount of descriptive information about many movies released prior to November 2003, including cast, crew, synopsis, genre, average ratings, awards, etc. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms, including hybrid content and collaborative filtering algorithms. The dataset may serve as a testbed for relational learning and data mining algorithms as well as matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 23 MB.
 

Yahoo News Video dataset, version 1.0 (645MB)

The dataset is a collection of 964 hours (22K videos) of news broadcast videos that appeared on Yahoo news website’s properties, e.g., World News, US News, Sports, Finance, and a mobile application during August 2017. The videos were either part of an article or displayed standalone in a news property. Many of the videos served in this platform lack important metadata, such as an exhaustive list of topics associated with the video. We label each of the videos in the dataset using a collection of 336 tags based on a news taxonomy designed by in-house editors. In the taxonomy, the closer the tag is to the root, the more generic (topically) it is.
etc…

Other Datasets

More than 1 TB

  • The 1000 Genomes project makes 260 TB of human genome data available
  • The Internet Archive is making an 80 TB web crawl available for research 
  • The TREC conference made the ClueWeb09 [3] dataset available a few years back. You’ll have to sign an agreement and pay a nontrivial fee (up to $610) to cover the sneakernet data transfer. The data is about 5 TB compressed.
  • ClueWeb12  is now available, as are the Freebase annotations, FACC1 
  • CNetS at Indiana University makes a 2.5 TB click dataset available 
  • ICWSM made a large corpus of blog posts available for their 2011 conference. You’ll have to register (an actual form, not an online form), but it’s free. It’s about 2.1 TB compressed. The dataset consists of over 386 million blog posts, news articles, classifieds, forum posts and social media content between January 13th and February 14th. It spans events such as the Tunisian revolution and the Egyptian protests (see http://en.wikipedia.org/wiki/January_2011 for a more detailed list of events spanning the dataset’s time period). Access it here
  • The Yahoo News Feed dataset is 1.5 TB compressed, 13.5 TB uncompressed
  • The Proteome Commons makes several large datasets available. The largest, the Personal Genome Project , is 1.1 TB in size. There are several others over 100 GB in size.

More than 1 GB

  • The Reference Energy Disaggregation Data Set  has data on home energy use; it’s about 500 GB compressed.
  • The Tiny Images dataset  has 227 GB of image data and 57 GB of metadata.
  • The ImageNet dataset  is pretty big.
  • The MOBIO dataset  is about 135 GB of video and audio data
  • The Yahoo! Webscope program makes several 1 GB+ datasets available to academic researchers, including an 83 GB data set of Flickr image features and the dataset used for the 2011 KDD Cup from Yahoo! Music, which is a bit over 1 GB.
  • Freebase makes regular data dumps available. The largest is their Quad dump , which is about 3.6 GB compressed.
  • Wikipedia made a dataset containing information about edits available for a recent Kaggle competition [6]. The training dataset is about 2.0 GB uncompressed.
  • The Research and Innovative Technology Administration (RITA) has made available a dataset about the on-time performance of domestic flights operated by large carriers. The ASA compressed this dataset and makes it available for download.
  • The wiki-links data made available by Google is about 1.75 GB total.
  • Google Research released a large 24GB n-gram data set back in 2006 based on processing 10^12 words of text and published counts of all sequences up to 5 words in length.

Power and Energy Consumption Open Datasets

These data are intended to be used by researchers and other professionals working in power and energy related areas and requiring data for design, development, test, and validation purposes. These data should not be used for commercial purposes.

The Million Playlist Dataset (Spotify)

A dataset and open-ended challenge for music recommendation research (RecSys Challenge 2018). Sampled from the over 4 billion public playlists on Spotify, this dataset of 1 million playlists consists of over 2 million unique tracks by nearly 300,000 artists, and represents the largest public dataset of music playlists in the world. Access it here
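
The dump ships as JSON “slice” files, each holding a batch of playlists. A minimal sketch for peeking at one slice (the filename and key names are assumptions based on the dataset's documented layout):

import json

with open("mpd.slice.0-999.json", encoding="utf-8") as f:  # hypothetical slice file
    slice_data = json.load(f)

# each playlist record carries a name and its list of tracks
for playlist in slice_data["playlists"][:3]:
    print(playlist["name"], len(playlist["tracks"]))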

Regression Analysis Cheat Sheet

Hotel Reviews Dataset from Yelp

20k+ Hotel Reviews from Yelp for 5 Star Hotels in Las Vegas.

This dataset can be used for the following applications and more:

Analyzing trends, Sentiment Analysis / Opinion Mining, Competitor Analysis. Access it here.

A truncated version with 500 reviews is also available on Kaggle here

Motorcycle Crash data

1- Texas: Perform specific queries and analysis using Texas traffic crash data.

2- BTS: Motorcycle Rider Safety Data

3- National Transportation Safety Board: US Transportation Fatalities in 2019

4- Fatal single vehicle motorcycle crashes

5- Motorcycle crash causes and outcomes : pilot study

6- Motorcycle Crash Causation Study: Final Report

Natural Disasters News Articles Dataset

Download a collection of news articles relating to natural disasters over an eight-month period. Access it here.

World Population Data by Country and Age Group

1- Worldometer: Countries in the world by population (2021)

2- Worldometer: Current World Population Live

Investment-Related Dataset with both Qualitative and Quantitative Variables

1- Numer.ai: Anonymized and feature-normalized financial data, which is interesting for machine learning applications. Download here

2- Snowflake Data Marketplace: Snowflake Data Marketplace gives data scientists, business intelligence and analytics professionals, and everyone who desires data-driven decision-making, access to more than 375 live and ready-to-query data sets from more than 125 third-party data providers and data service providers

3- Quandl: The premier source for financial, economic and alternative datasets, serving investment professionals.

National Obesity Monitor

The National Health and Nutrition Examination Survey (NHANES) is conducted every two years by the National Center for Health Statistics and funded by the Centers for Disease Control and Prevention. The survey measures obesity rates among people ages 2 and older. Find the latest national data and trends over time, including by age group, sex, and race. Data are available through 2017-2018, with the exception of obesity rates for children by race, which are available through 2015-2016. Access here

State of Childhood Obesity

The World’s Nations by Fertility Rate 2021

The world’s nations’ fertility rates

Total number of deaths due to Covid19 vis-à-vis Population in million

Google searches for different emotions during each hour of the day and night

Where do the world’s CO2 emissions come from? This map shows emissions during 2019. Darker areas indicate areas with higher emissions


Global Linguistic Diversity


Where in the world are the densest forests? Darker areas represent higher density of trees.


Likes and Dislikes per movie genre


Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset

NCEI first developed the Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset in the early 1990s. Subsequent iterations include version 2 in 1997, version 3 in May 2011, and version 4 in October 2018.

Are there any places where the climate is recently getting colder?

Python Cheat Sheet

Python Beginners Cheat Sheet

Data Sciences Cheat Sheet


Pandas Cheat Sheet

Electric power consumption (kWh per capita)

The World’s Most Eco-Friendly Countries

Alternate Source from Wikipedia : List of countries by carbon dioxide emissions per capita


Worldwide CO2 Emissions

Alcohol-Impaired Driving Deaths by State & County [US]

Alcohol Impaired Driving by State

Alcohol Impaired Driving by County

% change in life expectancy from 2020 to 2021 across the globe


This is how life expectancy is calculated.

How Many Years Till the World’s Reserves Run Out of Oil?


Data Source: Here. Note that these values can change with time based on the discovery of new reserves and changes in annual production.

Which energy source has the least disadvantages?

How many People Did Nuclear Energy Kill?

Here’s a paper on the wind fatalities

https://www.ipcc.ch/site/assets/uploads/2018/02/07_figure_7.7-813×1024.png

Human development index (HDI) by world subdivisions


The Human Development Index (HDI) is a statistic composite index of life expectancy, education (mean years of schooling completed and expected years of schooling upon entering the education system), and per capita income indicators, which are used to rank countries into four tiers of human development.

Data source: Subnational Human Development Index website

US Streaming Services Market Share, 2020 vs 2021


Number of tweets deleted by month in 2020

Tweet Deleter

Football/Soccer Leagues with the fairest distributions of money have seen the most growth in long-term global interest.


How Much Does Your Favorite Fast Food Brand Spend on Ads?

Sources:

https://www.statista.com/statistics/286541/mcdonald-s-advertising-spending-worldwide/

https://www.statista.com/statistics/306676/ad-spend-subway-usa/

https://www.statista.com/statistics/308930/dominos-pizza-advertising-spending-usa/

https://www.statista.com/statistics/306690/ad-spend-wednys-usa/

https://www.statista.com/statistics/306694/ad-spend-burger-king-usa/

https://www.statista.com/statistics/1072559/advertising-expense-chick-fil-a/

https://www.statista.com/statistics/275195/starbucks-advertising-spending-in-the-us

Historical population count of Western Europe


Results from survey on how to best reduce your personal carbon footprint


Data from Ipsos MORI

Where does the world’s non-renewable energy come from? 


The data comes from the Global Power Plant Database. The Global Power Plant Database is a comprehensive, open source database of power plants around the world. It centralizes power plant data to make it easier to navigate, compare and draw insights for one’s own analysis. The database covers approximately 30,000 power plants from 164 countries and includes thermal plants (e.g. coal, gas, oil, nuclear, biomass, waste, geothermal) and renewables (e.g. hydro, wind, solar). Each power plant is geolocated and entries contain information on plant capacity, generation, ownership, and fuel type. It will be continuously updated as data becomes available.

Recorded Music Industry Revenues from 1997 to 2020


Source: https://www.riaa.com/

US Trade Surpluses and Deficits by Country (2020)

Facebook Monthly Active Users

Facebook data is based on end-of-year figures from 2004 to 2020


Source: SeeMetrics.com

Heat map of the past 50,000 earthquakes pulled from USGS sorted by magnitude


Source:  USGS website

Where do the world’s methane (CH4) emissions come from?

Darker areas indicate areas with higher emissions.


Source: Data comes from EDGARv5.0 website and Crippa et al. (2019)

Earth Surface Albedo (1950 to 2020)

Data Source: ECMWF ERA5

Wealth of Forbes’ Top 100 Billionaires vs All Households in Africa

Sources:
Forbes’ 35th Annual World’s Billionaires List
Credit Suisse Global Wealth Report 2020
United Nations World Population Prospects

Forbes Billionaires list

United Nations World Population Prospects

Credit Suisse Global Wealth Report 2020

20 years of Apple sales in a minute

Source: Apple’s quarterly and annual financial filings with the SEC over the last 20 years

Source: Wikipedia

Racial Diversity of Each State (Based on US Census 2019 Estimates)


Computation:

Suppose your state is 60% orc, 30% undead, and 10% tauren. The chance that two randomly selected residents are of the same race is as follows:

  • 36% chance (60%²) of two orcs

  • 9% chance (30%²) of two undead

  • 1% chance (10%²) of two tauren

For a total of 46%. The diversity index would be 100% minus that, or 54%.
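The same-race probability is just a sum of squared population shares, so the diversity index is easy to compute for any mix. A minimal Python sketch of that arithmetic, using the hypothetical shares from the example above:

```python
# Diversity index: 100% minus the probability that two randomly chosen
# residents belong to the same group (a Simpson-style index).
def diversity_index(shares):
    """shares: group population shares that sum to ~1.0"""
    same_group = sum(s ** 2 for s in shares)  # P(two residents match)
    return 1.0 - same_group

# The hypothetical state above: 60% orc, 30% undead, 10% tauren.
# Same-group probability = 0.36 + 0.09 + 0.01 = 0.46 -> diversity 0.54.
print(diversity_index([0.60, 0.30, 0.10]))  # ≈ 0.54
```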

Race and Ethnicity in the US

A curated, daily feed of newly published datasets in machine learning

Machine Learning: CIFAR-10 Dataset


The CIFAR-10 dataset consists of 60000 32×32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
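For readers who want to poke at those numbers, CIFAR-10 ships with most deep learning toolkits. A minimal sketch, assuming TensorFlow is installed (torchvision's CIFAR10 class is an equivalent route):

```python
# Download CIFAR-10 and confirm the train/test split quoted above.
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3): 50,000 training images, 32x32 RGB
print(x_test.shape)   # (10000, 32, 32, 3): 10,000 test images
```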

Machine Learning: ImageNet

The ImageNet dataset contains 14,197,122 images annotated according to the WordNet hierarchy. Since 2010 the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images.

Machine Learning: The MNIST Database of Handwritten Digits

The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. Access it here.
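MNIST is just as easy to load for a first experiment. A minimal sketch, again assuming TensorFlow (the dataset is also downloadable directly from the page linked above):

```python
# Load MNIST and scale pixels to [0, 1]; little other preprocessing is
# needed since the digits are already size-normalized and centered.
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, x_test.shape)  # (60000, 28, 28) (10000, 28, 28)
x_train, x_test = x_train / 255.0, x_test / 255.0
```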

The Massively Multilingual Image Dataset (MMID)

MMID is a large-scale, massively multilingual dataset of images paired with the words they represent, collected at the University of Pennsylvania. The dataset is doubly parallel: for each language, words are stored parallel to images that represent the word, and parallel to the word’s translation into English (and corresponding images). Documentation.

AWS CLI Access (No AWS account required)

aws s3 ls s3://mmid-pds/ --no-sign-request
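The same no-account access works programmatically. A minimal boto3 sketch using an unsigned client (the equivalent of the --no-sign-request flag above):

```python
# List a few objects from the public MMID bucket without AWS credentials.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
response = s3.list_objects_v2(Bucket="mmid-pds", MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```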


Capitol insurrection arrests per million people by state


How have cryptocurrencies done during the Pandemic?


Data Source: Downloaded performance data on these cryptocurrencies from Investing.com, which provides free historical data

Share of US Wealth by Generation


Source: US Federal Reserve

Top 100 Cryptocurrencies by Market Cap


Data Source: https://coinmarketcap.com/

 Crypto race: DOGE vs BTC, last 365 days


Data sources: Coindesk BTC, Coindesk DOGE

 Yearly Performance of TOP 100 cryptocurrencies

What if you bought $100 worth of X a year ago?

12,000 years of human population dynamics


Countries with a higher Human Development Index (HDI) than the European Union (EU)

HDI is calculated by the UN every year to measure a country’s development using average life expectancy, education level, and gross national income per capita (PPP). The EU has a collective HDI of 0.911.

Data Source: Here

Countries with a higher Human Development Index (HDI) than the United States (US)

Data source: Human Development Report 2020

Child marriage by country, by gender

Data on the percentage of children married before reaching adulthood (18 years).

Data source: The State of the World’s Children 2019

 

Wars with greater than 25,000 deaths by year


Data Source : Wikipedia

Population Projection for China and India till 2050

This graphic shows India’s population overtaking China’s.

Data Source: Here

Relative cumulative and per capita CO2 emissions 1751-2017

 


Data Source: https://ourworldindata.org

Formula 1 Cumulative Wins by Team (1950-2021)


Data Source : https://www.f1-fansite.com/f1-results/

Countries with the most nuclear warheads (linear scale)


Data source: Wikipedia

Using machine learning methods to group NFL quarterbacks into archetypes


Data Source:

The author collected a series of rushing and passing statistics for NFL quarterbacks from 2015-2020 and applied a machine learning algorithm called clustering, which automatically sorts observations into groups based on shared common characteristics using a mathematical “distance metric.”

The idea was to use machine learning to determine NFL quarterback archetypes: to agnostically determine which quarterbacks were truly “mobile” quarterbacks and which were “pocket passers” that relied more on passing. I used a number of metrics in my actual clustering analysis, but they can be effectively summarized across two dimensions, passing and rushing, which can be further roughly summarized by two metrics: passer rating and rushing yards per year. Plotting the quarterbacks along these dimensions and plotting the groups chosen by the clustering methodology shows how cleanly the methodology selected the groups.

Read this blog article on the process for more information if you’re interested, or just check out this blog in general if you found this interesting!

Data: Collected from the ESPN API
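For readers who want to reproduce the general idea, here is a minimal sketch of that clustering step with scikit-learn. The CSV filename and column names are hypothetical stand-ins for the ESPN-derived statistics, and the author's actual metric set was larger:

```python
# Cluster quarterbacks into two archetypes from two summary metrics.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

qbs = pd.read_csv("qb_stats_2015_2020.csv")          # hypothetical file
features = ["passer_rating", "rush_yards_per_year"]  # hypothetical columns
X = StandardScaler().fit_transform(qbs[features])    # put metrics on one scale
qbs["archetype"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(qbs.groupby("archetype")[features].mean())     # mobile vs pocket profile
```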

2M rows of 1-min S&P bars (12 years of stock data) – 2008-2021

Intraday Stock Data (1 min) – S&P 500 – 2008-21: 12 years of 1 minute bars for data science / machine learning.

Granular stock bar data for research is difficult to find and expensive to buy. The author has compiled this library from a variety of sources and is making it available for free.

One compressed CSV file with 9 columns and 2.07 million rows worth of 1 minute SPY bars.  Access it here
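Once downloaded, the file is small enough to explore directly in pandas, which reads compressed CSVs natively. A minimal sketch; the filename and column names here are hypothetical, so check the dataset's own description:

```python
import pandas as pd

# Load ~2.07M one-minute SPY bars and downsample to daily closes.
bars = pd.read_csv("sp500_1min_2008_2021.csv.gz",
                   parse_dates=["timestamp"], index_col="timestamp")
print(bars.shape)  # expect roughly 2.07 million rows
daily_close = bars["close"].resample("1D").last().dropna()
print(daily_close.tail())
```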

A global database of COVID-19 vaccinations

Cumulative number of COVID-19 doses administered by country.

COVID-19 vaccine doses administered per 100 people versus gross domestic product per capita.

Timeline of innovation in the development of vaccines.

Datasets: A live version of the vaccination dataset and documentation are available in a public GitHub repository here. These data can be downloaded in CSV and JSON formats. PDF.
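Because the live dataset is published as flat files on GitHub, it can be pulled straight into pandas. A minimal sketch; the URL reflects the repository layout at the time of writing, and the column names should be checked against the repo's documentation:

```python
import pandas as pd

url = ("https://raw.githubusercontent.com/owid/covid-19-data/master/"
       "public/data/vaccinations/vaccinations.csv")
vax = pd.read_csv(url, parse_dates=["date"])

# Latest reported cumulative doses for each country.
latest = vax.sort_values("date").groupby("location").tail(1)
print(latest[["location", "date", "total_vaccinations"]].head())
```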

 A list of available datasets for machine learning in manufacturing

Industrial ML Datasets: a curated list of datasets publicly available for machine learning research in the area of manufacturing.

Predictive Maintenance and Condition Monitoring

Diesel Engine Faults Features (2020) – feature type: Signal; feature count: 84; target variable: C (4); instances: 3,500; official train/test split: no; data source: Synthetic; format: MAT. Link

Process Monitoring

High Storage System Anomaly Detection (2018) – feature type: Signal; feature count: 20; target variable: C (2); instances: 91,000; official train/test split: no; data source: Synthetic; format: CSV. Link

Predictive Quality and Quality Inspection

Casting Product Quality Inspection (2020) – feature type: Image (300×300 and 512×512); target variable: C (2); instances: 7,348; official train/test split: yes; data source: Real; format: JPG. Link

Process Parameter Optimization

Laser Welding (2020) – feature type: Signal; feature count: 13; instances: 361; official train/test split: no; data source: Real; format: XLS. Link

Data Analytics Certification Questions and Answers Dumps

Datasets needed for Crop Disease Identification using image processing

Here is a collection of datasets with images of leaves

and more generic image datasets that include plant leaves

http://visualgenome.org/

http://image-net.org/

Plant Phenotyping

One hundred plant species datasets

cvonline 

A Database of Leaf Images: Practice towards Plant Conservation with Plant Pathology

Survival Analysis datasets for machines


English alphabet organized by each letter’s note in ABC


Discover datasets hosted in thousands of repositories across the Web using datasetsearch.research.google.com


Create, maintain, and contribute to a long-living dataset that will update itself automatically across projects.

Datasets should behave like git repositories.


Learn how to create, maintain, and contribute to a long-living dataset that will update itself automatically across projects, using git and DVC as versioning systems, and DAGsHub as a host for the datasets. 

Human Rights Measurement Initiative Datasets


World Wide Energy Production by Source 1860 – 2019


Data source: https://ourworldindata.org/energy

 Project Sunroof – Solar Electricity Generation Potential by Census Tract/Postal Code

 Courtesy of Google’s Project Sunroof: This dataset essentially describes the rooftop solar potential for different regions, based on Google’s analysis of Google Maps data to find rooftops where solar would work, and aggregate those into region-wide statistics.

It comes in a couple of aggregation flavors – by census tract, where the region name is the census tract id, and by postal code, where the name is the postal code. Each also contains latitude/longitude bounding boxes and averages, so that you can download based on those, and you should be able to do custom larger aggregations using them, if you’d like.

Carbon emission arithmetic + hard v. soft science


Data sources: video from the data-driven documentary The Fallen of World War II. Here and Here

Most popular Youtuber in every country 2021

What Does 1GB of Mobile Data Cost in Every Country?


Key Concepts of Data Science

A large dataset aimed at teaching AI to code, it consists of some 14M code samples and about 500M lines of code in more than 55 different programming languages, from modern ones like C++, Java, Python, and Go to legacy languages like COBOL, Pascal, and FORTRAN.

GitHub repo:

Download page

NSRDB: National Solar Radiation Database

 Download instructions are here

Cheat Sheet for Machine Learning, Data Science.


Emigrants from the UK by Destination


Data source: Originally at the location marked on the Sankey Flow but is now here

Direct link to the spreadsheet used

US Rivers and Streams Dataset

Data source: https://hub.arcgis.com/


Bubble Chart that compares the GDP of the G20 Countries

Data source: https://databank.worldbank.org/home.aspx

Desktop OS Market Share 2003 – 2021


Data source: w3schools

National Parks of North America


Data Source: DataBayou

 NPS.gov, Open.canada.ca, and sig.conanp.gob.mx 

Inflation of Bitcoin and DogeCoin vs. Federal Reserve target


Data source:

Percentage of women who experienced physical or sexual violence since the age of 15 in the EU


Data Source: The Guardian

The whole report –  Questionnaire

Canadian Interprovincial Migration


Some context  here

Data  scraped from StatsCan

Covid-19 Vaccination Doses Administered per 100 in the G20

Data source: https://ourworldindata.org/covid-vaccinations

What does per 100 mean?

When the whole country is double vaccinated, the value will be 200 doses per 100 population. At the moment the UK is at about 85, because ~70% of the population has had at least one dose and ~15% of the population (a subset of that 70%) have had two. Hence ~30% are currently unprotected.
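The arithmetic behind that number is just population shares times doses, summed. A tiny sketch using the UK figures quoted above:

```python
# Doses per 100 people = 100 * (share with >= 1 dose + share with 2 doses),
# since the twice-dosed group contributes one extra dose each.
one_dose_share = 0.70   # ~70% have had at least one dose
two_dose_share = 0.15   # ~15% (a subset of the 70%) have had a second dose
print(100 * (one_dose_share + two_dose_share))  # 85.0 doses per 100 people
```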

Import/Export of Conventional Arms by Different Countries over past 2 decades

Data Source: SIPRI Arms Transfer Database

Aggregated disease comparison dataset

Data Source: Here and Here

According to the author of the source data: “For the 1918 Spanish Flu, the data was collected by knowing that the total counts were 500M cases and 50M deaths, and then taking a fraction of that per day based on the area of this graph image” – the graph used is here:


Trending Google Searches by State Between 2018 and 2020

Data source: https://trends.google.com. Trending topics from 2010 to 2019 were taken from Google’s annual Year in Search summaries (2010-2019)

The full, ~11 minute video covering the whole 2010s decade is available at https://youtu.be/xm91jBeN4oo

Google Trends provides weekly relative search interest for every search term, along with the interest by state. Using these two datasets for each term, we’re able to calculate the relative search interest for every state for a particular week. Linear interpolation was used to calculate the daily search interest.
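The weekly-to-daily step described above is a one-liner in pandas. A minimal sketch, where weekly is a hypothetical Series of weekly relative search interest indexed by date:

```python
import pandas as pd

def weekly_to_daily(weekly: pd.Series) -> pd.Series:
    """Linearly interpolate weekly search-interest values to daily ones."""
    days = pd.date_range(weekly.index.min(), weekly.index.max(), freq="D")
    return weekly.reindex(days).interpolate(method="linear")

# Example with made-up values for two consecutive weeks:
weekly = pd.Series([40, 60], index=pd.to_datetime(["2020-01-05", "2020-01-12"]))
print(weekly_to_daily(weekly))  # rises ~2.86 points per day between the weeks
```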

Market capitalization in billion dollars of the Top 20 Cryptocurrencies as of 2021-05-20

Data source: CoinMarket from end of 2013 until present


Top Chess Players From 2000-2020

Data source: https://ratings.fide.com/

The y-axis shows world Elo ratings (called FIDE ratings).

Comparing Emissions Sources – How to Shrink your Carbon Footprint More Effectively


 Data sources: Here

Source article: Here

Oil and gas-fired power plants in the world

Dependence on fossil fuels


Data is from the Global Power Plant Database (World Resources Institute)

See map’s description here


Top 100 Reddit posts of all time


Source: r/all on Reddit

Tool used: https://www.meta-chart.com

Fastest routes on land (and sometimes, boat) between all 990 pairs of European capitals


Source: Reddit

From the author: I started with data on roads from naturalearth.com, which also includes some ferry lines. I then calculated the fastest routes (assuming a speed of 90 km/h on roads, and 35 km/h on boat) between each pair of 45 European capitals. The animation visualizes these routes, with brighter lines for roads that are more frequently “traveled”.

In reality these are of course not the most traveled roads, since people don’t go from all capitals to all other capitals in equal measure. But I thought it would be fun to visualize all the possible connections.

The model is also very simple, and does not take into account varying speed limits, road conditions, congestion, border checks and so on. It is just for fun!

In order to keep the file size manageable, the animation only shows every tenth frame.

Is Russia, Turkey or country X really part of Europe? That of course depends on the definition, but it was more fun to include them than to exclude them! The Vatican is however not included since it would just be the same as the Rome routes. And, unfortunately, Nicosia on Cyprus is not included due to an error on my part. It should be!

Link to final still image in high resolution on my twitter
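The speed-weighted routing the author describes maps directly onto a weighted shortest-path search. A minimal sketch with networkx, using a toy graph and the author's assumed speeds (the city names and distances here are purely illustrative):

```python
import networkx as nx

G = nx.Graph()
G.add_edge("Paris", "Brussels", km=310, mode="road")
G.add_edge("Brussels", "Copenhagen", km=900, mode="road")
G.add_edge("Copenhagen", "Oslo", km=600, mode="boat")

def hours(u, v, attrs):
    """Travel time per edge: 90 km/h on roads, 35 km/h by boat."""
    return attrs["km"] / (90 if attrs["mode"] == "road" else 35)

print(nx.shortest_path(G, "Paris", "Oslo", weight=hours))
print(round(nx.shortest_path_length(G, "Paris", "Oslo", weight=hours), 1), "hours")
```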

Pokemon Dataset

  1. Dataset of all 825 Pokemon (this includes Alolan Forms). It would be preferable if there are at least 100 images of each individual Pokemon.

https://github.com/veekun/pokedex: This is a Python library slash pile of data containing a whole lot of data scraped from Pokémon games. It’s the primary guts of veekun.

https://pokeapi.co/about

2. This dataset comprises more than 800 Pokémon spanning 8 generations.

Using this dataset has been fun for me. I used it to create a mosaic of Pokémon using an image as reference. You can find it here and it’s free to use: Couple Mosaic (powered by Pokémon)

Here is the data type information in the file:

  • Name: Pokemon Name
  • Type: Type of Pokemon, like Grass / Fire / Water, etc.
  • HP: Hit Points
  • Attack: Attack Points
  • Defense: Defence Points
  • Sp. Atk: Special Attack Points
  • Sp. Def: Special Defence Points
  • Speed: Speed Points
  • Total: Total Points
  • url: Pokemon web-page
  • icon: Pokemon Image

Data File: Pokemon-Data.csv

30×30 m Worldwide High-Resolution Population and Demographics Data

ETL pipeline for Facebook’s research project to provide detailed large-scale demographics data. It’s broken down into roughly 30×30 m grid cells and provides info on groups by age and gender.

Population Density Overview

Data Source and API for access

Article about Dataset at Medium

Gridded global datasets for Gross Domestic Product and Human Development Index over 1990–2015

Rasterized GDP dataset – basically a heat map of global economic activity.

Gap-filled multiannual datasets in gridded form for Gross Domestic Product (GDP) and Human Development Index (HDI)

Data source here:

Decrease in worldwide infant mortality from 1950 to 2020


Data Sources: United Nations, CIA World Factbook, IndexMundi.


Countries of the world sorted by those that have warmed the most in the last 10 years, showing temperatures from 1890 to 2020 


Data source: Gistemp temperature data

The GISS Surface Temperature Analysis ver. 4 (GISTEMP v4) is an estimate of global surface temperature change. Graphs and tables are updated around the middle of every month using current data files from NOAA GHCN v4 (meteorological stations) and ERSST v5 (ocean areas), combined as described in our publications Hansen et al. (2010) and Lenssen et al. (2019).

Climate change concern vs personal spend to reduce climate change


Data Source: Competitive Enterprise Institute (PDF)

 Less than 20 firms produce over a third of all carbon emissions

The Illusion of Choice in Consumer Brands


Buying a chocolate bar? There are seemingly hundreds to choose from, but it’s just the illusion of choice. They pretty much all come from Mars, Nestlé, or Mondelēz (which owns Cadbury).

Source: Visual Capitalist

Yearly Software Sales on PlayStation Consoles since 1994


Some context for these numbers:

  • PS4 holds the record for being the console to have sold the most games in video game history (> 1.622B units)
    • Previous record holder was PS2 at 1.537B games sold
  • PS4 holds the record for having sold the most games in a single year (> 300M units in FY20)
  • FY20 marks the biggest yearly software sales in PlayStation ecosystem with more than 338M units
  • Since the PS5 release, Sony has combined PS4/PS5 software sales
  • In FY12, Sony combined PS2/PS3 and PSP/VITA software sales
  • Sony stopped disclosing software sales in FY13/14

Yearly Hardware Sales of PlayStation Consoles since 1994


Sony combined PS2/PS3 hardware sales in FY12 and combined PSP/VITA sales in FY12/13/14

Cybertruck vs F150 Lightning pre-orders, by time since debut


Source: Ford exec tweeting about preorder numbers this week

Top 100 Most Populous City Proper in the world


The city with 32 million is Chongqing; the truncated labels “Shan,” “Beijin,” and “Guangzho” are Shanghai, Beijing, and Guangzhou.

 

Tax data for different countries

Dataset is here

What do Europeans feel most attached to – their region, their country, or Europe?


Data source: Builds on data from the 2021 European Quality of Government Index. You can read more about the survey and download the data here

Cost of 1GB mobile data in every country


Dataset: Visual Capitalist

Frequency of all digrams in 18 languages, diacritics included 


Dataset (according to author): Dictionaries are scattered on the internet and had to be borrowed from several sources: the Scrabble3d project, and Linux spellcheck dictionaries. The data can be found in the folder “Avec_diacritiques”.

Criteria for choosing a dictionary:
– No proper nouns
– “Official” source if available
– Inclusion of inflected forms
– Among two lists, the larger was preferred
– No or very rare abbreviations if possible – but hard to detect in unknown languages and across hundreds of thousands of words.

Mapped: The World’s Nuclear Reactor Landscape


Dataset: Visual Capitalist

Database of 999 chemicals based on liver-specific carcinogenicity

The author found this dataset in a more accessible format upon searching for the keyword “CPDB” (Carcinogenic Potency Database) in the National Library of Medicine Catalog. Check out this parent website for the data source and dataset description. The dataset referenced in OP’s post concerns liver-specific carcinogens, which are marked by the “liv” keyword as described in the dataset description’s Tissue Codes section.

SMS Spam Collection Data Set

Download: Data Folder, Data Set Description

The SMS Spam Collection is a public set of labeled SMS messages that have been collected for mobile phone spam research.

Open Datasets for Autonomous Driving

  • A2D2 Dataset
  • ApolloScape Dataset
  • Argoverse Dataset
  • Berkeley DeepDrive Dataset
  • CityScapes Dataset
  • Comma2k19 Dataset
  • Google-Landmarks Dataset
  • KITTI Vision Benchmark Suite
  • LeddarTech PixSet Dataset
  • Level 5 Open Data
  • nuScenes Dataset
  • Oxford Radar RobotCar Dataset
  • PandaSet
  • Udacity Self Driving Car Dataset
  • Waymo Open Dataset

Open Datasets people are looking for [Help if you can]

  1. Looking for Dataset on the outcomes of abstinence-only sex education.
  2. Looking for a dataset containing coronavirus self-test (if this is a thing globally) pictures for ML use
  3. Looking for Beam alignment 5G vehicular networks dataset
  4. Looking for tidy dataset for multivariate analysis (PCA, FA, canonical correlations, clustering)
  5. Indian all types of Fuel location datasets
  6. Curated social network datasets with summary statistics and background info
  7. Looking for textile crop disease datasets such as jute, flax, hemp
  8. Shopify App Store and Chrome Webstore Datasets
  9. Looking for dataset for university chatbot
  10. Collecting real life (dirty/ugly) datasets for data analysis
  11. In Need of Food Additive/Ingredient Definition Database
  12. Recent smart phone sensor Dataset – Android
  13. Cracked Mobile Screen Image Dataset for Detection
  14. Looking for Chiller fault data in a chiller plant
  15. Looking for dataset that contains the genetic sequences of native plasmids?
  16. Looking for a dataset containing fetus size measurements at various gestational ages.
  17. Looking for datasets about mental health since 2021
  18. GPS dataset of grocery stores
  19. What is the easiest way to bulk download all of the data from this epidemiology website? (~20,000 files)
  20. Looking for Dataset on Percentage of death by US state and Canadian province grouped by cause of death?
  21. Looking for Social engineering attack dataset in social media
  22. Steam Store Games (Clean dataset) by Nik Davis
  23. Dataset that lists all US major hospitals by county
  24. Another Data that list all US major hospitals by county
  25. Looking for open source data relating privacy behavior or related marketing sets about the trustworthiness of responders?
  26. Looking for a dataset that tracks median household income by country and year
  27. Dataset on the number of specific surgical procedures performed in the US (yearly)
  28. Looking for a dataset from reddit or twitter on top posts or tweets related to crypto currency
  29. Looking for Image and flora Dataset of All Known Plants, Trees and Shrubs
  30. US total fertility rates data one the state level
  31. Dataset of Net Worth of *World* Politicians
  32. Looking for water wells and borehole datasets
  33. Looking for Crop growth conditions dataset
  34. Dataset for translate machine JA-EG
  35. Looking for Electronic Health Record (EHR) record prices
  36. Looking for tax data for different countries
  37. Musicians Birthday Datasets and Associated groups
  38. Searching for dataset related to car dealerships [1]
  39. Looking for Credit Score Approval dataset
  40. Cyberbullying Dataset by demographics
  41. Datasets on financial trends for minors
  42. Data where I can find out about reading habits? [1, 2]
  43. Data sets for global technology adoption rates
  44. Looking for any and all cat / feline cancer datasets, for both detection and treatment
  45. ITSM dictionary/taxonomy datasets for topic modeling purposes
  46. Multistage Reliability Dataset
  47. Looking for dataset of ingredients for food[1]
  48. Looking for datasets with responses to psychological questionnaires[1,2,3]
  49. Data source for OEM automotive parts
  50. Looking for dataset about gene regulation
  51. Customer Segmentation Datasets (For LTV Models)
  52. Automobile dataset, years of ownership and repairs
  53. Historic Housing Prices Dataset for Individual Houses
  54. Looking for the data for all the tokens on the Uniswap graph
  55. Job applications emails datasets, either rejection, applications or interviews
  56. E-learning datasets for impact on e learning on school/university students
  57. Food delivery dataset (Uber Eats, Just Eat, …)
  58. Data Sets for NFL Quarterbacks since 1995
  59. Medicare Beneficiary Population Data
  60. Covid 19 infected Cancer Patients datasets
  61. Looking for  EV charging behavior dataset
  62. State park budget or expansionary spending dataset
  63.  Autonomous car driving deaths dataset
  64. FMCG Spending habits over the pandemic
  65. Looking for a Question Type Classification dataset
  66. 20 years of Manufacturer/Retail price of Men’s footwear
  67. Dataset of Global Technology Adoption Rates
  68. Looking For Real Meeting Transcripts Dataset
  69. Dataset For A Large Archive Of Lyrics  [1,2,3]
  70. Audio dataset with swearing words
  71. A global, georeferenced event dataset on electoral violence with lethal outcomes from 1989 to 2017. [1,]
  72. Looking for Jaundice Dataset for ML model
  73. Looking for social engineering attack detection dataset?
  74. Wound image datasets to train ML model [1]
  75. Seeking for resume and job post dataset
  76. Labelled dataset (sets of images or videos) of human emotions [1,2]
  77. Dataset of specialized phone call transcripts
  78. Looking for Emergency Response Plan Dataset for family Homes, condo buildings and Companies
  79. Looking for Birthday wishes datasets
  80. Desperately in need of national data for real estate [1,2,]
  81. NFL playoffs games stadium attendance dataset
  82. Datasets with original publication dates of novels [1,2]
  83. Annotated Documents with Images Data Dump
  84. Looking for  dataset for “Face Presentation Attack Detection”
  85. Electric vehicle range & performance dataset [1, 2]
  86. Dataset or API with valid postal codes for US, Mexico, and Canada with country, state/province, and city/town [1, 2, 3, 4, 5, 6]
  87. Looking for Data sources regarding Online courses dropout rate, preferably by countries [1,2 ]
  88. Are there dataset for language learning [1, 2]
  89. Corporate Real Estate Data [1,2, 3]
  90. Looking for simple clinical trials datasets [1, 2]
  91. CO2 Emissions By Aircraft (or Aircraft Type) – Climate Analysis Dataset [1,2, 3, 4]
  92. Player Session/playtime dataset from games [1,2]
  93. Data sets that support Data Science (Technology, AI etc) being beneficial to sustainability [1,2]
  94. Datasets of a grocery store [1,2]
  95. Looking for mri breast cancer annotation datasets [1,2]
  96. Looking for free exportable data sets of companies by industry [1,2]
  97. Datasets on Coffee Production/Consumption [1,2]
  98. Video gaming industry datasets – release year, genre, games, titles, global data  [1,2]
  99. Looking for mobile speaker recognition dataset [1,2]
  100. Public DMV vehicle registration data [1,2]
  101. Looking for historical news articles based on industry sector [1,2]
  102. Looking for Historical state wide Divorce dataset [1,2]
  103. Public Big Datasets – with In-Database Analytics [1,2]
  104. Dataset for detecting Apple products (object detection) [1,2]

Cars for sale in Germany from 2011 to 2021

Dataset obtained by scraping AutoScout24. In the file, you will find features describing 46,405 vehicles: mileage, make, model, fuel, gear, offer type, price, horse power, registration year.

Dataset scraped from AutoScout24 with information about new and used cars.
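A quick way to sanity-check the download is to load it in pandas and aggregate by make. The filename and exact column names below are guesses based on the feature list above, so adjust them to the CSV's actual header:

```python
import pandas as pd

cars = pd.read_csv("autoscout24_germany_2011_2021.csv")  # hypothetical name
print(len(cars))  # expect ~46,405 vehicles
print(cars.groupby("make")["price"].median().sort_values(ascending=False).head(10))
```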

 

Percentage of female students in higher education by subject area


The data was obtained from the UK government website here, so unfortunately there are some things I’m unaware of regarding data and methodology.

All the passes: A visualization of ~1 million passes from 890 matches played in major football/soccer leagues/cups

  • Champions League 1999
  • FA Women’s Super League 2018
  • FIFA World Cup 2018
  • La Liga 2004 – 2020
  • NWSL 2018
  • Premier League 2003 – 2004
  • Women’s World Cup 2019

1million+ football/soccer passes visualization

Data Source: StatsBomb

Global “Urbanity” Dataset (using population mosaics, nighttime lights, & road networks)

In this project, the authors designed a spatial model able to classify urbanity levels globally and with high granularity. As the target geographic support for the model they selected the quadkey grid at level 15, which has cells of approximately 1×1 km at the equator.

Dataset:  Here 
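The quadkey grid is the Bing Maps tile scheme, so any point can be mapped to its level-15 cell with a little Web Mercator arithmetic. A minimal, dependency-free sketch of that conversion (libraries such as mercantile offer the same thing ready-made):

```python
import math

def quadkey(lat: float, lon: float, level: int = 15) -> str:
    """Quadkey of the Web Mercator tile containing (lat, lon)."""
    sin_lat = math.sin(math.radians(lat))
    x = (lon + 180.0) / 360.0
    y = 0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)
    n = 1 << level                       # tiles per side at this zoom
    tx = min(n - 1, max(0, int(x * n)))  # tile column
    ty = min(n - 1, max(0, int(y * n)))  # tile row
    digits = []
    for i in range(level, 0, -1):        # interleave the x/y tile bits
        mask = 1 << (i - 1)
        digits.append(str((1 if tx & mask else 0) + (2 if ty & mask else 0)))
    return "".join(digits)

print(quadkey(51.5074, -0.1278))  # 15-digit cell id for central London
```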

Percentage of students with disabilities in higher education by subject area


The author obtained the data from the UK Government website, so unfortunately doesn’t know the methodology or how the data was collected.

The comparison to the general public is a great idea – according to the Government site, 6% of children, 16% of working-age adults and 45% of pension-age adults are disabled.

Dataset: here

Arrests for Hate Crimes in NYC by Category, 2017-2020


The Most Successful U.S. Sports Franchises


Data source: https://www.sports-reference.com/

Adult cognitive skills (PIAAC literacy and numeracy) by Percentile and by country

According to the author (https://www.reddit.com/user/newpua_bie/), this animation depicts adult cognitive skills, as measured by the PIAAC study by OECD. Here, the numeracy and literacy skills have been combined into one. Each frame of the animation shows the xth percentile skill level of each country. Thus, you can see which countries have the highest and lowest scores among their bottom performers, median performers, and top performers. For example, when the bottom 1st percentile of each country is ranked, Japan is at the top, Russia is second, etc. Looking at the 50th percentile (median) of each country, Japan is top, then Finland, etc.

The Programme for the International Assessment of Adult Competencies (PIAAC) is a study by OECD to measure literacy, numeracy, and “problem-solving in technology-rich environments” skills for people ages 16 and up. For those of you who are familiar with the school-age children PISA study, this is essentially an adult version of it.

Dataset: PIAAC 

G7 Corporate Tax rate 1980 – 2020


Dataset: Tax Foundation

Euro 2020 (played in 2021) Group Stage Predictions Based on a Bayesian Linear Item Response Model


Data Source: UEFA qualifying match data

The model was built in Stan and was inspired by Andrew Gelman’s World Cup model shown here. These plots show posterior probabilities that the team on the y-axis will score more goals than the team on the x-axis. There is some redundancy of information here (because if I know P(England beats Scotland), then I know P(Scotland beats England)).

Data

Source: Italian National Institute of Statistics (Istituto Nazionale di Statistica)

The 15 most shared musicians on Reddit


Data source: The authors made a dataset of YouTube and Spotify shares on Reddit. More info available here

Spam vs. Legitimate Email, Average Global Emails per Day


Data Source: Here. The author computed the average per day over the June 3 – June 9, 2021 period.


Falling Fertility, 1800–2016

Data source: Here (go to the “Babies per woman,” “Income,” and “Population” links on that page).

Europe Covid-19 waves


Data Source: Here

Who is going to win EURO 2020? Predicted probabilities pooled together from 18 sources


Data source: Here

Population Density of Canada 2020


Dataset: Gathered from https://www.worldpop.org/project/

The length of each spike corresponds to population density: the longer the spike, the denser the population.

The portion of a country’s population that is fully vaccinated for COVID (as of June 2021) scales with GDP per capita.


Dataset of Chemical reaction equations

1-  https://chemequations.com/en/

2- Kaggle chemistry section 

3- Reaction datasets 

4- Chemistry datasets

5- BiomedCentral 

Maths datasets

1111 2222 3333 Equation Learning 

Datasets for Stata Structural Equation Modeling

Mathematics Dataset

SQL Queries Dataset 

SEDE (Stack Exchange Data Explorer) is a dataset comprised of 12,023 complex and diverse SQL queries and their natural language titles and descriptions, written by real users of the Stack Exchange Data Explorer out of a natural interaction. These pairs contain a variety of real-world challenges which were rarely reflected so far in any other semantic parsing dataset. Access it here

Countries of the world, ranked by population, with the 100 largest cities in the world marked

According to the author:

Each map size is proportional to population, so China takes up about 18-19% of the map space.

Countries with very far-flung territories, such as France (or the USA) will have their maps shrunk to fit all territories. So it is the size of the map rectangle that is proportional to population, not the colored area. Made in R, using data from naturalearthdata.com. Maps drawn with the tmap package, and placed in the image with the gridExtra package. Map colors from the wesanderson package.

Data source: The Economist

What businesses in different countries search for when they look for a marketing agency – “creative” or “SEO”?


Data source: Google Trends

More maps, charts and written analysis on this topic here

Is the economic gap between new and old EU countries closing?


Data source:  Eurostat

Interactive version so you can click on those circles here

Reddit r/wallstreetbets posts and comments in real-time

  • Posts

  • Comments

  • Beneath adds some useful features for shared data, like the ability to run SQL queries, sync changes in real-time, a Python integration, and monitoring. The monitoring is really useful as it lets you check out the write activity of the scraper (no surprise, WSB is most active when markets are open).
  • The scraper (which uses Async PRAW) is open source here

Global NO2 pollution data visualization June 2021

Data Source: SILAM

Shopify App Store Report: 2021

Data source: Marketplace Apps

The Chrome Webstore Report: 2021

Data source: Marketplace Apps

Percentage of Adults with HIV/AIDS in Africa


Dataset:  All the countries through the UN AIDS organization 

Recorded CDC deaths (2014 – June 16, 2021) from Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99)


Data Source: combined CDC weekly death counts 2014 – 2019 and CDC weekly death counts 2020-2021

What are the long term gains on cryptocurrencies?


Data Sources: investing.com and coingecko.com

The chart shows the average daily gain in $ if $100 were invested at a date on x-axis. Total gain was divided by the number of days between the day of investing and June 13, 2021. Gains were calculated on average 30-day prices.

Time range: from March 28, 2013, till June 13, 2021
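Under those rules the chart's metric is straightforward to recompute. A minimal pandas sketch, where prices is a hypothetical Series of daily closing prices indexed by date:

```python
import pandas as pd

def avg_daily_gain(prices: pd.Series, end="2021-06-13", invest=100.0):
    """Average $ gain per day for $100 invested at each start date,
    using 30-day average prices, per the chart's method."""
    smoothed = prices.rolling(30).mean().dropna()  # 30-day average prices
    end_price = smoothed.loc[end]
    starts = smoothed[smoothed.index < pd.Timestamp(end)]
    days_held = (pd.Timestamp(end) - starts.index).days
    total_gain = invest * (end_price / starts - 1.0)
    return total_gain / days_held  # $ per day, indexed by start date
```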

Life Expectancy and Death Probability by Age and Gender


Data source: Here

Daily Coronavirus cases in Canada vs % of Population Vaccinated


Data Source: Cases Vaccines

Google Playstore Apps with 2.3million app data on Kaggle

The Google Play Store dataset is now available on Kaggle with double the data (2.3 million Android application records) and a new attribute stating the scraped date/time.

Dataset: Get it here or here

African languages dataset

There are 3,000 or more tribes in Africa, and within those there are sub-tribes.

1- Introduction to African Languages (Harvard)

2- Languages of the world at Ethnologue

3- Britannica: Nilo-Saharan Laguages

4- Britannica: Khoisan Languages

Daily Temperature of Major Cities Dataset

Daily average temperature values recorded in major cities of the world.

 The dataset is available as separate txt files for each city here. The data is available for research and non-commercial purposes only

 Do stricter gun laws reduce firearms homicides?


Data Sources: Guns to Carry, EFSGV, CDC

According to the author: Looking at non-suicide firearms deaths by state (2019), and then grouping by the Guns to Carry rating (1-5 stars), it seems that stricter gun laws are correlated with fewer firearms homicides. Guns to Carry rates states based on “Gun friendliness” with 1 star being least friendly (California, for example), and 5 stars being most friendly (Wyoming, for example). The ratings aren’t perfect but they include considerations like: Permit required, Registration, Open carry, and Background checks to come up with a rating.

The numbers at the bottom are the average non-suicide deaths calculated within the rating group. Each bar shows the number for the individual state.

Interesting that DC is through the roof despite having strict laws. On the flip side, Maine is very friendly towards gun owners and has a very low homicide rate, despite having the highest ratio of suicides to homicides.

Obviously, lots of things to consider and this is merely a correlation at a basic level. This is a topic that interested me so I figured I’d share my findings. Not attempting to make a policy statement or anything.

Relative frequency of words in economics textbooks vs their frequency in mainstream English (the Google Books corpus)


Author

Data Source: Data for word frequency in the Google corpus is from the 2019 Ngram dataset. For details about how to work with this data, see Working With Google Ngrams: A Data-Wrangling Tale.

Data for word frequency in econ textbooks was compiled by myself by scraping words from 43 undergraduate economics textbooks. For details see Deconstructing Econospeak.

Hours per day spent on mobile devices by US adults


Author: nava_7777

Data Source: from eMarketer, as quoted by Jon Erlichman

Purpose according to the author: raw textual numbers (like in the original tweet) are hard to compare, particularly the acceleration or deceleration of a trend. I did this for myself, but maybe it is useful to somebody.

Environmental Impact of Coffee Brewing Methods


Author: Coffee_Medley

Data Source: 1 2 3

More according to the author:

  • Measurements and calculations of NG and Electricity used to heat four cups of distilled water by Coffee Medley (6/14/2021)

  • Average coffee bag and pod weight by Coffee Medley (6/14/2021)

Murders in major U.S. Cities: 2019 vs. 2020


Author: datacanbeuseful

Data source: NPR

New Harvard Data (Accidentally) Reveal How Lockdowns Crushed the Working Class While Leaving Elites Unscathed

Data source: Harvard

Support for same-sex marriage by religious group


Data source: PEW

More: Summary of religiously (un)affiliated people’s views on homosexuality, broken down into 18 countries

Daily chance of dying for Americans


Author: NortherSugarLoaf

Data source: SSA Actuarial Data

Processing: Yearly probability of death is converted to the daily probability and expressed in micromorts. Plotted versus age in years.
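That conversion is a standard one: spread the yearly risk evenly over 365 days and express the daily probability in micromorts (1 micromort = a one-in-a-million chance of death). A minimal sketch:

```python
def daily_micromorts(yearly_prob: float) -> float:
    """Daily death risk in micromorts, assuming a constant daily hazard."""
    daily_prob = 1 - (1 - yearly_prob) ** (1 / 365)
    return daily_prob * 1e6

# E.g., the ~5.8% yearly mortality quoted below for 80-year-old males:
print(round(daily_micromorts(0.058)))  # ~164 micromorts per day
```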

Micromort:

According to the author,

A few things to notice: it’s dangerous to be a newborn; the same mortality rates are reached again only in the fifties. However, mortality drops very quickly after birth, and the safest age is about ten years old. After a mortality jump in puberty – especially high for boys – mortality increases mostly exponentially with age. Every thirty years of life increases the chance of dying about ten times. At 80, the chance of dying in a year is about 5.8% for males and 4.3% for females. This mortality difference holds for all ages. The largest disparity is at about twenty-three years old, when males die at a rate about 2.7 times higher than females.

This data is from before COVID.

Here is the same graph but in linear Y axis scale

Here is the male to female mortality ratio

Mapping Global Carbon Emission Intensity (Dec 2020)


Data Source: Copernicus Atmosphere Monitoring Service (CAMS)

Religions with the most Adherents from 1945 – 2010


Data source: Zeev Maoz and Errol A. Henderson. 2013. “The World Religion Dataset, 1945-2010: Logic, Estimates, and Trends.” International Interactions, 39: 265-291.

IPO Returns 2000-2020


Data from: iposcoop.com
From the author u/nobjos: The full article on the above analysis can be found here
I have a sub, r/market_sentiment, where I do a comprehensive deep-dive on one investment strategy/topic every week! Some of the author’s popular articles are
a. Performance of Jim Cramer’s stock picks
b. Performance of buy and sell recommendations made by financial analysts in the last decade
c. Benchmarking performance of Motley Fool against the S&P 500
Funko IPO is considered to have the worst first-day return for an IPO in the last two decades.
Out of the top 10 list, only 3 Investment banks had below-average returns.
On average, IPOs did make money for the investor. But the amount is significantly different if you got allocated the IPO at offer price vs you get the IPO at market price.
Baidu.com made a whopping 354% on its listing day. Another interesting observation is that 6 out of 10 companies in the list were listed in 2000 (just before the dot-com crash).

Total number of streams per artist vs. number of Top 200 hits (Spotify Top 200 since 2017)


Author: blairfix

Data is from the Spotify Top 200 and covers the period from Jan. 1, 2017 to Jun. 9, 2021. You can download my dataset here.

For every artist that appears in the Top 200, I add up their total streams (for all songs) and the total number of songs in the dataset.

For a commentary on the data, see The Half Life of a Spotify Hit.

Number of Miss Americas by U.S. State


Data Source: Wikipedia

The World’s Nuclear Warheads


Author: academiadvice

Data Source: Federation of American Scientists – https://fas.org/issues/nuclear-weapons/status-world-nuclear-forces/

Tools: Excel, Datawrapper, https://coolors.co/

Check out the FAS site for notes and caveats about their estimates. Governments don’t just print this stuff on their websites. These are evidence-based estimates of tightly-guarded national secrets.

Of particular note – Here’s what the FAS says about North Korea: “After six nuclear tests, including two of 10-20 kilotons and one of more than 150 kilotons, we estimate that North Korea might have produced sufficient fissile material for roughly 40-50 warheads. The number of assembled warheads is unknown, but lower. While we estimate North Korea might have a small number of assembled warheads for medium-range missiles, we have not yet seen evidence that it has developed a functioning warhead that can be delivered at ICBM range.”

The population of Las Vegas over time


Data Source: Wikipedia

 The Alpha to Omega of Wikipedia


Author: feldesque

Data Source: The wikipediatrend package in R

Code published here

Glacial Inter-glacial cycles over the past 450,000 years

Source:  https://geology.utah.gov/

Global temperature change from 1850-2020

Worth noting: the glacial–interglacial cycles above are largely driven by changes in the amount of solar radiation reaching us due to variations in Earth’s orbit.

Top Companies Contributing to Open Source – 2011/2021

Source and links

The author used several sources for this video and article: GitHub Archive and CodersRank for the video, and https://opensourceindex.io/ for the analysis of the OSCI index data.

Crime Rates in the US: 1960-2021

Data sources: here and here.

The 2021 figures are straight projections and must be taken with a grain of salt. However, the assumption of a continued rise in the murder rate is not a bad one based on recent news reports, such as: here

In a property crime, a victim’s property is stolen or destroyed, without the use or threat of force against the victim. Property crimes include burglary and theft as well as vandalism and arson.

A network visualization of privacy research (83k nodes, 462k edges)

Author: FvDijk

This image was generated for my research mapping the privacy research field. The visual combines a network visualisation with manually added labels.

The data was gathered from Scopus, a high-quality academic publication database, and the visualisation was created with Gephi. The initial dataset held ~120k publications and over 3 million references, from which we selected only the papers and references in the field.

The labels were assigned by manually identifying clusters, with two independent raters assigning names from a random sample of publications; the raters agreed on 94% of the labels.

The scripts used are available on GitHub

The full paper can be found on the author’s website.

GDP (at purchasing power parity) per capita in international dollars

Author:  Simaniac

Data source: IMF

Phone Call Anxiety dataset for Millennials and Gen Z

Author: /u/CynicalScyntist

This is a randomized experiment the author conducted with 450 people on Amazon MTurk. Each person was randomly assigned to one of three writing activities in which they either (a) described their phone, (b) described what they’d do if they received a call from someone they know, or (c) described what they’d do if they received a call from an unknown number. Pictures of an iPhone with a corresponding call screen were displayed above the text box (blank, “Incoming Call,” or “Unknown”). Participants then rated their anxiety on a 1–4 scale.

Dataset: Here

Source Article

Hate Crime Statistics in New York State 2019-2021


Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps

Google Certified Cloud Professional Architect is the top-paying certification in the world: the average salary for a Google Certified Professional Cloud Architect is $175,761.

The Google Certified Cloud Professional Architect Exam assesses your ability to:

  • Design and plan a cloud solution architecture
  • Manage and provision the cloud solution infrastructure
  • Design for security and compliance
  • Analyze and optimize technical and business processes
  • Manage implementations of cloud architecture
  • Ensure solution and operations reliability

The Google Certified Cloud Professional Architect covers the following topics:

Designing and planning a cloud solution architecture: 36%

This domain tests your ability to design a solution infrastructure that meets business and technical requirements and considers network, storage, and compute resources. It also tests your ability to create a migration plan and to envision future solution improvements.

Managing and provisioning a solution Infrastructure: 20%

This domain tests your ability to configure network topologies and individual storage systems, and to design solutions using Google Cloud networking, storage, and compute services.

Designing for security and compliance: 12%

This domain assesses your ability to design for security and compliance by considering IAM policies, separation of duties, and encryption of data, and to design solutions that meet compliance requirements such as those for healthcare and financial information.

Managing implementation: 10%

This domain tests your ability to advise the development/operations team(s) to ensure successful deployment of your solution. It also tests your ability to interact with Google Cloud using the GCP SDK (gcloud, gsutil, and bq).

Ensuring solution and operations reliability: 6%

This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, establishing quality control measures, and creating release management processes.

Analyzing and optimizing technical and business processes: 16%

This domain tests how you analyze and define technical and business processes, and how you develop procedures to ensure the resilience of your solutions in production.


Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps. You will need to have the three case studies referred to in the exam open in separate tabs in order to complete the exam: Company A, Company B, and Company C.

Question 1:  Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?

 A. Have the vehicles in the field stream the data directly into BigQuery.

B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.

C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.

D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.

ANSWER1:

D

Notes/References1:

D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.

Reference: Streaming inserts · Apache Hadoop and Spark · 10 tips for building long-running clusters using Cloud Dataproc
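
A minimal sketch of what option D could look like on the existing Linux machines, assuming a hypothetical bucket name and dump path (neither comes from the case study):

# Create a bucket once, then upload each day's FTP dump in parallel:
gsutil mb -l us-central1 gs://company-a-raw-telemetry
gsutil -m cp -r /var/ftp/dumps/2021-04-01 gs://company-a-raw-telemetry/2021-04-01/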




Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?

A. Execute queries against data stored in a Cloud SQL.

B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.

C. Execute queries against data stored on daily partitioned BigQuery tables.

D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.

ANSWER2:

B

Notes/References2:

B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.

Reference: BigTable time series cluster · BigQuery
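
As a rough illustration of the row-key design in option B, here is a sketch using the cbt CLI; the project, instance, table, and column names are all hypothetical:

# Create a table keyed by vehicle_id#timestamp so one vehicle's events are contiguous:
cbt -project example-project -instance vehicle-metrics createtable telemetry
cbt -project example-project -instance vehicle-metrics createfamily telemetry stats
cbt -project example-project -instance vehicle-metrics set telemetry "v1234#20210401120000" stats:engine_temp=82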

Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?

A. Use multiple connectivity subsystems for redundancy. 

B. Require IPv6 for connectivity to ensure a secure address space. 

C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.

D. Use a functional programming language to isolate code execution cycles.

E. Treat every microservice call between modules on the vehicle as untrusted.

F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.

ANSWER3:

E and F

Notes/References3:

E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.

F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.

Reference 3: Trusted Platform Module

Question 4: For this question, refer to the Company A case study.

Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A. OpEx/CapEx allocation, LAN change management, capacity planning

B. Capacity planning, TCO calculations, OpEx/CapEx allocation 

C. Capacity planning, utilization measurement, data center expansion

D. Data center expansion, TCO calculations, utilization measurement

ANSWER4:

B

Notes/References4:

B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.

Reference: Cloud Economics




Question 5: For this question, refer to the Company A case study.

You analyzed Company A’s business requirement to reduce downtime and found that they can achieve the majority of the time savings by reducing customers’ wait time for parts. You decided to focus on reducing the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?

A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

ANSWER5:

C

Notes/References5:

C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.

Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?

A. RPM/DEB

B. Containers 

C. Chef/Puppet

D. Virtual machines

ANSWER6:

B

Notes/References6:

B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.

Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?

A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device specific table. 

C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.

ANSWER7:

C

Notes/References7:

C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
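
A minimal sketch of option C; the topic, subscription, and message contents are hypothetical:

# One shared topic for all devices, plus a subscription for the ingestion pipeline:
gcloud pubsub topics create room-sensors
gcloud pubsub subscriptions create room-sensors-ingest --topic=room-sensors
# Each device publishes its latest status whenever it has connectivity:
gcloud pubsub topics publish room-sensors --message='{"room":"B-204","occupied":true}' --attribute=device_id=sensor-0042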




Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?

A. Load logs into BigQuery. 

B. Load logs into Cloud SQL.

C. Import logs into Stackdriver. 

D. Insert logs into Cloud Bigtable.

E. Upload log files into Cloud Storage.

ANSWER8:

A and E

Notes/References8:

A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.

E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.

References: BigQuery · StackDriver · BigTable · Storage Class: Coldline
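
A minimal sketch of options E and A together; the bucket, dataset, and file names are hypothetical:

# Archive the raw logs in the Coldline storage class for long-term DR:
gsutil mb -c coldline -l us gs://example-log-archive
gsutil -m cp -r ./logs gs://example-log-archive/
# Load the logs into BigQuery to test the analytics features:
bq mk log_analytics
bq load --autodetect --source_format=CSV log_analytics.app_logs "gs://example-log-archive/logs/app-*.csv"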




Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. 

B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.

C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

ANSWER9:

C

Notes/References9:

C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.

Reference: Load balancing · Load Balancing Health Checking
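
A minimal sketch of option C. The documented source ranges for Google load balancer health checks are 130.211.0.0/22 and 35.191.0.0/16; the rule name, network, port, and target tag are hypothetical:

gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default --action=ALLOW --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend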

Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

A. Add each tier to a different subnetwork.

B. Set up software-based firewalls on individual VMs. 

C. Add tags to each tier and set up routes to allow the desired traffic flow.

D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

ANSWER10:

D

Notes/References10:

D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.

Reference: Using VPC · Routes · Add Remove Network
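
A minimal sketch of option D, assuming the tiers are tagged web, api, and db (all names and ports are hypothetical):

gcloud compute firewall-rules create web-to-api \
    --network=default --action=ALLOW --rules=tcp:8080 \
    --source-tags=web --target-tags=api
gcloud compute firewall-rules create api-to-db \
    --network=default --action=ALLOW --rules=tcp:3306 \
    --source-tags=api --target-tags=db
# No rule allows web-to-db traffic, so the implied deny-ingress rule blocks it.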




Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?

A. Use gsutil.

B. Use gcloud.

C. Use GCS REST API. 

D. Use Storage Transfer Service.

ANSWER11:

A

Notes/References11:

A is correct because gsutil gives you access to write data to Cloud Storage.

Reference: gsutil · gcloud SDK · Cloud Storage JSON API · Uploading objects · Storage Transfer
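
A minimal sketch of option A; the source path and bucket are hypothetical. The -m flag parallelizes the transfer, and raising the parallel composite upload threshold speeds up large files:

gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M \
    cp -r /data/private-dataset gs://example-migration-bucket/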

Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

A. Encrypt the message client-side using block-based encryption with a shared key.

B. Tag messages client-side with the originating user identifier and the destination user.

C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server. 

D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.

ANSWER12:

D

Notes/References12:

D is correct because PKI requires that both the server and the client have signed certificates, validating both the client and the server.

Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?

A. In the source code

B. In an environment variable 

C. In a key management system

D. In a config file that has restricted access through ACLs

ANSWER13:

C

Notes/References13:

C is correct because a key management system (such as Cloud KMS or Secret Manager) stores credentials encrypted, with access controlled and audited through IAM, so secrets stay out of source code, config files, and environment variables.
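
A minimal sketch using Secret Manager as the key/secret management system; the secret name, project, and service account are hypothetical:

# Store the database credential once:
echo -n "db-password-value" | gcloud secrets create db-credentials --data-file=-
# Grant each microservice's service account read-only access:
gcloud secrets add-iam-policy-binding db-credentials \
    --member="serviceAccount:orders-svc@example-project.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
# At startup, a service fetches the credential:
gcloud secrets versions access latest --secret=db-credentials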




Question 14: For this question, refer to the Company B case study.

Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL

B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery 

C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

ANSWER14:

B

Notes/References14:

B is correct because:
Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
Cloud Pub/Sub can ingest the streaming data from the mobile users.
BigQuery can query more than 10 TB of historical data.

References: GCP Quotas · Beam Apache Windowing · Beam Apache Triggers · BigQuery External Data Solutions · Apache Hive on Cloud Dataproc

Question 15: For this question, refer to the Company B case study.

Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load.

B. Use the existing infrastructure to test the GCP-based backend at scale.

C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.

D. Create a set of static environments in GCP to test different levels of load (for example: high, medium, and low).

ANSWER15:

A

Notes/References15:

A is correct because simulating production load in GCP can scale in an economical way.

Reference: Load Testing IoT using GCP and Locust · Distributed Load Testing Using Kubernetes

Question 16: For this question, refer to the Company B case study.

Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:

  • Services are deployed redundantly across multiple regions in the US and Europe
  • Only frontend services are exposed on the public internet.
  • They can reserve a single frontend IP for their fleet of services.
  • Deployment artifacts are immutable

Which set of products should they use?

A. Cloud Storage, Cloud Dataflow, Compute Engine

B. Cloud Storage, App Engine, Cloud Load Balancing

C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing

D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager

ANSWER16:

C

Notes/References16:

C is correct because:
Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Company B wants to expose.
Container Registry is a single place for a team to manage Docker images for the services.

References: Load Balancing HTTPS · Load balancing overview · GCP LB global forwarding rules · Reserve static external IP address · Best practices for operating containers · Container Registry · Dataflow · Calling HTTPS
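
A minimal sketch of the deploy/rollback flow in option C; the image name, project, and deployment are hypothetical:

# Build and push an immutable, versioned image to Container Registry:
docker build -t gcr.io/example-project/frontend:v1.2.3 .
docker push gcr.io/example-project/frontend:v1.2.3
# Roll the new version out on GKE, and roll back quickly if needed:
kubectl set image deployment/frontend frontend=gcr.io/example-project/frontend:v1.2.3
kubectl rollout undo deployment/frontend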

Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

A. Org viewer, Project owner

B. Org viewer, Project viewer 

C. Org admin, Project browser

D. Project owner, Network admin

ANSWER17:

B

Notes/References17:

B is correct because:
Org viewer grants the security team permissions to view the organization's display name.
Project viewer grants the security team permissions to see the resources within projects.

Reference: GCP Resource Manager – User Roles
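
A minimal sketch of granting option B's roles; the organization ID, project ID, and group are hypothetical:

gcloud organizations add-iam-policy-binding 123456789012 \
    --member="group:security-team@example.com" \
    --role="roles/resourcemanager.organizationViewer"
gcloud projects add-iam-policy-binding example-project \
    --member="group:security-team@example.com" \
    --role="roles/viewer"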

Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?

A. Use persistent disks to store the state. Start and stop the VM as needed. 

B. Use the –auto-delete flag on all persistent disks before stopping the VM. 

C. Apply VM CPU utilization label and include it in the BigQuery billing export.

D. Use BigQuery billing export and labels to relate cost to groups. 

E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.

F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.

ANSWER18:

A and D

Notes/References18:

A is correct because persistent disks will not be deleted when an instance is stopped.

D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.

References: GCP instances life cycle · GCP instances set disk auto-delete · GCP Local Data Persistence · GCP export data to BigQuery · GCP Creating and Managing Labels
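
A minimal sketch of the reporting side of option D: query the BigQuery billing export and group cost by a "team" label. The project, dataset, table suffix, and label key are hypothetical:

bq query --use_legacy_sql=false '
SELECT
  (SELECT l.value FROM UNNEST(labels) AS l WHERE l.key = "team") AS team,
  SUM(cost) AS total_cost
FROM `example-project.billing.gcp_billing_export_v1_XXXXXX`
GROUP BY team
ORDER BY total_cost DESC'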




Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?

A. Configure a new load balancer for the new version of the API.

B. Reconfigure old clients to use a new endpoint for the new API. 

C. Have the old API forward traffic to the new API based on the path.

D. Use separate backend services for each API path behind the load balancer.

ANSWER19:

D

Notes/References19:

D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.

References: Load balancing HTTPS · Load balancing backend · GCP LB global forwarding rules
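
A minimal sketch of option D, routing /v2/* to a separate backend service behind the same load balancer; all resource names are hypothetical:

gcloud compute url-maps add-path-matcher api-lb-url-map \
    --path-matcher-name=api-versions \
    --default-service=api-v1-backend \
    --path-rules="/v2/*=api-v2-backend" \
    --new-hosts="*"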

Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?

A. Increase the virtual machine’s memory to 64 GB.

B. Create a new virtual machine running PostgreSQL. 

C. Dynamically resize the SSD persistent disk to 500 GB.

D. Migrate their performance metrics warehouse to BigQuery.

ANSWER20:

C

Notes/References20:

C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Incrementing the persistent disk capacity will increment its throughput and IOPS, which in turn improve the performance of MySQL.

References: GCP compute disks PD specs · GCP Compute Disks Performance
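
A minimal sketch of option C; the disk name and zone are hypothetical. Zonal SSD persistent disk IOPS and throughput scale with provisioned capacity:

gcloud compute disks resize mysql-data-disk --size=500GB --zone=us-central1-a
# Then grow the filesystem inside the VM, e.g. sudo resize2fs /dev/sdb1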




Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Google’s Cloud CDN.

B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

C. Do nothing.

D. Use global BigTable storage.

E. Use a global Cloud Spanner instance.

F. Migrate the data to a new multi-regional GCS bucket.

G. Change the storage class to multi-regional.

ANSWER21:

A

Notes/References21:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created, not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?

A. Python, Flask, App Engine Standard

B. Ruby, Nginx, GKE

C. HTML, CSS, Cloud Storage

D. Node.js, Express, Cloud Functions

E. Rust, Rocket, App Engine Flex

F. Perl, CGI, GCE

ANSWER22:

A

Notes/References22:

The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too. 

Reference: Building a Python 3.7 App on App Engine

Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?

A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.

C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER23:

C

Notes/References23:

Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP-side of this in synch. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

References: Cloud Solutions Architecture Reference


Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?

A. Pause all Cloud Functions via the UI and unpause them when work starts back up.

B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.

C. Delete all Cloud Functions and recreate them when work starts back up.

D. Convert all Cloud Functions to run as App Engine Standard applications during the break.

E. None of these options is a good suggestion.

ANSWER24:

E

Notes/References24:

Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.

Question 25: You need a place to store images before updating them by file-based render farm software running on a cluster of machines. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Cloud Filestore

D. Persistent Disk

ANSWER25:

C

Notes/References25:

There are several different kinds of “images” that you might need to consider: maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to visual images, thus eliminating CI/CD products like Container Registry. The term “file-based” software means that it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only handles multiple readers. However, Cloud Filestore is made to provide shared, file-based storage for a cluster of machines as described in the question.

Reference: Cloud Filestore | Google Cloud

Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?

A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER26:

A

Notes/References26:

The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn't also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

Reference: Cloud Solutions Architecture Reference


Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?

A. Cloud Storage

B. Logstash

C. Cloud Monitoring

D. Cloud Logging

E. BigQuery

F. BigTable

ANSWER27:

D

Notes/References27:

Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on. 

Reference: Cloud Logging | Google Cloud

Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?

A. Compute Engine

B. Cloud Filestore

C. Cloud Storage

D. Persistent Disk

E. Container Registry

F. Cloud Source Repositories

G. Cloud Build

H. Nearline

ANSWER28:

C

Notes/References28:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t actually for storage, at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Coldline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served by AppEngine Standard, we would prefer to use the Standard storage class over the Coldline one–so there’s our answer. 

Reference: The App Engine Standard Environment · Cloud Storage: Object Storage | Google Cloud · Storage classes | Cloud Storage | Google Cloud

Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?

A. Use a global Cloud Spanner instance.

B. Change the storage class to multi-regional.

C. Use Google’s Cloud CDN.

D. Migrate the data to a new regional GCS bucket.

E. Do nothing.

F. Use global BigTable storage.

ANSWER29:

E

Notes/References29:

BigTable does not have any “global” mode, so that option is wrong. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created, not via the storage class nor any other way; you would have to migrate the data to a new bucket. But migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests. Because the access per object is so low, Cloud CDN won’t really help. This then brings us back to the question. Now, it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency, and we are already using what would be the best fit for this situation: a multi-regional GCS bucket.

Reference: Google Cloud Storage : What bucket class for the best performance?


Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

B. Use Google’s Cloud CDN.

C. Use global BigTable storage.

D. Do nothing.

E. Migrate the data to a new multi-regional GCS bucket.

F. Use a global Cloud Spanner instance.

ANSWER30:

E

Notes/References30:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the data use cases are different enough that we can assume it would probably not be a good fit, and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created, not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. So even if you wanted to use Cloud CDN, you would have to migrate the data into a GCS bucket first, which makes migration the better option.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?

A. Six

B. One

C. Three

D. Four

E. Two

F. Nine

ANSWER31:

A

Notes/References31:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud · Best practices and reference architectures for VPC design | Solutions
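
A minimal sketch of the six subnets; the network name, regions, and CIDR ranges are hypothetical:

gcloud compute networks create prod-vpc --subnet-mode=custom
# One subnet per tier per region, e.g. for us-east1:
gcloud compute networks subnets create frontend-us-east1 \
    --network=prod-vpc --region=us-east1 --range=10.0.1.0/24
gcloud compute networks subnets create backend-us-east1 \
    --network=prod-vpc --region=us-east1 --range=10.0.2.0/24
# Repeat for the other two regions with non-overlapping ranges: six subnets total.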


Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Persistent Disk

D. Nearline

E. Cloud Source Repositories

F. Cloud Build

G. Cloud Filestore

H. Compute Engine

ANSWER32:

F

Notes/References32:

There are several different kinds of “images” that you might need to consider: maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images. Although the built images would likely be stored in Container Registry, this question asks where the building might happen, and that is Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images, though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse.

Reference: Google App Engine flexible environment docs | Google Cloud · Container Registry | Google Cloud

Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?

A. Two

B. One

C. Three

D. Nine

E. Four

F. Six

ANSWER33:

D

Notes/References33:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud · Best practices and reference architectures for VPC design | Solutions

Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?

A. Cloud Filestore

B. Coldline

C. Nearline

D. Persistent Disk

E. Cloud Source Repositories

F. Cloud Storage

G. Container Registry

ANSWER34:

B

Notes/References34:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. But even still, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than 100% chance of being audited), the right storage class is Coldline. 

Reference: Cloud Storage: Object Storage | Google Cloud · Storage classes | Cloud Storage | Google Cloud


Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Filestore

C. Cloud Source Repositories

D. Persistent Disk

E. Cloud Storage

F. Cloud Build

G. Nearline

ANSWER35:

A

Notes/References35:

There are several different kinds of “images” that you might need to consider: maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would likely have been stored in the Container Registry.

Reference: Google App Engine flexible environment docs | Google Cloud · Container Registry | Google Cloud

Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?

A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.

B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.

C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.

D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.

E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.

ANSWER36:

C

Notes/References36:

You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor's own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you'll need to grant securityAdmin, not networkViewer. 

Reference: VPC network overview | Google Cloud · Best practices and reference architectures for VPC design | Solutions · Understanding roles | Cloud IAM Documentation | Google Cloud
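
A minimal sketch of option C; the service account, project, and key file are hypothetical:

gcloud iam service-accounts create saas-security-app --display-name="SaaS security app"
gcloud projects add-iam-policy-binding example-prod-project \
    --member="serviceAccount:saas-security-app@example-prod-project.iam.gserviceaccount.com" \
    --role="roles/compute.securityAdmin"
# Export a key for the vendor's configuration, and rotate it regularly:
gcloud iam service-accounts keys create key.json \
    --iam-account=saas-security-app@example-prod-project.iam.gserviceaccount.com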

Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?

A. One

B. Six

C. Four

D. Three

E. Nine

F. Two

ANSWER37:

F

Notes/References37:

A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets. 

Reference: VPC network overview | Google Cloud · Best practices and reference architectures for VPC design | Solutions


Question 38: You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?

A. Binary Authorization

B. Cloud IAM

C. Security Key Enforcement

D. Cloud SCC

E. Cloud KMS

ANSWER38:

A

Notes/References38:

Cloud KMS is Google's product for managing encryption keys. Security Key Enforcement is about making sure that people's accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud DLP–or Data Loss Prevention–is for preventing data loss by scanning for and redacting sensitive information. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments. 

Reference: Cloud Key Management Service | Google Cloud · Cloud IAM | Google Cloud · Cloud Data Loss Prevention | Google Cloud · Security Command Center | Google Cloud · Binary Authorization | Google Cloud · Security Key Enforcement – 2FA

Question 39: For this question, refer to the Company B case study. Which of the following are most likely to impact the operations of Company B’s game backend and analytics systems?

A. PCI

B. PII

C. SOX

D. GDPR

E. HIPAA

ANSWER39:

B and D

Notes/References39:

There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally-Identifying Information (PII) about the users who may reside in the European Union and therefore the EU’s General Data Protection Regulations (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”. 

Reference: Sarbanes–Oxley Act – Wikipedia · Payment Card Industry Data Security Standard – Wikipedia · Personal data – Wikipedia

Question 40: Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?

A. Their customers located in the EU may require them to delete their user data and provide evidence of such.

B. They will also need to pass a SOX audit.

C. They handle money-linked information.

D. Their system deals with medical information.

ANSWER40:

D

Notes/References40:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 41: Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?

A. They handle money-linked information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. Their system deals with medical information.

D. They will also need to pass a SOX audit.

ANSWER41:

A

Notes/References41:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). ISO is the International Standards Organization, and since they have so many completely different certifications, this does not tell you much. 

Reference: Cloud Compliance & Regulations Resources | Google Cloud


Question 43: Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?

A. Their system deals with medical information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. They will also need to pass a SOX audit.

D. They handle money-linked information.

ANSWER43:

B

Notes/References43:

SOX stands for Sarbanes Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals' (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). 

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 44: For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?

A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud

B. Lift and shift all servers at one time

C. Lift and shift one application at a time

D. Lift and shift one server at a time

E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system

F. Enact their disaster recovery plan and fail over

ANSWER44:

F

Notes/References44:

The proposed Lift and Shift options are all talking about different situations than Company C would find themselves in at that time: they’d then have automation to build a complete prod system in the cloud, but they’d just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely seems like it wins. Setting up an additional load balancer and migrating slowly/carefully would take more time.

Reference: Strangler pattern – Cloud Design Patterns | Microsoft Docs · StranglerFigApplication · Monolith to Microservices Using the Strangler Pattern – DZone Microservices · Understanding Lift and Shift and If It’s Right For You

Question 45: Which of the following commands is most likely to appear in an environment setup script?

A. gsutil mb -l asia gs://${project_id}-logs

B. gcloud compute instances create --zone=${zone} --machine-type=n1-highmem-16 newvm

C. gcloud compute instances create --zone=${zone} --machine-type=f1-micro newvm

D. gcloud compute ssh ${instance_id}

E. gsutil cp -r gs://${project_id}-setup ./install

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

ANSWER45:

A

Notes/References45:

The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You're not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances. 

Reference: mb – Make buckets | Cloud Storage | Google Cloud · cp – Copy files and objects | Cloud Storage | Google Cloud · gcloud compute instances | Cloud SDK Documentation | Google Cloud
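
To make the context concrete, here is a sketch of how option A might sit in a per-environment setup script; the config and setup bucket names are hypothetical additions alongside the logs bucket named in the option:

project_id=$(gcloud config get-value project)
gsutil mb -l asia gs://${project_id}-logs
gsutil mb -l asia gs://${project_id}-config
gsutil mb -l asia gs://${project_id}-setup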

Question 46: Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?

A. /dealerLocations/get

B. /dealerLocations

C. /dealerLocations/list

D. Source and destination

E. /getDealerLocations

ANSWER46:

B

Notes/References46:

It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it's important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint. 
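
To illustrate (the host, resource ID, and JSON payload below are invented for the example), here is how the HTTP verbs act on that single noun endpoint:

# Retrieve the whole collection: no "/get" or "/list" suffix needed
curl -X GET https://api.example.com/dealerLocations

# Retrieve one resource by ID
curl -X GET https://api.example.com/dealerLocations/42

# Create a new resource under the same noun
curl -X POST https://api.example.com/dealerLocations \
  -H "Content-Type: application/json" \
  -d '{"name": "Downtown Dealer", "city": "Springfield"}'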

Reference: RESTful API Design — Step By Step Guide


Question 47: Which of the following commands is most likely to appear in an instance shutdown script?

A. gsutil cp -r gs://${project_id}-setup ./install

B. gcloud compute instances create --zone=${zone} --machine-type=n1-highmem-16 newvm

C. gcloud compute ssh ${instance_id}

D. gsutil mb -l asia gs://${project_id}-logs

E. gcloud compute instances delete ${instance_id}

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

G. gcloud compute instances create --zone=${zone} --machine-type=f1-micro newvm

ANSWER47:

F

Notes/References47:

The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (SSHed) to. They would also be a very unusual time to make a Cloud Storage bucket, since buckets are the overall, highly scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown is a time when you'd want to copy some final logs from the instance into a project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously, in case the instance shuts down unexpectedly and not in the orderly fashion that runs your shutdown script.)
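
As a rough sketch of that final-log copy (the log directory and the bucket naming convention are assumptions for illustration), a shutdown script might look like this:

#!/usr/bin/env bash
# Hypothetical shutdown script: push remaining logs to a project-wide
# bucket, namespaced by this instance's ID (from the metadata server).
set -euo pipefail
project_id=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/project-id")
instance_id=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/id")

gsutil cp -r /var/log/myapp/* "gs://${project_id}-logs/${instance_id}/"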

Reference: Running startup scripts | Compute Engine Documentation | Google Cloud; Running shutdown scripts | Compute Engine Documentation | Google Cloud; cp – Copy files and objects | Cloud Storage | Google Cloud; gcloud compute instances | Cloud SDK Documentation | Google Cloud

Question 48: It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?

A. Escalate the decision to the business manager responsible for the SLA

B. Take the system offline

C. Revert the system to the state it was in on Friday morning

D. Investigate the cause of the issue

ANSWER48:

B

Notes/References48:

The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that corruption started.) It might seem crazy, but you should stop the system as quickly as possible so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should do that next, during the disaster-response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn't make any sense, and neither does knee-jerk reverting the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we'd better assume that "revert the system" refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO.

Reference: Disaster recovery – Wikipedia

Question 49: Which of the following are not processes or practices that you would associate with DevOps?

A. Raven-test the candidate

B. Obfuscate the code

C. Only one of the other options is made up

D. Run the code in your cardinal environment

E. Do a canary deploy

ANSWER49:

A and D

Notes/References49:

This question tests your understanding of both the development and the operations sides of DevOps. In particular, you need to know that a canary deploy is a real thing, and it can be very useful to identify problems with a new change before it is fully rolled out to (and therefore impacts) everyone. You should also understand that "obfuscating" code is a real part of a release process that seeks to protect an organization's source code from theft (by making it unreadable by humans) and usually happens in combination with "minification" (which improves the speed of downloading and interpreting/running the code). On the other hand, "raven-testing" isn't a thing, and neither is a "cardinal environment"; those bird references are just homages to canary deployments.

Reference: Intro to deployment strategies: blue-green, canary, and more – DEV Community

Question 50: Your CTO is going into budget meetings with the board next month and has asked you to draw up plans to optimize your GCP-based systems for CapEx. Which of the following options will you prioritize in your proposal?

A. Object lifecycle management

B. BigQuery Slots

C. Committed use discounts

D. Sustained use discounts

E. Managed instance group autoscaling

F. Pub/Sub topic centralization

ANSWER50:

B and C

Notes/References50:

Pub/Sub usage is based on how much data you send through it, not any sort of "topic centralization" (which isn't really a thing). Sustained use discounts can reduce costs, but they are applied automatically and aren't something you structure your spending around. Now, most organizations prefer to turn capital expenditures into operational expenses, but since this question asks you to prioritize CapEx, we need to consider the remaining options from the perspective of "spending" (or reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: you may still have some trouble classifying some cloud expenses as "capital" expenditures.) With that in mind, GCE's committed use discounts fit: you "buy" (reserve/prepay) some instances ahead of time and then don't pay (again) as you use them (or don't use them; you've already paid either way). BigQuery slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity, and your queries use that instead of on-demand capacity. That means you won't pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about CapEx.
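
For context, a committed use discount is itself "bought" up-front with a single command; the commitment name, region, and resource amounts below are hypothetical:

# Hypothetical: commit to 8 vCPUs and 32GB of RAM in us-central1 for a
# year, paid for whether or not the capacity gets used.
gcloud compute commitments create my-commitment \
  --plan=12-month \
  --region=us-central1 \
  --resources=vcpu=8,memory=32GB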

Reference: CapEx vs OpEx: Capital Expenses and Operating Expenses Explained – BMC Blogs; Sustained use discounts | Compute Engine Documentation | Google Cloud; Committed use discounts | Compute Engine Documentation | Google Cloud; Slots | BigQuery | Google Cloud; Autoscaling groups of instances | Compute Engine Documentation; Object Lifecycle Management | Cloud Storage | Google Cloud


Question 51: In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns on Monday?

A. The scrum master is the one who decides

B. The lead architect should get the final say

C. The product owner should get the final say

D. You should put it to a vote of key stakeholders

E. You should put it to a vote of all stakeholders

ANSWER51:

C

Notes/References51:

In Scrum, it is the Product Owner's role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven't ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context. 

Reference: Scrum Guide | Scrum Guides

Question 52: Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?

A. Split testing

B. Red-Black

C. A/B

D. Canary

E. Rolling

F. Blue-Green

G. Flex downtime

ANSWER52:

D and E

Notes/References52:

A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application. 
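
With a GCE managed instance group, both phases of that plan can be expressed with the same command family; the group and template names below are hypothetical:

# Phase 1 (canary): route ~10% of the group to the new template and
# observe it for the two-hour evaluation window.
gcloud compute instance-groups managed rolling-action start-update my-mig \
  --zone=us-central1-a \
  --version=template=app-template-v1 \
  --canary-version=template=app-template-v2,target-size=10%

# Phase 2 (rolling): once the canary looks healthy, roll the new
# template out to the whole group, in place.
gcloud compute instance-groups managed rolling-action start-update my-mig \
  --zone=us-central1-a \
  --version=template=app-template-v2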

Reference: BlueGreenDeployment; design patterns – What's the difference between Red/Black deployment and Blue/Green Deployment? – Stack Overflow; What is rolling deployment? – Definition from WhatIs.com; A/B testing – Wikipedia

Question 53: You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?

A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on

B. Hiring and retaining 10X developers is critical to project success

C. A key goal of a proper post-mortem is to identify what processes need to be changed

D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project

E. A key goal of a proper post-mortem is to determine who needs additional training

ANSWER53:

A and C

Notes/References53:

There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future. 

Reference: Google – Site Reliability Engineering; The Cone of Uncertainty

Question 54: Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. The next month’s SLO will be reduced.

C. Your client(s) will have to pay you extra.

D. You will have to pay your client(s).

E. There is no impact on payments.

F. There is not enough information to make a determination.

ANSWER54:

D

Notes/References54:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed in the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month's performance; such changes would have to happen through rather different processes. Percentiles stack: p99 latency is always at least as high as p95, and p95 at least as high as p90. So here, a measured p95 of 250ms means the p99 must also be at least 250ms, well over the 200ms the SLA requires: the SLA was missed, and the penalty clause applies.
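
If it helps to see why the percentiles stack this way, here is a small sketch that computes p90/p95/p99 from a file of request latencies (latencies.txt, one value in ms per line, is hypothetical):

# Sort the samples, then pick the value at each percentile rank.
sort -n latencies.txt -o latencies.sorted
total=$(wc -l < latencies.sorted)
for p in 90 95 99; do
  rank=$(( (total * p + 99) / 100 ))   # ceiling of total*p/100
  echo "p${p}: $(sed -n "${rank}p" latencies.sorted) ms"
done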

Reference: What's the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube; Percentile rank – Wikipedia


Question 55: Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. There is no impact on payments.

C. There is not enough information to make a determination.

D. Your client(s) will have to pay you extra.

E. The next month’s SLO will be reduced.

F. You will have to pay your client(s).

ANSWER55:

B

Notes/References55:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed in an SLA, and in any case this question describes an SLO, not an SLA. While SLAs are external-facing and set out penalties (i.e. you pay the client) for below-standard performance, SLOs are internal-facing and do not generally relate to performance penalties at all, so payments are unaffected either way. Neither SLAs nor SLOs are adaptively changed just because of one month's performance; such changes would have to happen through rather different processes. (Note also that a measured p95 of 250ms doesn't even tell us whether the p90 SLO of 200ms was met, since p90 can be lower than p95.)

Reference: What's the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube; Percentile rank – Wikipedia

Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?

A. Configure the autoscaling thresholds to follow changing load

B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand

C. Run cron jobs on their application servers to scale down at night and up in the morning

D. Use Cloud Load Balancing to balance the traffic highs and lows

E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning

F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace

ANSWER56:

A

Notes/References56:

The case study notes, "Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle." Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn't a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers, and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE; in practice, you'd just use one or the other. It is possible to effect scaling on a schedule via automated jobs in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group's autoscaling to follow demand curves, both expected and unexpected. A properly architected system should rise to the occasion of unexpectedly going viral, and not fall over.
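
The autoscaling configuration itself is a one-liner on the managed instance group; the group name and thresholds below are hypothetical:

# Hypothetical thresholds: follow demand between 2 and 20 instances,
# targeting 60% average CPU utilization across the group.
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --min-num-replicas=2 \
  --max-num-replicas=20 \
  --target-cpu-utilization=0.6 \
  --cool-down-period=90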

Reference: Load Balancing | Google Cloud; Google Cloud Platform Marketplace Solutions; Cloud Functions | Google Cloud; Cloud Scheduler | Google Cloud


Google Cloud Latest News, Questions and Answers online:

Cloud Run vs App Engine: In a nutshell, you give Google's Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint, and all the scaling is done automatically for you. Cloud Run requires your application to be stateless, because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application, this means you should divide it into a stateless API and a frontend app.

With Google's App Engine, you tell Google how your app should be run. App Engine will create and run a container from these instructions. Deploying with App Engine is super easy: you simply fill out an app.yaml file and Google handles everything for you.

With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers; App Engine is made for developers. Read more here…
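
To make the comparison concrete, here is roughly what each deployment looks like; the service name, image path, and region are made up for the example:

# Cloud Run: you supply a container image and Google runs it behind an
# auto-scaling HTTPS endpoint.
gcloud run deploy my-service \
  --image=gcr.io/my-sample-project/my-app:latest \
  --platform=managed \
  --region=us-central1 \
  --allow-unauthenticated

# App Engine: you supply instructions (app.yaml) and Google builds and
# runs everything for you.
gcloud app deploy app.yaml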


Cloud Run vs Cloud Functions: What to consider?

The best choice depends on what you want to opti