Top 100 Data Science and Data Analytics Interview Questions and Answers


Below are the Top 100 Data Science and Data Analytics interview questions and answers.

What is Data Science? 

Data Science is a blend of various tools, algorithms, and machine learning principles with the goal to discover hidden patterns from the raw data. How is this different from what statisticians have been doing for years? The answer lies in the difference between explaining and predicting: statisticians work a posteriori, explaining the results and designing a plan; data scientists use historical data to make predictions.

How does data cleaning play a vital role in the analysis? 

Data cleaning can help in analysis because:

  • Cleaning data from multiple sources helps transform it into a format that data analysts or data scientists can work with.
  • Data Cleaning helps increase the accuracy of the model in machine learning.
  • It is a cumbersome process because as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data generated by these sources.
  • Data cleaning can take up to 80% of the time spent on a project, making it a critical part of the analysis task.

What is linear regression? What do the terms p-value, coefficient, and r-squared value mean? What is the significance of each of these components?

Reference  

Imagine you want to predict the price of a house. That will depend on some factors, called independent variables, such as location, size, year of construction… if we assume there is a linear relationship between these variables and the price (our dependent variable), then our price is predicted by the following function: Y = a + bX
The p-value in the regression table is the smallest significance level (α) at which the coefficient is statistically significant. The lower the p-value, the more important the variable is in predicting the price. Usually we set a 5% level, so that we have 95% confidence that our variable is relevant.
The p-value is used as an alternative to rejection points to provide the smallest level of significance at which the null hypothesis would be rejected. A smaller p-value means that there is stronger evidence in favor of the alternative hypothesis.
The coefficient value signifies how much the mean of the dependent variable changes given a one-unit shift in the independent variable while holding other variables in the model constant. This property of holding the other variables constant is crucial because it allows you to assess the effect of each variable in isolation from the others.
R squared (R2) is a statistical measure that represents the proportion of the variance for a dependent variable that’s explained by an independent variable or variables in a regression model.
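
As an illustration only (not part of the original answer), here is a minimal sketch with statsmodels on synthetic house data; the feature names and numbers are made up:

```python
# A minimal sketch: fit ordinary least squares and read off the coefficients,
# p-values and R-squared discussed above (synthetic data, illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
size = rng.uniform(50, 250, 200)             # house size in square meters (synthetic)
age = rng.uniform(0, 50, 200)                # years since construction (synthetic)
price = 50_000 + 1_200 * size - 800 * age + rng.normal(0, 20_000, 200)

X = sm.add_constant(np.column_stack([size, age]))   # adds the intercept term a
model = sm.OLS(price, X).fit()

print(model.params)     # coefficients: a (const), b_size, b_age
print(model.pvalues)    # p-value for each coefficient
print(model.rsquared)   # proportion of variance explained
```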

Credit: Steve Nouri

What is sampling? How many sampling methods do you know? 


Reference

Data sampling is a statistical analysis technique used to select, manipulate and analyze a representative subset of data points to identify patterns and trends in the larger data set being examined. It enables data scientists, predictive modelers and other data analysts to work with a small, manageable amount of data about a statistical population to build and run analytical models more quickly, while still producing accurate findings.

Sampling can be particularly useful with data sets that are too large to efficiently analyze in full – for example, in big data analytics applications or surveys. Identifying and analyzing a representative sample is more efficient and cost-effective than surveying the entirety of the data or population.
An important consideration, though, is the size of the required data sample and the possibility of introducing a sampling error. In some cases, a small sample can reveal the most important information about a data set. In others, using a larger sample can increase the likelihood of accurately representing the data as a whole, even though the increased size of the sample may impede ease of manipulation and interpretation.
There are many different methods for drawing samples from data; the ideal one depends on the data set and situation. Sampling can be based on probability, an approach that uses random numbers that correspond to points in the data set to ensure that there is no correlation between points chosen for the sample. Further variations in probability sampling include:

• Simple random sampling: Software is used to randomly select subjects from the whole population.
• Stratified sampling: Subsets of the data set or population (strata) are created based on a common factor, and samples are randomly collected from each subgroup. A sample is drawn from each stratum using a random sampling method such as simple random sampling or systematic sampling.
  o Example: Say you need a sample size of 6. Two members from each group (yellow, red, and blue) are selected randomly. Make sure to sample proportionally: in this simple example, 1/3 of each group (2/6 yellow, 2/6 red, and 2/6 blue) has been sampled. If one group is a different size, adjust your proportions accordingly: with 9 yellow, 3 red, and 3 blue, a 5-item sample would consist of 3/9 yellow (one third), 1/3 red, and 1/3 blue.
• Cluster sampling: The larger data set is divided into subsets (clusters) based on a defined factor, then a random sample of clusters is analyzed. The sampling unit is the whole cluster; instead of sampling individuals from within each group, the researcher studies whole clusters.
  o Example: The strata are natural groupings by head color (yellow, red, blue). A sample size of 6 is needed, so two of the complete strata are selected randomly (in this example, groups 2 and 4 are chosen).

Data Science Stratified Sampling – Cluster Sampling

  • Multistage sampling: A more complicated form of cluster sampling, this method also involves dividing the larger population into a number of clusters. Second-stage clusters are then broken out based on a secondary factor, and those clusters are then sampled and analyzed. This staging could continue as multiple subsets are identified, clustered and analyzed.
    • Systematic sampling: A sample is created by setting an interval at which to extract data from the larger population – for example, selecting every 10th row in a spreadsheet of 200 items to create a sample size of 20 rows to analyze.
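
To make the probability-based methods above concrete, here is a minimal pandas sketch on a small synthetic frame (column names and sizes are assumptions for illustration):

```python
# A minimal sketch of simple random, stratified, systematic and cluster sampling.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "group": np.repeat(["yellow", "red", "blue"], 20),
    "value": np.random.default_rng(1).normal(size=60),
})

simple_random = df.sample(n=6, random_state=1)          # pick rows uniformly at random

# Stratified: draw the same fraction from every group.
stratified = df.groupby("group", group_keys=False).apply(
    lambda g: g.sample(frac=0.1, random_state=1)
)

# Systematic: every k-th row of the ordered frame.
k = 10
systematic = df.iloc[::k]

# Cluster: pick whole groups at random and keep all of their rows.
chosen = np.random.default_rng(1).choice(df["group"].unique(), size=2, replace=False)
cluster = df[df["group"].isin(chosen)]
```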

Sampling can also be based on non-probability, an approach in which a data sample is determined and extracted based on the judgment of the analyst. As inclusion is determined by the analyst, it can be more difficult to extrapolate whether the sample accurately represents the larger population than when probability sampling is used.


Non-probability data sampling methods include:
• Convenience sampling: Data is collected from an easily accessible and available group.
• Consecutive sampling: Data is collected from every subject that meets the criteria until the predetermined sample size is met.
• Purposive or judgmental sampling: The researcher selects the data to sample based on predefined criteria.
• Quota sampling: The researcher ensures equal representation within the sample for all subgroups in the data set or population (random sampling is not used).

Quota sampling

Once generated, a sample can be used for predictive analytics. For example, a retail business might use data sampling to uncover patterns about customer behavior and predictive modeling to create more effective sales strategies.

Credit: Steve Nouri

What are the assumptions required for linear regression?

There are four major assumptions:

• There is a linear relationship between the dependent variable and the regressors, meaning the model you are creating actually fits the data.
• The errors or residuals of the data are normally distributed and independent from each other.
• There is minimal multicollinearity between explanatory variables.
• Homoscedasticity: the variance around the regression line is the same for all values of the predictor variable.

What is a statistical interaction?

Reference: Statistical Interaction

Basically, an interaction is when the effect of one factor (input variable) on the dependent variable (output variable) differs among levels of another factor. When two or more independent variables are involved in a research design, there is more to consider than simply the “main effect” of each of the independent variables (also termed “factors”). That is, the effect of one independent variable on the dependent variable of interest may not be the same at all levels of the other independent variable. Another way to put this is that the effect of one independent variable may depend on the level of the other independent variable. In order to find an interaction, you must have a factorial design, in which the two (or more) independent variables are “crossed” with one another so that there are observations at every combination of levels of the two independent variables. Example: stress level and amount of practice when memorizing words may interact, so that under high stress extra practice yields a smaller performance gain than expected.
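
A minimal sketch of testing an interaction with a statsmodels formula (the data, effect sizes, and column names are invented for illustration); the `stress * practice` term adds both main effects and their interaction:

```python
# Fit a model with an interaction term; the "stress:practice" row in the summary
# is the interaction effect described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "stress": rng.integers(0, 2, 200),        # 0 = low stress, 1 = high stress
    "practice": rng.uniform(0, 10, 200),      # hours of practice
})
# Practice helps less under high stress: a true interaction effect by construction.
df["score"] = 50 + 4 * df["practice"] - 3 * df["stress"] * df["practice"] \
              + rng.normal(0, 5, 200)

model = smf.ols("score ~ stress * practice", data=df).fit()
print(model.summary())
```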

What is selection bias? 

Reference

Selection (or ‘sampling’) bias occurs when the sample data that is gathered and prepared for modeling has characteristics that are not representative of the true, future population of cases the model will see.
That is, active selection bias occurs when a subset of the data is systematically (i.e., non-randomly) excluded from analysis.

Selection bias is a kind of error that occurs when the researcher decides what has to be studied. It is associated with research where the selection of participants is not random. Therefore, some conclusions of the study may not be accurate.

The types of selection bias include:
• Sampling bias: a systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample.
• Time interval: a trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.
• Data: specific subsets of data are chosen to support a conclusion, or bad data are rejected on arbitrary grounds instead of according to previously stated or generally agreed criteria.
• Attrition: attrition bias is a kind of selection bias caused by attrition (loss of participants), i.e., discounting trial subjects/tests that did not run to completion.

What is an example of a data set with a non-Gaussian distribution?

Reference

The Gaussian distribution is part of the Exponential family of distributions, but there are a lot more of them, with the same sort of ease of use, in many cases, and if the person doing the machine learning has a solid grounding in statistics, they can be utilized where appropriate.

• Binomial, Bin(n, p): e.g., multiple tosses of a coin. The binomial distribution gives the probability of each possible number of successes in n independent trials, each with probability p of success.
• Bernoulli: Bin(1, p) = Be(p)
• Poisson: Pois(λ)
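
A minimal NumPy sketch (illustrative parameters only) of drawing samples from these non-Gaussian distributions:

```python
# Draw samples from a few non-Gaussian members of the exponential family.
import numpy as np

rng = np.random.default_rng(42)
binomial = rng.binomial(n=10, p=0.3, size=1000)    # Bin(10, 0.3): a coin tossed 10 times
bernoulli = rng.binomial(n=1, p=0.3, size=1000)    # Bin(1, p) = Be(p)
poisson = rng.poisson(lam=4.0, size=1000)          # Pois(4): e.g. arrivals per hour

print(binomial.mean(), bernoulli.mean(), poisson.mean())   # ≈ n*p, p, λ
```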


What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is error introduced in the model due to a too complex algorithm, it performs very well in the training set but poorly in the test set. It can lead to high sensitivity and overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.
3. The decision tree has low bias and high variance, you can decrease the depth of the tree or use fewer attributes.
4. The linear regression has low variance and high bias, you can increase the number of features or use another regression that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.
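
As a rough illustration (synthetic data, not from the original answer), the trade-off for k-NN can be seen by sweeping k: a small k fits the training set almost perfectly (low bias, high variance), while a large k smooths the decision boundary (higher bias, lower variance):

```python
# Compare training and test accuracy of k-NN for several values of k.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 5, 25, 100):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(k, knn.score(X_train, y_train), knn.score(X_test, y_test))
```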

What is a confusion matrix?

The confusion matrix is a 2×2 table that contains the four outputs produced by a binary classifier.


A data set used for performance evaluation is called a test data set. It should contain the correct labels and the predicted labels. The predicted labels will be exactly the same as the correct labels if the performance of the binary classifier is perfect; in real-world scenarios, the predicted labels usually match only part of the observed labels.
A binary classifier predicts all data instances of a test data set as either positive or negative. This produces four outcomes: true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). Basic measures such as accuracy, precision and recall are derived from the confusion matrix.
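
A minimal scikit-learn sketch (made-up labels) of the four outcomes and the basic measures derived from them:

```python
# Compute the confusion matrix and the basic measures derived from it.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]    # correct labels of the test set
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]    # labels predicted by the classifier

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)              # a.k.a. sensitivity
print(tn, fp, fn, tp, accuracy, precision, recall)
```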

What is the difference between “long” and “wide” format data?

In the wide-format, a subject’s repeated responses will be in a single row, and each response is in a separate column. In the long-format, each row is a one-time point per subject. You can recognize data in wide format by the fact that columns generally represent groups (variables).

difference between “long” and “wide” format data
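
A minimal pandas sketch (tiny invented frame) of converting between the two formats:

```python
# Wide -> long with melt, and back to wide with pivot.
import pandas as pd

wide = pd.DataFrame({
    "subject": ["A", "B"],
    "t1": [5.1, 4.8],      # repeated responses in separate columns (wide format)
    "t2": [5.4, 5.0],
})

long = wide.melt(id_vars="subject", var_name="time", value_name="response")  # one row per time point
back_to_wide = long.pivot(index="subject", columns="time", values="response").reset_index()

print(long)
print(back_to_wide)
```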

What do you understand by the term Normal Distribution?

Data is usually distributed in different ways with a bias to the left or to the right or it can all be jumbled up. However, there are chances that data is distributed around a central value without any bias to the left or right and reaches normal distribution in the form of a bell-shaped curve.

Data Science: Normal Distribution

The random variables are distributed in the form of a symmetrical, bell-shaped curve. Properties of Normal Distribution are as follows:

1. Unimodal (Only one mode)
2. Symmetrical (left and right halves are mirror images)
3. Bell-shaped (maximum height (mode) at the mean)
4. Mean, Mode, and Median are all located in the center
5. Asymptotic

What is correlation and covariance in statistics?

Correlation measures how strongly two variables are related; it is the standard technique for measuring and estimating the quantitative relationship between two variables. Given two random variables, it is the covariance between them divided by the product of their standard deviations, hence it always lies between -1 and 1.

correlation and covariance

Covariance is a measure that indicates the extent to which two random variables change together. It explains the systematic relation between a pair of random variables, wherein a change in one variable is accompanied by a corresponding change in the other.

correlation and covariance in statistics
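
A minimal NumPy sketch (arbitrary numbers) showing that the correlation is the covariance divided by the product of the standard deviations:

```python
# Covariance, standard deviations, and the Pearson correlation.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov_xy = np.cov(x, y)[0, 1]                        # sample covariance
corr = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(cov_xy, corr, np.corrcoef(x, y)[0, 1])       # the last two values agree
```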

What is the difference between Point Estimates and Confidence Interval? 

Point Estimation gives us a particular value as an estimate of a population parameter. Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters.

A confidence interval gives us a range of values which is likely to contain the population parameter. The confidence interval is generally preferred, as it tells us how likely this interval is to contain the population parameter. This likeliness or probability is called the Confidence Level or Confidence Coefficient and is represented by 1 − α, where α is the level of significance.
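
A minimal SciPy sketch (synthetic sample) contrasting the point estimate with a 95% confidence interval for the mean:

```python
# Point estimate (sample mean) versus a 95% t-based confidence interval.
import numpy as np
from scipy import stats

sample = np.random.default_rng(0).normal(loc=100, scale=15, size=50)

point_estimate = sample.mean()
sem = stats.sem(sample)                                    # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1,
                                   loc=point_estimate, scale=sem)
print(point_estimate, (ci_low, ci_high))
```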

What is the goal of A/B Testing?

It is a hypothesis testing for a randomized experiment with two variables A and B.
The goal of A/B Testing is to identify any changes to the web page to maximize or increase the outcome of interest. A/B testing is a fantastic method for figuring out the best online promotional and marketing strategies for your business. It can be used to test everything from website copy to sales emails to search ads. An example of this could be identifying the click-through rate for a banner ad.
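
As an illustration with invented numbers (not from the original answer), a banner-ad A/B test on click-through rates is often evaluated with a two-proportion z-test, e.g. using statsmodels:

```python
# Two-proportion z-test for an A/B test on click-through rates.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

clicks = np.array([210, 255])        # clicks for variant A and variant B (hypothetical)
visitors = np.array([4000, 4000])    # visitors shown each variant (hypothetical)

stat, p_value = proportions_ztest(count=clicks, nobs=visitors)
print(stat, p_value)                 # a small p-value suggests the variants differ
```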

What is p-value?

When you perform a hypothesis test in statistics, a p-value can help you determine the strength of your results. p-value is the minimum significance level at which you can reject the null hypothesis. The lower the p-value, the more likely you reject the null hypothesis.

What do you understand by statistical power of sensitivity and how do you calculate it? 

Sensitivity (also known as recall or the true positive rate) is commonly used to validate the accuracy of a classifier (Logistic, SVM, Random Forest, etc.). Sensitivity = TP / (TP + FN).

Why is Re-sampling done?

A Gentle Introduction to Statistical Sampling and Resampling

  • Sampling is an active process of gathering observations with the intent of estimating a population variable.
  • Resampling is a methodology of economically using a data sample to improve the accuracy and quantify the uncertainty of a population parameter. Some resampling procedures even nest one resampling method inside another.

Once we have a data sample, it can be used to estimate the population parameter. The problem is that we only have a single estimate of the population parameter, with little idea of the variability or uncertainty in the estimate. One way to address this is by estimating the population parameter multiple times from our data sample. This is called resampling. Statistical resampling methods are procedures that describe how to economically use available data to estimate a population parameter. The result can be both a more accurate estimate of the parameter (such as taking the mean of the estimates) and a quantification of the uncertainty of the estimate (such as adding a confidence interval).

Resampling methods are very easy to use, requiring little mathematical knowledge. A downside of the methods is that they can be computationally very expensive, requiring tens, hundreds, or even thousands of resamples in order to develop a robust estimate of the population parameter.

The key idea is to resample from the original data — either directly or via a fitted model — to create replicate datasets, from which the variability of the quantities of interest can be assessed without long-winded and error-prone analytical calculation. Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Each new subsample from the original data sample is used to estimate the population parameter. The sample of estimated population parameters can then be considered with statistical tools in order to quantify the expected value and variance, providing measures of the uncertainty of the estimate. Statistical sampling methods can be used in the selection of a subsample from the original sample.

A key difference is that the process must be repeated multiple times. The problem is that there will be some relationship between the samples, as some observations will be shared across multiple subsamples. This means that the subsamples and the estimated population parameters are not strictly independent and identically distributed. This has implications for statistical tests performed on the sample of estimated population parameters downstream, i.e., paired statistical tests may be required.

Two commonly used resampling methods that you may encounter are k-fold cross-validation and the bootstrap.

  • Bootstrap. Samples are drawn from the dataset with replacement (allowing the same sample to appear more than once in the sample), where those instances not drawn into the data sample may be used for the test set.
  • k-fold Cross-Validation. A dataset is partitioned into k groups, where each group is given the opportunity of being used as a held out test set leaving the remaining groups as the training set. The k-fold cross-validation method specifically lends itself to use in the evaluation of predictive models that are repeatedly trained on one subset of the data and evaluated on a second held-out subset of the data.  

Resampling is done in any of these cases:

  • Estimating the accuracy of sample statistics by using subsets of accessible data or drawing randomly with replacement from a set of data points
  • Substituting labels on data points when performing significance tests
  • Validating models by using random subsets (bootstrapping, cross-validation)
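
A minimal sketch of the bootstrap (synthetic data), which makes the nested-resampling idea concrete: resample with replacement many times, estimate the parameter on each replicate, then summarize the estimates:

```python
# Bootstrap estimate of the mean and of its uncertainty.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=200)        # the original sample

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()   # one bootstrap replicate
    for _ in range(2000)
])

print(data.mean())                                 # single point estimate
print(np.percentile(boot_means, [2.5, 97.5]))      # bootstrap 95% interval
```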

What are the differences between over-fitting and under-fitting?

In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on general untrained data.

In overfitting, a statistical model describes random error or noise instead of the underlying relationship.
Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted, has poor predictive performance, as it overreacts to minor fluctuations in the training data.

Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data.
Such a model too would have poor predictive performance.

How to combat Overfitting and Underfitting?

To combat overfitting:
1. Add noise
2. Feature selection
3. Increase training set
4. L1 (lasso) or L2 (ridge) regularization; L1 can shrink some weights exactly to zero (implicit feature selection), while L2 only shrinks them toward zero
5. Use cross-validation techniques, such as k folds cross-validation
6. Boosting and bagging
7. Dropout technique
8. Perform early stopping
9. Remove inner layers
To combat underfitting:
1. Add features
2. Increase time of training

What is regularization? Why is it useful?

Regularization is the process of adding a tuning parameter (penalty term) to a model to induce smoothness in order to prevent overfitting. This is most often done by adding a constant multiple of a norm of the weight vector: the L1 norm (Lasso, proportional to |w|) or the squared L2 norm (Ridge, proportional to w²). The model predictions should then minimize the loss function calculated on the regularized training set.
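
A minimal scikit-learn sketch (synthetic data) showing the different behavior of the two penalties:

```python
# L1 (Lasso) can drive some coefficients exactly to zero; L2 (Ridge) only shrinks them.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # alpha is the penalty strength
ridge = Ridge(alpha=1.0).fit(X, y)

print(lasso.coef_)   # several coefficients are exactly 0
print(ridge.coef_)   # all coefficients are small but non-zero
```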

What Is the Law of Large Numbers? 

It is a theorem that describes the result of performing the same experiment a large number of times. This theorem forms the basis of frequency-style thinking. It says that the sample means, the sample variance and the sample standard deviation converge to what they are trying to estimate. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.
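
A minimal simulation (fair coin, synthetic) showing the running average converging to the expected value 0.5:

```python
# The running mean of coin flips approaches 0.5 as the number of trials grows.
import numpy as np

flips = np.random.default_rng(0).integers(0, 2, size=100_000)
running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)

for n in (10, 100, 1_000, 100_000):
    print(n, running_mean[n - 1])
```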

What Are Confounding Variables?

In statistics, a confounder is a variable that influences both the dependent variable and independent variable.

If you are researching whether a lack of exercise leads to weight gain:
lack of exercise = independent variable
weight gain = dependent variable
A confounding variable here would be any other variable that affects both of these variables, such as the age of the subject.

What is Survivorship Bias?

It is the logical error of focusing on the aspects that survived some process and casually overlooking those that did not because of their lack of prominence. This can lead to wrong conclusions in numerous ways. For example, during a recession you look only at the businesses that survived and note that they are performing poorly; however, they performed better than the rest, which failed and were therefore removed from the time series.


Explain how a ROC curve works?

The ROC curve is a graphical representation of the contrast between true positive rates and false positive rates at various thresholds. It is often used as a proxy for the trade-off between the sensitivity (true positive rate) and false positive rate.

Data Science ROC Curve
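
A minimal scikit-learn sketch (synthetic data) of computing the TPR/FPR pairs at various thresholds and the area under the curve:

```python
# Compute ROC points and AUC for a simple classifier's predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, scores)   # one (FPR, TPR) pair per threshold
print(roc_auc_score(y_test, scores))               # area under the ROC curve
```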

What is TF/IDF vectorization?

TF-IDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining.

Data Science TF IDF Vectorization

The TF-IDF value increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general.
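
A minimal scikit-learn sketch on a tiny invented corpus:

```python
# TF-IDF vectorization: words that appear in many documents get lower weights.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "data science interview questions",
    "data cleaning takes most of the time",
    "interview questions about machine learning",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)       # sparse matrix: documents x terms
print(vectorizer.get_feature_names_out())
print(tfidf.toarray().round(2))
```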

Python or R – Which one would you prefer for text analytics?

We will prefer Python because of the following reasons:
• Python would be the best option because it has Pandas library that provides easy to use data structures and high-performance data analysis tools.
• R is more suitable for machine learning than just text analysis.
• Python generally performs faster for text analytics tasks.

Differentiate between univariate, bivariate and multivariate analysis. 

Univariate analyses are descriptive statistical analysis techniques that involve only one variable at a given point in time. For example, a pie chart of sales by territory involves only one variable, so the analysis can be referred to as univariate analysis.

Bivariate analysis attempts to understand the relationship between two variables at a time, as in a scatterplot. For example, analyzing the volume of sales together with spending can be considered an example of bivariate analysis.

Multivariate analysis deals with the study of more than two variables to understand the effect of variables on the responses.

Explain Star Schema

It is a traditional database schema with a central table. Satellite tables map IDs to physical names or descriptions and can be connected to the central fact table using the ID fields; these tables are known as lookup tables and are principally useful in real-time applications, as they save a lot of memory. Sometimes star schemas involve several layers of summarization to recover information faster.

What is Cluster Sampling?

Cluster sampling is a technique used when it becomes difficult to study the target population spread across a wide area and simple random sampling cannot be applied. Cluster Sample is a probability sample where each sampling unit is a collection or cluster of elements.

For example, a researcher wants to survey the academic performance of high school students in Japan. He can divide the entire population of Japan into different clusters (cities). Then the researcher selects a number of clusters depending on his research through simple or systematic random sampling.

What is Systematic Sampling? 

Systematic sampling is a statistical technique where elements are selected from an ordered sampling frame at a regular interval. The list is traversed in a circular manner, so once you reach the end of the list, selection continues from the top again. The classic example of systematic sampling is the equal-probability method (e.g., selecting every kth element).

What are Eigenvectors and Eigenvalues? 

Eigenvectors are used for understanding linear transformations. In data analysis, we usually calculate the eigenvectors for a correlation or covariance matrix. Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing or stretching.
Eigenvalue can be referred to as the strength of the transformation in the direction of eigenvector or the factor by which the compression occurs.
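
A minimal NumPy sketch (synthetic correlated data) of computing eigenvalues and eigenvectors of a covariance matrix:

```python
# Eigen-decomposition of a covariance matrix: directions and their strengths.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 2] = X[:, 0] * 2 + rng.normal(scale=0.1, size=500)   # introduce correlation

cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh: for symmetric matrices
print(eigenvalues)        # strength of the transformation in each direction
print(eigenvectors)       # the directions themselves (as columns)
```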

Give Examples where a false positive is important than a false negative?

Let us first understand what false positives and false negatives are:

  • False Positives are the cases where you wrongly classified a non-event as an event a.k.a Type I error
  • False Negatives are the cases where you wrongly classify events as non-events, a.k.a Type II error.

Example 1: In the medical field, assume you have to give chemotherapy to patients. Assume a patient comes to that hospital and he is tested positive for cancer, based on the lab prediction but he actually doesn’t have cancer. This is a case of false positive. Here it is of utmost danger to start chemotherapy on this patient when he actually does not have cancer. In the absence of cancerous cell, chemotherapy will do certain damage to his normal healthy cells and might lead to severe diseases, even cancer.

Example 2: Let’s say an e-commerce company decided to give a $1000 gift voucher to the customers whom they expect to purchase at least $10,000 worth of items. They send the free voucher mail directly to 100 customers without any minimum purchase condition because they assume to make at least 20% profit on sold items above $10,000. Now the issue is if we send the $1000 gift vouchers to customers who have not actually purchased anything but are wrongly flagged as likely to make a $10,000 purchase: each such false positive costs the company money.

Give Examples where a false negative important than a false positive? And vice versa?

Example 1 FN: What if Jury or judge decides to make a criminal go free?

Example 2 FN: Fraud detection.

Example 3 FP: evaluating a promotional voucher campaign: if the model reports that many customers used the voucher when in fact they did not, the promotion is judged to work when it actually does not.

Give Examples where both false positive and false negatives are equally important? 

In the Banking industry giving loans is the primary source of making money but at the same time if your repayment rate is not good you will not make any profit, rather you will risk huge losses.
Banks don’t want to lose good customers and at the same point in time, they don’t want to acquire bad customers. In this scenario, both the false positives and false negatives become very important to measure.

What is the Difference between a Validation Set and a Test Set?

A Training Set:
• to fit the parameters i.e. weights

A Validation set:
• part of the training set
• for parameter selection
• to avoid overfitting

A Test set:
• for testing or evaluating the performance of a trained machine learning model, i.e. evaluating its predictive power and generalization.

What is cross-validation?

Reference: k-fold cross validation 

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation. It is mainly used in settings where the objective is prediction and one wants to estimate how accurately a model will perform in practice.

Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

The general procedure is as follows:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups
3. For each unique group:
a. Take the group as a hold out or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores

Data Science Cross Validation

There is an alternative in Scikit-Learn called stratified k-fold, in which the splits are made so that each fold contains a representative proportion of each class; a plain k-fold split gives no such guarantee, which can be a problem with a very unbalanced dataset.
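
A minimal scikit-learn sketch (synthetic data) of the procedure described above, with both plain and stratified 10-fold splits:

```python
# 10-fold and stratified 10-fold cross-validation of a simple classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000)

kfold = KFold(n_splits=10, shuffle=True, random_state=0)
strat = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

print(cross_val_score(model, X, y, cv=kfold).mean())
print(cross_val_score(model, X, y, cv=strat).mean())   # preserves class proportions per fold
```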

What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data. You select a model to train and then manually perform feature extraction. Used to devise complex models and algorithms that lend themselves to a prediction which in commercial use is known as predictive analytics.

What is Supervised Learning? 

Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples.

Algorithms: Support Vector Machines, Regression, Naive Bayes, Decision Trees, K-nearest Neighbor Algorithm and Neural Networks

Example: If you built a fruit classifier, the labels will be “this is an orange, this is an apple and this is a banana”, based on showing the classifier examples of apples, oranges and bananas.

What is Unsupervised learning?

Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.

Algorithms: Clustering, Anomaly Detection, Neural Networks and Latent Variable Models

Example: In the same example, a fruit clustering will categorize as “fruits with soft skin and lots of dimples”, “fruits with shiny hard skin” and “elongated yellow fruits”.

What are the various Machine Learning algorithms?

Machine Learning Algorithms

What is “Naive” in a Naive Bayes?

Reference: Naive Bayes Classifier on Wikipedia

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of conditional independence between every pair of features given the value of the class variable. Bayes’ theorem states the following relationship, given class variable y and dependent feature vector X1 through Xn:

Machine Learning Algorithms Naive Bayes
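
A minimal scikit-learn sketch (iris dataset) of a Gaussian Naive Bayes classifier, which applies Bayes' theorem with the conditional-independence assumption:

```python
# Train and evaluate a Gaussian Naive Bayes classifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
print(nb.score(X_test, y_test))     # accuracy on held-out data
```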

What is PCA (Principal Component Analysis)? When do you use it?

Reference: PCA on wikipedia

Principal component analysis (PCA) is a statistical method used in Machine Learning. It consists in projecting data in a higher dimensional space into a lower dimensional space by maximizing the variance of each dimension.

The process works as follows. We define a matrix A with n rows (the single observations of a dataset – in a tabular format, each single row) and p columns, our features. For this matrix we construct a variable space with as many dimensions as there are features. Each feature represents one coordinate axis. For each feature, the length has been standardized according to a scaling criterion, normally by scaling to unit variance. It is crucial to scale the features to a common scale, otherwise the features with a greater magnitude will weigh more in determining the principal components. Once we have plotted all the observations and computed the mean of each variable, that mean will be represented by a point in the center of our plot (the center of gravity). Then, we subtract the mean from each observation, shifting the coordinate system so that its center is at the origin. The resulting best-fitting line is the line that best accounts for the shape of the point swarm. It represents the maximum variance direction in the data. Each observation may be projected onto this line in order to get a coordinate value along the PC-line. This value is known as a score. The next best-fitting line can be similarly chosen from directions perpendicular to the first.
Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components.

Machine Learning Algorithms PCA

PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.
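
A minimal scikit-learn sketch (iris dataset) of the procedure described above, with standardization followed by projection onto the first two principal components:

```python
# Standardize the features, then project onto the first two principal components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)     # scale each feature to unit variance

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)             # coordinates along PC1 and PC2
print(pca.explained_variance_ratio_)             # variance captured by each component
```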

SVM (Support Vector Machine)  algorithm

Reference: SVM on wikipedia

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support-vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum-margin classifier, or equivalently, the perceptron of optimal stability. In the accompanying figure, the best hyperplane that divides the data is H3.

  • SVMs are helpful in text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
  • Some methods for shallow semantic parsing are based on support vector machines.
  • Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
  • Classification of satellite data like SAR data using supervised SVM.
  • Hand-written characters can be recognized using SVM.

What are the support vectors in SVM? 

Machine Learning Algorithms Support Vectors

In the diagram, we see that the sketched lines mark the distance from the classifier (the hyper plane) to the closest data points called the support vectors (darkened data points). The distance between the two thin lines is called the margin.

To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max(0, 1 − y_i(w·x_i − b)). This function is zero if x_i lies on the correct side of the margin. For data on the wrong side of the margin, the function’s value is proportional to the distance from the margin.
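
A minimal scikit-learn sketch (synthetic blobs) of fitting a linear SVM and inspecting the support vectors and the margin-controlling parameter C:

```python
# Fit a linear SVM and inspect its support vectors.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)

svm = SVC(kernel="linear", C=1.0).fit(X, y)   # larger C = fewer margin violations allowed
print(svm.support_vectors_)                   # the points that define the margin
print(svm.n_support_)                         # number of support vectors per class
```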

What are the different kernels in SVM?

There are four commonly used types of kernels in SVM:
1. Linear kernel
2. Polynomial kernel
3. Radial basis function (RBF) kernel
4. Sigmoid kernel

What are the most known ensemble algorithms? 

Reference: Ensemble Algorithms

The most popular tree-based ensemble methods are AdaBoost, Random Forest, and eXtreme Gradient Boosting (XGBoost).

AdaBoost is best used in a dataset with low noise, when computational complexity or timeliness of results is not a main concern and when there are not enough resources for broader hyperparameter tuning due to lack of time and knowledge of the user.

Random forests should not be used when dealing with time series data or any other data where look-ahead bias should be avoided, and the order and continuity of the samples need to be ensured. This algorithm can handle noise relatively well, but more knowledge from the user is required to adequately tune the algorithm compared to AdaBoost.

The main advantages of XGBoost are its speed compared to other algorithms such as AdaBoost, and its regularization parameter that successfully reduces variance. Beyond the regularization parameter, this algorithm also leverages a learning rate (shrinkage) and subsamples from the features like random forests, which further increases its ability to generalize. However, XGBoost is more difficult to understand, visualize and tune compared to AdaBoost and random forests. There is a multitude of hyperparameters that can be tuned to increase performance.
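
A minimal scikit-learn comparison sketch (synthetic data); note that scikit-learn's GradientBoostingClassifier is used here only as a stand-in for XGBoost, which is a separate library:

```python
# Cross-validated accuracy of three common ensemble methods.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for model in (AdaBoostClassifier(), RandomForestClassifier(), GradientBoostingClassifier()):
    print(type(model).__name__, cross_val_score(model, X, y, cv=5).mean())
```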

What is Deep Learning?

Deep Learning is a paradigm of machine learning that has shown incredible promise in recent years, in part because its layered architectures draw a loose analogy with the functioning of neurons in the human brain.

Deep Learning

What is the difference between machine learning and deep learning?

Deep learning & Machine learning: what’s the difference?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorized in the following four categories.
1. Supervised machine learning,
2. Semi-supervised machine learning,
3. Unsupervised machine learning,
4. Reinforcement learning.

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

Machine Learning vs Deep Learning

• The main difference between deep learning and machine learning is the way data is presented to the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANNs (artificial neural networks).

• Machine learning algorithms are designed to “learn” to act by understanding labeled data and then using it to produce new results with more datasets. However, when the result is incorrect, there is a need to “teach” them. Because machine learning algorithms require labeled data, they are not suitable for solving complex queries that involve a huge amount of data.

• Deep learning networks do not require human intervention, as the multilevel layers in neural networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes. However, even they can be wrong if the data quality is not good enough.

• Data decides everything. It is the quality of the data that ultimately determines the quality of the result.

• Both of these subsets of AI are somehow connected to data, which makes it possible to represent a certain form of “intelligence.” However, you should be aware that deep learning requires much more data than a traditional machine learning algorithm. The reason for this is that deep learning networks can identify different elements in neural network layers only when more than a million data points interact. Machine learning algorithms, on the other hand, are capable of learning by pre-programmed criteria.

What is the reason for the popularity of Deep Learning in recent times? 

Now although Deep Learning has been around for many years, the major breakthroughs from these techniques came just in recent years. This is because of two main reasons:
• The increase in the amount of data generated through various sources
• The growth in hardware resources required to run these models: GPUs are multiple times faster and help us build bigger and deeper deep learning models in comparatively less time than was required previously.

What is reinforcement learning?

Reinforcement Learning allows an agent to take actions that maximize a cumulative reward. The agent learns by trial and error through a reward/penalty system: the environment rewards the agent, so over time the agent makes better decisions.
Example: robot = agent, maze = environment. It is used for complex tasks (self-driving cars, game AI).

RL is a series of time steps in a Markov Decision Process:

1. Environment: the space in which the RL agent operates
2. State: data describing the situation resulting from the agent's past actions
3. Action: the action taken by the agent
4. Reward: the number received by the agent after its last action
5. Observation: data related to the environment, which can be fully visible or partially hidden

What are Artificial Neural Networks?

Artificial Neural networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural Networks can adapt to changing the input, so the network generates the best possible result without needing to redesign the output criteria.

An Artificial Neural Network works on the same principle as a biological neural network. It consists of inputs that are processed with weighted sums and a bias, with the help of activation functions.

Machine Learning Artificial Neural Network

How Are Weights Initialized in a Network?

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.

What Is the Cost Function? 

Also referred to as “loss” or “error,” cost function is a measure to evaluate how good your model’s performance is. It’s used to compute the error of the output layer during backpropagation. We push that error backwards through the neural network and use that during the different training functions.
The best-known one is the mean squared error.

Machine Learning Cost Function

What Are Hyperparameters?

With neural networks, you’re usually working with hyperparameters once the data is formatted correctly.
A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, batches, etc.).

What Will Happen If the Learning Rate is Set inaccurately (Too Low or Too High)? 

When your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.
If the learning rate is set too high, this causes undesirable divergent behavior in the loss function due to drastic updates in the weights. The model may fail to converge (it never settles on a good output) or even diverge (the updates are too chaotic for the network to train).

What Is The Difference Between Epoch, Batch, and Iteration in Deep Learning? 

Epoch – Represents one iteration over the entire dataset (everything put into the training model).
Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
Iteration – if we have 10,000 images as data and a batch size of 200, then an epoch should run 50 iterations (10,000 divided by 200).

What Are the Different Layers on CNN?

Reference: Layers of CNN 

Machine Learning Layers of CNN

Convolutional neural networks are regularized versions of multilayer perceptrons (MLPs). They were developed based on the working of the neurons of the animal visual cortex.

The objective of using the CNN:

The idea is that you give the computer this array of numbers and it will output numbers that describe the probability of the image being a certain class (.80 for a cat, .15 for a dog, .05 for a bird, etc.). It works similar to how our brain works. When we look at a picture of a dog, we can classify it as such if the picture has identifiable features such as paws or 4 legs. In a similar way, the computer is able to perform image classification by looking for low-level features such as edges and curves and then building up to more abstract concepts through a series of convolutional layers. The computer uses low-level features obtained at the initial levels to generate high-level features such as paws or eyes to identify the object.

There are four layers in CNN:
1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
2. Activation Layer (ReLU Layer) – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map. It follows each convolutional layer.
3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map. The stride determines how far the pooling window slides; max pooling takes the maximum of each n x n window.
4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.
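
As a rough illustration of these four layer types (assuming TensorFlow/Keras is installed; the shapes and layer sizes are arbitrary), a tiny image classifier might be stacked as follows:

```python
# A minimal CNN with convolution + ReLU, pooling, and a fully connected output layer.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # convolution + ReLU
    layers.MaxPooling2D((2, 2)),                                            # pooling (down-sampling)
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                                 # fully connected output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```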

Q60: What Is Pooling on CNN, and How Does It Work?

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.

What are Recurrent Neural Networks (RNNs)? 

Reference: RNNs

RNNs are a type of artificial neural networks designed to recognize the pattern from the sequence of data such as Time series, stock market and government agencies etc.

Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes in a fixed size vector as input which limits its usage in situations that involve a ‘series’ type input with no predetermined size.

Machine Learning RNN

RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask what’s the big deal; can’t I just call a regular NN repeatedly?

Machine Learning Regular NN

Sure can, but the ‘series’ part of the input means something. A single input item from the series is related to others and likely has an influence on its neighbors. Otherwise it’s just “many” inputs, not a “series” input (duh!).
Recurrent Neural Network remembers the past and its decisions are influenced by what it has learnt from the past. Note: Basic feed forward networks “remember” things too, but they remember things they learnt during training. For example, an image classifier learns what a “1” looks like during training and then uses that knowledge to classify things in production.
While RNNs learn similarly while training, in addition, they remember things learnt from prior input(s) while generating output(s). RNNs can take one or more input vectors and produce one or more output vectors and the output(s) are influenced not just by weights applied on inputs like a regular NN, but also by a “hidden” state vector representing the context based on prior input(s)/output(s). So, the same input could produce a different output depending on previous inputs in the series.

Machine Learning Vanilla NN

In summary, in a vanilla neural network, a fixed size input vector is transformed into a fixed size output vector. Such a network becomes “recurrent” when you repeatedly apply the transformations to a series of given input and produce a series of output vectors. There is no pre-set limitation to the size of the vector. And, in addition to generating the output which is a function of the input and hidden state, we update the hidden state itself based on the input and use it in processing the next input.

What is the role of the Activation Function?

The activation function is used to introduce non-linearity into the neural network, helping it to learn more complex functions. Without it, the neural network would only be able to learn a linear function, i.e. a linear combination of its input data. An activation function is a function in an artificial neuron that delivers an output based on its inputs.

Machine Learning libraries for various purposes

Machine Learning Libraries

What is an Auto-Encoder?

Reference: Auto-Encoder

Auto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabeled input which is then encoded to reconstruct the input. 

An autoencoder is a type of artificial neural network used to learn efficient data coding in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants exist to the basic model, with the aim of forcing the learned representations of the input to assume useful properties.
Autoencoders are effectively used for solving many applied problems, from face recognition to acquiring the semantic meaning of words.

Machine Learning Auto_Encoder
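
As an illustration, here is a minimal autoencoder sketch using tf.keras (the layer sizes and data are assumptions, not part of the original answer). Note that the bottleneck layer is smaller than the input and the network is trained to reproduce its own input:

import tensorflow as tf

input_dim, code_dim = 784, 32                      # e.g. flattened 28x28 images, 32-dim code

inputs = tf.keras.Input(shape=(input_dim,))
encoded = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)        # encoder: compress
decoded = tf.keras.layers.Dense(input_dim, activation="sigmoid")(encoded)   # decoder: reconstruct

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")   # minimize reconstruction error
# autoencoder.fit(x_train, x_train, epochs=10)      # the input is also the target: unlabeled data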

What is a Boltzmann Machine?

Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. A Boltzmann machine is basically used to optimize the weights and quantities for a given problem. The learning algorithm is very slow in networks with many layers of feature detectors. The “Restricted Boltzmann Machine” algorithm has a single layer of feature detectors, which makes it faster to train than a full Boltzmann machine.

Machine Learning Boltzmann Machine

What Is Dropout and Batch Normalization?

Dropout is a technique of randomly dropping hidden and visible nodes of a network to prevent overfitting (typically around 20 percent of the nodes are dropped). It roughly doubles the number of iterations needed for the network to converge, but it improves the model's capacity to generalize.

Batch normalization is a technique to improve the performance and stability of neural networks by normalizing the inputs to each layer so that they have a mean activation of zero and a standard deviation of one.
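
A minimal tf.keras sketch showing where the two techniques typically sit in a network (the layer sizes are illustrative assumptions):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
    tf.keras.layers.BatchNormalization(),   # normalize the layer's inputs: mean 0, std 1
    tf.keras.layers.Dropout(0.2),           # randomly drop ~20% of the nodes during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")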

Why Is TensorFlow the Most Preferred Library in Deep Learning?

TensorFlow provides both C++ and Python APIs, making it easier to work with, and it has a faster compilation time than other deep learning libraries such as Keras and PyTorch. TensorFlow supports both CPU and GPU computing devices.

What is Tensor in TensorFlow?

A tensor is a mathematical object represented as an array of higher dimensions; think of an n-dimensional matrix. These arrays of data with different dimensions and ranks, fed as input to the neural network, are called “Tensors.”
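
For example, a short sketch of tensors of different ranks in TensorFlow:

import tensorflow as tf

scalar = tf.constant(3.0)               # rank 0 tensor
vector = tf.constant([1.0, 2.0, 3.0])   # rank 1 tensor
matrix = tf.constant([[1, 2],
                      [3, 4]])          # rank 2 tensor
print(matrix.shape, matrix.dtype)       # (2, 2) <dtype: 'int32'>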

What is the Computational Graph?

Everything in TensorFlow is based on creating a computational graph: a network of nodes, where each node performs an operation. Nodes represent mathematical operations, and edges represent the tensors that flow between them. Since data flows in the form of a graph, it is also called a “DataFlow Graph.”
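
A minimal sketch of building such a graph in modern TensorFlow, where tf.function traces a Python function into a dataflow graph (the function and shapes are illustrative assumptions):

import tensorflow as tf

@tf.function                         # traces the function into a computational (dataflow) graph
def affine(x, w, b):
    return tf.matmul(x, w) + b       # nodes: matmul and add; edges: the tensors flowing between them

x = tf.ones((1, 3))
w = tf.ones((3, 2))
b = tf.zeros((2,))
print(affine(x, w, b))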

What is logistic regression?

• Logistic Regression models a function of the target variable as a linear combination of the predictors, then converts this function into a fitted value in the desired range.

• Binary or Binomial Logistic Regression can be understood as the type of Logistic Regression that deals with scenarios wherein the observed outcomes for dependent variables can be only in binary, i.e., it can have only two possible types.

• Multinomial Logistic Regression works in scenarios where the outcome can have more than two possible types – type A vs type B vs type C – that are not in any particular order.


How is logistic regression done? 

Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
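
A minimal scikit-learn sketch (the dataset and parameters are illustrative assumptions, not part of the original answer):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)   # fits the sigmoid-based model
print(clf.predict_proba(X_test[:3]))   # estimated probabilities for each class
print(clf.score(X_test, y_test))       # accuracy on held-out data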

Explain the steps in making a decision tree. 

1. Take the entire data set as input
2. Calculate entropy of the target variable, as well as the predictor attributes
3. Calculate your information gain of all attributes (we gain information on sorting different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is finalized
For example, let’s say you want to build a decision tree to decide whether you should accept or decline a job offer. The decision tree for this case is as shown:

Machine Learning Decision Tree

It is clear from the decision tree that an offer is accepted if:
• Salary is greater than $50,000
• The commute is less than an hour
• Coffee is offered
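
Below is a minimal sketch of the entropy and information-gain calculations behind steps 2–4 above (the labels and the split are hypothetical, for illustration only):

import numpy as np

def entropy(labels):
    # Shannon entropy of an array of class labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, groups):
    # entropy of the parent node minus the weighted entropy of the child groups
    n = len(labels)
    weighted = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - weighted

y = np.array([1, 1, 1, 0, 0, 0, 1, 0])        # toy target: accept (1) / decline (0) the offer
left, right = y[:4], y[4:]                    # hypothetical split, e.g. on "salary > $50,000"
print(information_gain(y, [left, right]))     # the attribute with the highest gain becomes the root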

How do you build a random forest model?

A random forest is built up of a number of decision trees. If you split the data into different packages and make a decision tree in each of the different groups of data, the random forest brings all those trees together.

Steps to build a random forest model:

1. Randomly select k features from a total of m features, where k << m
2. Among the k features, calculate the node d using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build the forest by repeating steps one to four n times to create n trees
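
A minimal scikit-learn sketch of the same steps (the dataset and hyperparameters are illustrative assumptions); max_features controls how many features are considered at each split, and n_estimators sets the number of trees:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100,      # number of trees in the forest
                                max_features="sqrt",   # features considered per split (k << m)
                                random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))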

Differentiate between univariate, bivariate, and multivariate analysis. 

Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.

Machine Learning Univariate Data

The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.

Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.

Bivariate data

Here, it is visible from the table that temperature and sales are directly proportional to each other: the hotter the temperature, the better the sales.

Multivariate data involves three or more variables and is categorized under multivariate analysis. It is similar to bivariate data but contains more than one dependent variable.

Example: data for house price prediction
The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion or range, minimum, maximum, etc. You can start describing the data and using it to guess what the price of the house will be.

What are the feature selection methods used to select the right variables?

There are two main methods for feature selection.
Filter Methods
This involves:
• Linear discrimination analysis
• ANOVA
• Chi-Square
The best analogy for selecting features is “bad data in, bad answer out.” When we’re limiting or selecting the features, it’s all about cleaning up the data coming in.

Wrapper Methods
This involves:
• Forward Selection: We test one feature at a time and keep adding them until we get a good fit
• Backward Selection: We test all the features and start removing them to see what works better
• Recursive Feature Elimination: Recursively looks through all the different features and how they pair together

Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.
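
As an illustration, a minimal Recursive Feature Elimination sketch with scikit-learn (the dataset, estimator, and number of features are assumptions):

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# RFE repeatedly fits the model and drops the weakest feature until 5 remain
selector = RFE(estimator=LogisticRegression(max_iter=5000), n_features_to_select=5)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the selected features
print(selector.ranking_)   # rank 1 = kept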

You are given a data set consisting of variables with more than 30 percent missing values. How will you deal with them? 

If the data set is large, we can simply remove the rows with missing values. It is the quickest way, and we then use the rest of the data to predict the values.

For smaller data sets, we can impute missing values with the mean or median of the rest of the data using a pandas DataFrame in Python. There are different ways to do so, such as df.mean() and df.fillna(mean).

Another option for imputation is to use KNN for numeric or classification values (KNN uses the k closest values to impute the missing value).
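
A minimal pandas/scikit-learn sketch of these imputation options (the tiny DataFrame is made up for illustration):

import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                   "income": [50, 60, np.nan, 58]})

mean_filled = df.fillna(df.mean())       # impute with each column's mean
median_filled = df.fillna(df.median())   # or with each column's median

knn = KNNImputer(n_neighbors=2)          # KNN imputation uses the k closest rows
knn_filled = pd.DataFrame(knn.fit_transform(df), columns=df.columns)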

How will you calculate the Euclidean distance in Python?

plot1 = [1,3]

plot2 = [2,5]

The Euclidean distance can be calculated as follows:

from math import sqrt

euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)   # sqrt(1 + 4) ≈ 2.24

What are dimensionality reduction and its benefits? 

Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.

This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there’s no point in storing a value in two different units (meters and inches).
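
For example, a minimal PCA sketch with scikit-learn (the dataset and number of components are illustrative assumptions):

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)            # 64 pixel features per image
pca = PCA(n_components=10).fit(X)              # keep 10 principal components
X_reduced = pca.transform(X)
print(X.shape, "->", X_reduced.shape)          # (1797, 64) -> (1797, 10)
print(pca.explained_variance_ratio_.sum())     # share of the variance retained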

How should you maintain a deployed model?

The steps to maintain a deployed model are (CREM):

1. Monitor: constant monitoring of all models is needed to determine their performance accuracy. When you change something, you want to figure out how your changes are going to affect things. This needs to be monitored to ensure it's doing what it's supposed to do.
2. Evaluate: evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
3. Compare: the new models are compared to each other to determine which model performs the best.
4. Rebuild: the best performing model is re-built on the current state of data.

How can a time series be declared stationary?

  1. The mean of the series should not be a function of time.
Machine Learning Stationary Time Series Data: Mean
  2. The variance of the series should not be a function of time. This property is known as homoscedasticity.
Machine Learning Stationary Time Series Data: Variance
  3. The covariance of the i-th term and the (i+m)-th term should not be a function of time.
Machine Learning Stationary Time Series Data: CoVariance
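
In practice, stationarity is often checked with a unit-root test such as the Augmented Dickey-Fuller test; a minimal statsmodels sketch (the random-walk series is made up for illustration):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500))   # a random walk: its mean drifts with time (non-stationary)

stat, pvalue, *_ = adfuller(series)
print(stat, pvalue)   # a large p-value fails to reject the unit-root null, i.e. treat as non-stationary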

‘People who bought this also bought…’ recommendations seen on Amazon are a result of which algorithm?

The recommendation engine is accomplished with collaborative filtering. Collaborative filtering explains the behavior of other users and their purchase history in terms of ratings, selection, etc.
The engine makes predictions on what might interest a person based on the preferences of other users. In this algorithm, item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.

What is a Generative Adversarial Network?

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine. The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.
The forger’s goal is to create wines that are indistinguishable from the authentic ones while the shop owner intends to tell if the wine is real or not accurately.

Machine Learning GAN illustration

• There is a noise vector coming into the forger who is generating fake wine.
• Here the forger acts as a Generator.
• The shop owner acts as a Discriminator.
• The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine.
The shop owner has to figure out whether it is real or fake.

So, there are two primary components of Generative Adversarial Network (GAN) named:
1. Generator
2. Discriminator

The generator is a CNN that keeps producing images that get closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The discriminator is trained to identify real and fake images, while the generator's ultimate aim is to produce images the discriminator can no longer tell apart from the real ones.

You are given a dataset on cancer detection. You have built a classification model and achieved an accuracy of 96 percent. Why shouldn’t you be happy with your model performance? What can you do about it?

Cancer detection results in imbalanced data. For an imbalanced dataset, accuracy should not be used as the sole measure of performance. It is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient's prognosis.

Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), F measure to determine the class wise performance of the classifier.
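
A minimal scikit-learn sketch of these metrics on made-up predictions (the labels below are purely illustrative):

from sklearn.metrics import confusion_matrix, recall_score, f1_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = cancer
y_pred = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # accuracy is 80%, yet 2 of the 3 cancer cases are missed

print(confusion_matrix(y_true, y_pred))
print(recall_score(y_true, y_pred))       # sensitivity (true positive rate)
print(f1_score(y_true, y_pred))           # F measure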

We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?

The most appropriate algorithm for this case is logistic regression.

After studying the behavior of a population, you have identified four specific individual types that are valuable to your study. You would like to find all users who are most similar to each individual type. Which algorithm is most appropriate for this study? 

As we are looking for grouping people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering is the most appropriate algorithm for this study.
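
A minimal k-means sketch with scikit-learn (the behavioral features are randomly generated for illustration):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))       # toy behavioral features for 200 users

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)   # k = 4 individual types
print(kmeans.labels_[:10])          # cluster assignment for the first 10 users
print(kmeans.cluster_centers_)      # one centroid per individual type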

You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true? 

{grape, apple} must be a frequent itemset.

Your organization has a website where visitors randomly receive one of two coupons. It is also possible that visitors to the website will not receive a coupon. You have been asked to determine if offering a coupon to website visitors has any impact on their purchase decisions. Which analysis method should you use?

One-way ANOVA: in statistics, one-way analysis of variance is a technique that can be used to compare the means of two or more samples. This technique can be used only for numerical response data, the “Y”, usually one variable, and numerical or categorical input data, the “X”, always one variable, hence “one-way”.
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
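
A minimal SciPy sketch of a one-way ANOVA on made-up purchase amounts for the three visitor groups (all numbers are illustrative):

from scipy.stats import f_oneway

coupon_a = [22, 25, 27, 30, 24]     # purchases by visitors who received coupon A
coupon_b = [28, 31, 29, 35, 30]     # purchases by visitors who received coupon B
no_coupon = [20, 19, 24, 21, 23]    # purchases by visitors who received no coupon

f_stat, p_value = f_oneway(coupon_a, coupon_b, no_coupon)
print(f_stat, p_value)              # a small p-value suggests the group means differ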

What are the feature vectors?

A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that’s easy to analyze.

What is root cause analysis?

Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique used for isolating the root causes of faults or problems. A factor is called a root cause if its removal from the problem-fault sequence prevents the final undesirable event from recurring.

Do gradient descent methods always converge to similar points?

They do not, because in some cases, they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions.

 In your choice of language, write a program that prints the numbers ranging from one to 50. But for multiples of three, print “Fizz” instead of the number and for the multiples of five, print “Buzz.” For numbers which are multiples of both three and five, print “FizzBuzz.”

Python FizzBuzz algorithm
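
One possible Python solution:

for i in range(1, 51):
    if i % 15 == 0:
        print("FizzBuzz")   # multiple of both three and five
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)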

What are the different Deep Learning Frameworks?

PyTorch: PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.
TensorFlow: TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. Licensed under the Apache License 2.0. Developed by the Google Brain team.
Microsoft Cognitive Toolkit: Microsoft Cognitive Toolkit describes neural networks as a series of computational steps via a directed graph.
Keras: Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. Licensed under the MIT license.

Data Sciences and Data Mining Glossary

Credit: Dr. Matthew North
Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Archived Data: Data which have been copied out of a live production database and into a data warehouse or other permanent system where they can be accessed and analyzed, but not by primary operational business systems.

Association Rules: A data mining methodology which compares attributes in a data set across all observations to identify areas where two or more attributes are frequently found together. If their frequency of coexistence is high enough throughout the data set, the association of those attributes can be said to be a rule.

Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values.

Binomial: A data type for any set of values that is limited to one of two numeric options.

Binominal: In RapidMiner, the data type binominal is used instead of binomial, enabling both numerical and character-based sets of values that are limited to one of two options.

Business Understanding: See Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Case Sensitive: A situation where a computer program recognizes the uppercase version of a letter or word as being different from the lowercase version of the same letter or word.

Classification: One of the two main goals of conducting data mining activities, with the other being prediction. Classification creates groupings in a data set based on the similarity of the observations’ attributes. Some data mining methodologies, such as decision trees, can predict an observation’s classification.

Code: Code is the result of a computer worker’s work. It is a set of instructions, typed in a specific grammar and syntax, that a computer can understand and execute. According to Lawrence Lessig, it is one of four methods humans can use to set and control boundaries for behavior when interacting with computer systems.

Coefficient: In data mining, a coefficient is a value that is calculated based on the values in a data set that can be used as a multiplier or as an indicator of the relative strength of some attribute or component in a data mining model.

Column: See Attribute. In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Comma Separated Values (CSV): A common text-based format for data sets where the divisions between attributes (columns of data) are indicated by commas. If commas occur naturally in some of the values in the data set, a CSV file will misunderstand these to be attribute separators, leading to misalignment of attributes.

Conclusion: See Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Confidence (Alpha) Level: A value, usually 5% or 0.05, used to test for statistical significance in some data mining methods. If statistical significance is found, a data miner can say that there is a 95% likelihood that a calculated or predicted value is not a false positive.

Confidence Percent: In predictive data mining, this is the percent of calculated confidence that the model has calculated for one or more possible predicted values. It is a measure for the likelihood of false positives in predictions. Regardless of the number of possible predicted values, their collective confidence percentages will always total to 100%.

Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Correlation: A statistical measure of the strength of affinity, based on the similarity of observational values, of the attributes in a data set. These can be positive (as one attribute’s values go up or down, so too does the correlated attribute’s values); or negative (correlated attributes’ values move in opposite directions). Correlations are indicated by coefficients which fall on a scale between -1 (complete negative correlation) and 1 (complete positive correlation), with 0 indicating no correlation at all between two attributes.

CRISP-DM: An acronym for Cross-Industry Standard Process for Data Mining. This process was jointly developed by several major multi-national corporations around the turn of the new millennium in order to standardize the approach to mining data. It is comprised of six cyclical steps: Business (Organizational) Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, Deployment.

Cross-validation: A method of statistically evaluating a training data set for its likelihood of producing false positives in a predictive data mining model.

Data: Data are any arrangement and compilation of facts. Data may be structured (e.g. arranged in columns (attributes) and rows (observations)), or unstructured (e.g. paragraphs of text, computer log file).

Data Analysis: The process of examining data in a repeatable and structured way in order to extract meaning, patterns or messages from a set of data.

Data Mart: A location where data are stored for easy access by a broad range of people in an organization. Data in a data mart are generally archived data, enabling analysis in a setting that does not impact live operations.

Data Mining: A computational process of analyzing data sets, usually large in nature, using both statistical and logical methods, in order to uncover hidden, previously unknown, and interesting patterns that can inform organizational decision making.

Data Preparation: The third in the six steps of CRISP-DM. At this stage, the data miner ensures that the data to be mined are clean and ready for mining. This may include handling outliers or other inconsistent data, dealing with missing values, reducing attributes or observations, setting attribute roles for modeling, etc.

Data Set: Any compilation of data that is suitable for analysis.

Data Type: In a data set, each attribute is assigned a data type based on the kind of data stored in the attribute. There are many data types which can be generalized into one of three areas: Character (Text) based; Numeric; and Date/Time. Within these categories, RapidMiner has several data types. For example, in the Character area, RapidMiner has Polynominal, Binominal, etc.; and in the Numeric area it has Real, Integer, etc.

Data Understanding: The second in the six steps of CRISP-DM. At this stage, the data miner seeks out sources of data in the organization, and works to collect, compile, standardize, define and document the data. The data miner develops a comprehension of where the data have come from, how they were collected and what they mean.

Data Warehouse: A large-scale repository for archived data which are available for analysis. Data in a data warehouse are often stored in multiple formats (e.g. by week, month, quarter and year), facilitating large scale analyses at higher speeds. The data warehouse is populated by extracting data from operational systems so that analyses do not interfere with live business operations.

Database: A structured organization of facts that is organized such that the facts can be reliably and repeatedly accessed. The most common type of database is a relational database, in which facts (data) are arranged in tables of columns and rows. The data are then accessed using a query language, usually SQL (Structured Query Language), in order to extract meaning from the tables.

Decision Tree: A data mining methodology where leaves and nodes are generated to construct a predictive tree, whereby a data miner can see the attributes which are most predictive of each possible outcome in a target (label) attribute.

Denormalization: The process of removing relational organization from data, reintroducing redundancy into the data, but simultaneously eliminating the need for joins in a relational database, enabling faster querying.

Dependent Variable (Attribute): The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Deployment: The sixth and final of the six steps of CRISP-DM. At this stage, the data miner takes the results of data mining activities and puts them into practice in the organization. The data miner watches closely and collects data to determine if the deployment is successful and ethical. Deployment can happen in stages, such as through pilot programs before a full-scale roll out.

Descartes’ Rule of Change: An ethical framework set forth by Rene Descartes which states that if an action cannot be taken repeatedly, it cannot be ethically taken even once.

Design Perspective: The view in RapidMiner where a data miner adds operators to a data mining stream, sets those operators’ parameters, and runs the model.

Discriminant Analysis: A predictive data mining model which attempts to compare the values of all observations across all attributes and identify where natural breaks occur from one category to another, and then predict which category each observation in the data set will fall into.

Ethics: A set of moral codes or guidelines that an individual develops to guide his or her decision making in order to make fair and respectful decisions and engage in right actions. Ethical standards are higher than legally required minimums.

Evaluation: The fifth of the six steps of CRISP-DM. At this stage, the data miner reviews the results of the data mining model, interprets results and determines how useful they are. He or she may also conduct an investigation into false positives or other potentially misleading results.

False Positive: A predicted value that ends up not being correct.

Field: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Frequency Pattern: A recurrence of the same, or similar, observations numerous times in a single data set.

Fuzzy Logic: A data mining concept often associated with neural networks where predictions are made using a training data set, even though some uncertainty exists regarding the data and a model’s predictions.

Gain Ratio: One of several algorithms used to construct decision tree models.

Gini Index: An algorithm created by Corrado Gini that can be used to generate decision tree models.

Heterogeneity: In statistical analysis, this is the amount of variety found in the values of an attribute.

Inconsistent Data: These are values in an attribute in a data set that are out-of-the-ordinary among the whole set of values in that attribute. They can be statistical outliers, or other values that simply don’t make sense in the context of the ‘normal’ range of values for the attribute. They are generally replaced or removed during the Data Preparation phase of CRISP-DM.

Independent Variable (Attribute): These are attributes that act on the dependent attribute (the target, or label). They are used to help predict the label in a predictive model.

Jittering: The process of adding a small, random decimal to discrete values in a data set so that when they are plotted in a scatter plot, they are slightly apart from one another, enabling the analyst to better see clustering and density.

Join: The process of connecting two or more tables in a relational database together so that their attributes can be accessed in a single query, such as in a view.

Kant’s Categorical Imperative: An ethical framework proposed by Immanuel Kant which states that if everyone cannot ethically take some action, then no one can ethically take that action.

k-Means Clustering: A data mining methodology that uses the mean (average) values of the attributes in a data set to group each observation into a cluster of other observations whose values are most similar to the mean for that cluster.

Label: In RapidMiner, this is the role that must be set in order to use an attribute as the dependent, or target, attribute in a predictive model.

Laws: These are regulatory statutes which have associated consequences that are established and enforced by a governmental agency. According to Lawrence Lessig, these are one of the four methods for establishing boundaries to define and regulate social behavior.

Leaf: In a decision tree data mining model, this is the terminal end point of a branch, indicating the predicted outcome for observations whose values follow that branch of the tree.

Linear Regression: A predictive data mining method which uses the algebraic formula for calculating the slope of a line in order to predict where a given observation will likely fall along that line.

Logistic Regression: A predictive data mining method which uses the logistic function to predict one of a set of possible outcomes, along with a probability that the prediction will be the actual outcome.

Markets: A socio-economic construct in which peoples’ buying, selling, and exchanging behaviors define the boundaries of acceptable or unacceptable behavior. Lawrence Lessig offers this as one of four methods for defining the parameters of appropriate behavior.

Mean: See Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values. 

Median: With the Mean and Mode, this is one of three generally used Measures of Central Tendency. It is an arithmetic way of defining what ‘normal’ looks like in a numeric attribute. It is calculated by rank ordering the values in an attribute and finding the one in the middle. If there are an even number of observations, the two in the middle are averaged to find the median.

Meta Data: These are facts that describe the observational values in an attribute. Meta data may include who collected the data, when, why, where, how, how often; and usually include some descriptive statistics such as the range, average, standard deviation, etc.

Missing Data: These are instances in an observation where one or more attributes does not have a value. It is not the same as zero, because zero is a value. Missing data are like Null values in a database, they are either unknown or undefined. These are usually replaced or removed during the Data Preparation phase of CRISP-DM.

Mode: With Mean and Median, this is one of three common Measures of Central Tendency. It is the value in an attribute which is the most common. It can be numerical or text. If an attribute contains two or more values that appear an equal number of times and more than any other values, then all are listed as the mode, and the attribute is said to be Bimodal or Multimodal.

Model: A computer-based representation of real-life events or activities, constructed upon the basis of data which represent those events.

Name (Attribute): This is the text descriptor of each attribute in a data set. In RapidMiner, the first row of an imported data set should be designated as the attribute name, so that these are not interpreted as the first observation in the data set.

Neural Network: A predictive data mining methodology which tries to mimic human brain processes by comparing the values of all attributes in a data set to one another through the use of a hidden layer of nodes. The frequencies with which the attribute values match, or are strongly similar, create neurons which become stronger at higher frequencies of similarity.

n-Gram: In text mining, this is a combination of words or word stems that represents a phrase that may have more meaning or significance than would the single word or stem.

Node: A terminal point or mid-point in decision trees and neural networks where an attribute branches or forks away from other terminals or branches because the values represented at that point have become significantly different from all other values for that attribute.

Normalization: In a relational database, this is the process of breaking data out into multiple related tables in order to reduce redundancy and eliminate multivalued dependencies.

Null: The absence of a value in a database. The value is unrecorded, unknown, or undefined. See Missing Data.

Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Online Analytical Processing (OLAP): A database concept where data are collected and organized in a way that facilitates analysis, rather than practical, daily operational work. Evaluating data in a data warehouse is an example of OLAP. The underlying structure that collects and holds the data makes analysis faster, but would slow down transactional work.

Online Transaction Processing (OLTP): A database concept where data are collected and organized in a way that facilitates fast and repeated transactions, rather than broader analytical work. Scanning items being purchased at a cash register is an example of OLTP. The underlying structure that collects and holds the data makes transactions faster, but would slow down analysis.

Operational Data: Data which are generated as a result of day-to-day work (e.g. the entry of work orders for an electrical service company).

Operator: In RapidMiner, an operator is any one of more than 100 tools that can be added to a data mining stream in order to perform some function. Functions range from adding a data set, to setting an attribute’s role, to applying a modeling algorithm. Operators are connected into a stream by way of ports connected by splines.

Organizational Data: These are data which are collected by an organization, often in aggregate or summary format, in order to address a specific question, tell a story, or answer a specific question. They may be constructed from Operational Data, or added to through other means such as surveys, questionnaires or tests.

Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Parameters: In RapidMiner, these are the settings that control values and thresholds that an operator will use to perform its job. These may be the attribute name and role in a Set Role operator, or the algorithm the data miner desires to use in a model operator.

Port: The input or output required for an operator to perform its function in RapidMiner. These are connected to one another using splines.

Prediction: The target, or label, or dependent attribute that is generated by a predictive model, usually for a scoring data set in a model.

Premise: See Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Privacy: The concept describing a person’s right to be let alone; to have information about them kept away from those who should not, or do not need to, see it. A data miner must always respect and safeguard the privacy of individuals represented in the data he or she mines.

Professional Code of Conduct: A helpful guide or documented set of parameters by which an individual in a given profession agrees to abide. These are usually written by a board or panel of experts and adopted formally by a professional organization.

Query: A method of structuring a question, usually using code, that can be submitted to, interpreted, and answered by a computer.

Record: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Relational Database: A computerized repository, comprised of entities that relate to one another through keys. The most basic and elemental entity in a relational database is the table, and tables are made up of attributes. One or more of these attributes serves as a key that can be matched (or related) to a corresponding attribute in another table, creating the relational effect which reduces data redundancy and eliminates multivalued dependencies.

Repository: In RapidMiner, this is the place where imported data sets are stored so that they are accessible for modeling.

Results Perspective: The view in RapidMiner that is seen when a model has been run. It is usually comprised of two or more tabs which show meta data, data in a spreadsheet-like view, and predictions and model outcomes (including graphical representations where applicable).

Role (Attribute): In a data mining model, each attribute must be assigned a role. The role is the part the attribute plays in the model. It is usually equated to serving as an independent variable (regular), or dependent variable (label).

Row: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Sample: A subset of an entire data set, selected randomly or in a structured way. This usually reduces a data set down, allowing models to be run faster, especially during development and proof-of-concept work on a model.

Scoring Data: A data set with the same attributes as a training data set in a predictive model, with the exception of the label. The training data set, with the label defined, is used to create a predictive model, and that model is then applied to a scoring data set possessing the same attributes in order to predict the label for each scoring observation.

Social Norms: These are the sets of behaviors and actions that are generally tolerated and found to be acceptable in a society. According to Lawrence Lessig, these are one of four methods of defining and regulating appropriate behavior.

Spline: In RapidMiner, these lines connect the ports between operators, creating the stream of a data mining model.

Standard Deviation: One of the most common statistical measures of how dispersed the values in an attribute are. This measure can help determine whether or not there are outliers (a common type of inconsistent data) in a data set.

Standard Operating Procedures: These are organizational guidelines that are documented and shared with employees which help to define the boundaries for appropriate and acceptable behavior in the business setting. They are usually created and formally adopted by a group of leaders in the organization, with input from key stakeholders in the organization.

Statistical Significance: In statistically-based data mining activities, this is the measure of whether or not the model has yielded any results that are mathematically reliable enough to be used. Any model lacking statistical significance should not be used in operational decision making.

Stemming: In text mining, this is the process of reducing like-terms down into a single, common token (e.g. country, countries, country’s, countryman, etc. → countr).

Stopwords: In text mining, these are small words that are necessary for grammatical correctness, but which carry little meaning or power in the message of the text being mined. These are often articles, prepositions or conjunctions, such as ‘a’, ‘the’, ‘and’, etc., and are usually removed in the Process Document operator’s sub-process.

Stream: This is the string of operators in a data mining model, connected through the operators’ ports via splines, that represents all actions that will be taken on a data set in order to mine it.

Structured Query Language (SQL): The set of codes, reserved keywords and syntax defined by the American National Standards Institute used to create, manage and use relational databases.

Sub-process: In RapidMiner, this is a stream of operators set up to apply a series of actions to all inputs connected to the parent operator.

Support Percent: In an association rule data mining model, this is the percent of observations in which the antecedent and the consequent are found together. Since this is calculated as the number of times the two are found together divided by the total number of times they could have been found together, the Support Percent is the same for reciprocal rules.

Table: In data collection, a table is a grid of columns and rows, where in general, the columns are individual attributes in the data set, and the rows are observations across those attributes. Tables are the most elemental entity in relational databases.

Target Attribute: See Label; Dependent Variable: The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Technology: Any tool or process invented by mankind to do or improve work.

Text Mining: The process of data mining unstructured text-based data such as essays, news articles, speech transcripts, etc. to discover patterns of word or phrase usage to reveal deeper or previously unrecognized meaning.

Token (Tokenize): In text mining, this is the process of turning words in the input document(s) into attributes that can be mined.

Training Data: In a predictive model, this data set already has the label, or dependent variable defined, so that it can be used to create a model which can be applied to a scoring data set in order to generate predictions for the latter.

Tuple: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Variable: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

View: A type of pseudo-table in a relational database which is actually a named, stored query. This query runs against one or more tables, retrieving a defined number of attributes that can then be referenced as if they were in a table in the database. Views can limit users’ ability to see attributes to only those that are relevant and/or approved for those users to see. They can also speed up the query process because although they may contain joins, the key columns for the joins can be indexed and cached, making the view’s query run faster than it would if it were not stored as a view. Views can be useful in data mining as data miners can be given read-only access to the view, upon which they can build data mining models, without having to have broader administrative rights on the database itself.

What is the Central Limit Theorem and why is it important?

An Introduction to the Central Limit Theorem

Answer: Suppose that we are interested in estimating the average height among all people. Collecting data for every person in the world is impractical, bordering on impossible. While we can’t obtain a height measurement from everyone in the population, we can still sample some people. The question now becomes, what can we say about the average height of the entire population given a single sample?
The Central Limit Theorem addresses this question exactly. Formally, it states that if we sample from a population using a sufficiently large sample size, the means of the samples (the sample means) will be normally distributed (assuming true random sampling), with their mean tending to the mean of the population and their variance equal to the variance of the population divided by the sample size.
What’s especially important is that this will be true regardless of the distribution of the original population.

Central Limit Theorem: Population Distribution

As we can see, the distribution is pretty ugly. It certainly isn’t normal, uniform, or any other commonly known distribution. In order to sample from the above distribution, we need to define a sample size, referred to as N. This is the number of observations that we will sample at a time. Suppose that we choose N to be 3. This means that we will sample in groups of 3. So for the above population, we might sample groups such as [5, 20, 41], [60, 17, 82], [8, 13, 61], and so on.
Suppose that we gather 1,000 samples of 3 from the above population. For each sample, we can compute its average. If we do that, we will have 1,000 averages. This set of 1,000 averages is called a sampling distribution, and according to the Central Limit Theorem, the sampling distribution will approach a normal distribution as the sample size N used to produce it increases. Here is what our sampling distribution looks like for N = 3.

Sample Mean Distribution with N = 3

As we can see, it certainly looks uni-modal, though not necessarily normal. If we repeat the same process with a larger sample size, we should see the sampling distribution start to become more normal. Let’s repeat the same process again with N = 10. Here is the sampling distribution for that sample size.

Sample Mean Distribution with N = 10
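
A minimal NumPy simulation of this process (the skewed population and sample sizes are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=10.0, size=100_000)   # a skewed, clearly non-normal population

def sample_means(N, n_samples=1_000):
    # draw n_samples groups of size N and return the mean of each group
    return rng.choice(population, size=(n_samples, N)).mean(axis=1)

for N in (3, 10, 50):
    means = sample_means(N)
    # the spread of the sample means shrinks roughly like population std / sqrt(N)
    print(N, means.mean(), means.std())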

Credit: Steve Nouri

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is an error introduced in the model due to an overly complex algorithm; the model performs very well on the training set but poorly on the test set. It can lead to high sensitivity and overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.
3. The decision tree has low bias and high variance; you can decrease the depth of the tree or use fewer attributes to reduce the variance.
4. Linear regression has low variance and high bias; you can increase the number of features or use another regression that better fits the data to reduce the bias.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.

The Best Medium-Hard Data Analyst SQL Interview Questions

compiled by Google Data Analyst Zachary Thomas!

The Best Medium-Hard Data Analyst SQL Interview Questions

Self-Join Practice Problems: MoM Percent Change

Context: Oftentimes it’s useful to know how much a key metric, such as monthly active users, changes between months.
Say we have a table logins in the form:

SQL Self-Join Practice Mom Percent Change

Task: Find the month-over-month percentage change for monthly active users (MAU).

Solution:
(This solution, like other solution code blocks you will see in this doc, contains comments about SQL syntax that may differ between flavors of SQL or other comments about the solutions as listed)

SQL MoM Solution2

 

Tree Structure Labeling with SQL

Context: Say you have a table tree with a column of nodes and a column of corresponding parent nodes

Task: Write SQL such that we label each node as a “leaf”, “inner” or “Root” node, such that for the nodes above we get:

A solution which works for the above example will receive full credit, although you can receive extra credit for providing a solution that is generalizable to a tree of any depth (not just depth = 2, as is the case in the example above).

Solution: This solution works for the example above with tree depth = 2, but is not generalizable beyond that.

An alternate solution, that is generalizable to any tree depth:
Acknowledgement: this more generalizable solution was contributed by Fabian Hofmann

An alternate solution, without explicit joins:
Acknowledgement: William Chargin on 5/2/20 noted that WHERE parent IS NOT NULL is needed to make this solution return Leaf instead of NULL.

Retained Users Per Month with SQL

Acknowledgement: this problem is adapted from SiSense’s “Using Self Joins to Calculate Your Retention, Churn, and Reactivation Metrics” blog post

PART 1:
Context: Say we have login data in the table logins:

Task: Write a query that gets the number of retained users per month. In this case, retention for a given month is defined as the number of users who logged in that month who also logged in the immediately previous month.

Solution:

PART 2:

Task: Now we’ll take retention and turn it on its head: Write a query to find how many users last month did not come back this month. i.e. the number of churned users

Solution:

Note that there are solutions to this problem that can use LEFT or RIGHT joins.

PART 3:
Context: You now want to see the number of active users this month who have been reactivated, in other words users who have churned but this month became active again. Keep in mind a user can reactivate after churning earlier than just the previous month. An example of this could be a user active in February (appears in logins), no activity in March and April, but then active again in May (appears in logins), so they count as a reactivated user for May.

Task: Create a table that contains the number of reactivated users per month.

Solution:

Cumulative Sums with SQL

Acknowledgement: This problem was inspired by Sisense’s “Cash Flow modeling in SQL” blog post
Context: Say we have a table transactions in the form:

Where cash_flow is the revenues minus costs for each day.

Task: Write a query to get cumulative cash flow for each day such that we end up with a table in the form below:

Solution using a window function (more efficient):

Alternative Solution (less efficient):

Rolling Averages with SQL

Acknowledgement: This problem is adapted from Sisense’s “Rolling Averages in MySQL and SQL Server” blog post
Note: there are different ways to compute rolling/moving averages. Here we’ll use a preceding average which means that the metric for the 7th day of the month would be the average of the preceding 6 days and that day itself.
Context: Say we have table signups in the form:

Task: Write a query to get 7-day rolling (preceding) average of daily sign ups

Solution1:

Solution2: (using windows, more efficient)

Multiple Join Conditions in SQL

Acknowledgement: This problem was inspired by Sisense’s “Analyzing Your Email with SQL” blog post
Context: Say we have a table emails that includes emails sent to and from zach@g.com:

Task: Write a query to get the response time per email (id) sent to zach@g.com. Do not include ids that did not receive a response from zach@g.com. Assume each email thread has a unique subject. Keep in mind a thread may have multiple responses back-and-forth between zach@g.com and another email address.

Solution:

SQL Window Function Practice Problems

#1: Get the ID with the highest value
Context: Say we have a table salaries with data on employee salary and department in the following format:

Task: Write a query to get the empno with the highest salary. Make sure your solution can handle ties!

#2: Average and rank with a window function (multi-part)

PART 1:
Context: Say we have a table salaries in the format:

Task: Write a query that returns the same table, but with a new column that has average salary per depname. We would expect a table in the form:

Solution:

PART 2:
Task: Write a query that adds a column with the rank of each employee based on their salary within their department, where the employee with the highest salary gets the rank of 1. We would expect a table in the form:

Solution:

 

Reference: 800 Data Science Questions & Answers doc by Steve Nouri

Direct download here

What are Differences between Supervised and Unsupervised Learning?

• Supervised: input data is labelled; the data is split into training/validation/test sets; it is used for prediction; typical methods are classification and regression.
• Unsupervised: input data is unlabeled; there is no split; it is used for analysis; typical methods are clustering, dimensionality reduction, and density estimation.

Python Cheat Sheet

Python Beginners Cheat Sheet

Data Sciences Cheat Sheet

Data Sciences Cheat Sheet

Pandas Cheat Sheet

Pandas Cheat Sheet

Learn SQL with Practical Exercises

SQL is definitely one of the most fundamental skills needed to be a data scientist.

This is a comprehensive handbook that can help you to learn SQL (Structured Query Language), which could be directly downloaded here

Credit: D Armstrong

Learn SQL with Practical_Exercises

Data Visualization: A comprehensive VIP Matplotlib Cheat sheet

A comprehensive VIP Matplotlib cheatsheet

Credit: Matplotlib

Download it here

Power BI for Intermediates

Power BI for Intermediates

Download it here

Credit: Soheil Bakhshi and Bruce Anderson

How to get a job in data science – a semi-harsh Q/A guide.

How to get a job in data science – a semi-harsh Q/A guide.

HOW DO I GET A JOB IN DATA SCIENCE?

Hey you. Yes you, person asking “how do I get a job in data science/analytics/MLE/AI whatever BS job with data in the title?”. I got news for you. There are two simple rules to getting one of these jobs.

Have experience.

Don’t have no experience.

There are approximately 1000 entry level candidates who think they’re qualified because they did a 24 week bootcamp for every entry level job. I don’t need to be a statistician to tell you your odds of landing one of these aren’t great.

HOW DO I GET EXPERIENCE?

Are you currently employed? If not, get a job. If you are, figure out a way to apply data science in your job, then put it on your resume. Mega bonus points here if you can figure out a way to attribute a dollar value to your contribution. Talk to your supervisor about career aspirations at year-end/mid-year reviews. Maybe you’ll find a way to transfer to a role internally and skip the whole resume ignoring phase. Alternatively, network. Be friends with people who are in the roles you want to be in, maybe they’ll help you find a job at their company.

WHY AM I NOT GETTING INTERVIEWS?

IDK. Maybe you don’t have the required experience. Maybe there are 500+ other people applying for the same position. Maybe your resume stinks. If you’re getting 1/20 response rate, you’re doing great. Quit whining.

IS XYZ DEGREE GOOD FOR DATA SCIENCE?

Does your degree involve some sort of non-remedial math higher than college algebra? Does your degree involve taking any sort of programming classes? If yes, congratulations, your degree will pass most base requirements for data science. Is it the best? Probably not, unless you’re CS or some really heavy math degree where half your classes are taught in Greek letters. Don’t come at me with those art history and underwater basket weaving degrees unless you have multiple years experience doing something else.

SHOULD I DO XYZ BOOTCAMP/MICROMASTERS?

Do you have experience? No? This ain’t gonna help you as much as you think it might. Are you experienced and want to learn more about how data science works? This could be helpful.

SHOULD I DO XYZ MASTER’S IN DATA SCIENCE PROGRAM?

Congratulations, doing a Master’s is usually a good idea and will help make you more competitive as a candidate. Should you shell out 100K for one when you can pay 10K for one online? Probably not. In all likelihood, you’re not gonna get $90K in marginal benefit from the more expensive program. Pick a known school (probably avoid really obscure schools, the name does count for a little) and you’ll be fine. Big bonus here if you can sucker your employer into paying for it.

WILL XYZ CERTIFICATE HELP MY RESUME?

Does your certificate say “AWS” or “AZURE” on it? If not, no.

DO I NEED TO KNOW XYZ MATH TOPIC?

Yes. Stop asking. Probably learn probability, be familiar with linear algebra, and understand what the hell a partial derivative is. Learn how to test hypotheses. Ultimately you need to know what the heck is going on math-wise in your predictions otherwise the company is going to go bankrupt and it will be all your fault.

WHAT IF I’M BAD AT MATH?

Do some studying or something. MIT opencourseware has a bunch of free recorded math classes. If you want to learn some Linear Algebra, Gilbert Strang is your guy.

WHAT PROGRAMMING LANGUAGES SHOULD I LEARN?

STOP ASKING THIS QUESTION. I CAN GOOGLE “HOW TO BE A DATA SCIENTIST” AND EVERY SINGLE GARBAGE TDS ARTICLE WILL TELL YOU SQL AND PYTHON/R. YOU’RE LUCKY YOU DON’T HAVE TO DEAL WITH THE JOY OF SEGMENTATION FAULTS TO RUN A SIMPLE LINEAR REGRESSION.

SHOULD I LEARN PYTHON OR R?

Both. Python is more widely used and tends to be more general purpose than R. R is better at statistics and data analysis, but is a bit more niche. Take your pick to start, but ultimately you’re gonna want to learn both you slacker.

SHOULD I MAKE A PORTFOLIO?

Yes. And don’t put some BS housing price regression, iris classification, or titanic survival project on it either. Next question.

WHAT SHOULD I DO AS A PROJECT?

IDK, what are you interested in? If you say twitter sentiment stock market prediction, go sit in the corner and think about what you just said. Every half brained first year student who can pip install sklearn and do model.fit() has tried unsuccessfully to predict the stock market. The efficient market hypothesis is a thing for a reason. There are literally millions of other free datasets out there, and you have one of the most powerful search engines at your fingertips to go find them. Pick something you’re interested in, find some data, and analyze it.

DO I NEED TO BE GOOD WITH PEOPLE? (courtesy of /u/bikeskata)

Yes! First, when you’re applying, no one wants to work with a weirdo. You should be able to have a basic conversation with people, and they shouldn’t come away from it thinking you’ll follow them home and wear their skin as a suit. Once you get a job, you’ll be interacting with colleagues, and you’ll need them to care about your analysis. Presumably, there are non-technical people making decisions you’ll need to bring in as well. If you can’t explain to a moderately intelligent person why they should care about the thing that took you 3 days (and cost $$$ in cloud computing costs), you probably won’t have your position for long. You don’t need to be the life of the party, but you should be pleasant to be around.

Credit: u/save_the_panda_bears

Top 75 Data Science YouTube Channels

1- Alex The Analyst
2- Tina Huang
3- Abhishek Thakur
4- Michael Galarnyk
5- How to Get an Analytics Job
6- Ken Jee
7- Data Professor
8- Nicholas Renotte
9- KNN Clips
10- Ternary Data: Data Engineering Consulting
11- AI Basics with Mike
12- Matt Brattin
13- Chronic Coder
14- Intersnacktional
15- Jenny Tumay
16- Coding Professor
17- DataTalksClub
18- Ken’s Nearest Neighbors Podcast
19- Karolina Sowinska
20- Lander Analytics
21- Lights OnData
22- CodeEmporium
23- Andreas Mueller
24- Nate at StrataScratch
25- Kaggle
26- Data Interview Pro
27- Jordan Harrod
28- Leo Isikdogan
29- Jacob Amaral
30- Bukola
31- AndrewMoMoney
32- Andreas Kretz
33- Python Programmer
34- Machine Learning with Phil
35- Art of Visualization
36- Machine Learning University

Data Sciences – Top 400 Open Datasets – Data Visualization – Data Analytics – Big Data – Data Lakes


Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains.

In this blog, we provide popular open source and public datasets, along with resources on data visualization, data analytics, big data, and data lakes.

Researchers from IBM, MIT and Harvard Announced The Release Of DARPA “Common Sense AI” Dataset Along With Two Machine Learning Models At ICML 2021

Building machines that can make decisions based on common sense is no easy feat. A machine must be able to do more than merely find patterns in data; it also needs a way of interpreting the intentions and beliefs behind people’s choices.


At the 2021 International Conference on Machine Learning (ICML), researchers from IBM, MIT, and Harvard University came together to release a DARPA “Common Sense AI” dataset for benchmarking AI intuition. They are also releasing two machine learning models that represent different approaches to the problem and that rely on testing techniques psychologists use to study infants’ behavior, in order to accelerate the development of AI exhibiting common sense.

Source – Summary – Paper – IBM Blog

100 million protein structures Dataset by DeepMind

DeepMind creates a ‘transformative’ map of human proteins drawn by AI. By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works.”


Here’s a good article about this topic

Google Dataset Search

Google Dataset Search

Malware traffic dataset

Comprises 1,914,081 records created from all malware-traffic-analysis.net PCAP files, from 2013 to 2021. The logs are generated using Suricata and Zeek.

Originator: ali_alwashali

Percent of “foreign-born” population in each US and EU state or country.

For the EU, “foreign-born” means being born outside of any of the EU countries. For the US, “foreign-born” means being born outside of any US state 🇺🇸🇪🇺

Author: Here



Examples of “foreign-born” in this context:

  • Person born in Spain and living in France is NOT “foreign-born”

  • Person born in Turkey and living in France is “foreign-born”

  • Person born in Florida and living in Texas is NOT “foreign-born”

  • Person born in Mexico and living in Texas is “foreign-born”

  • Person born in Florida and living in France is “foreign-born”

  • Person born in France and living in Florida is “foreign-born”

🇺🇸🇪🇺🗺️

Note: Poland, Ireland, Germany, Greece, Cyprus, Malta, and Portugal use Eurostat 2010 migration data, and Croatia has no data at all.

Link1

Link2

Link3

Tools: MS Office

Source: Here

35% of “entry-level” jobs on LinkedIn require 3+ years of experience

r/dataisbeautiful - [OC] 35% of "entry-level" jobs on LinkedIn require 3+ years of experience

Source: LinkedIn data  (see original post)

Tool: Photoshop from my colleague

 

Latest complete Netflix movie dataset

Created from 4 APIs. 11K+ rows and 30+ attributes of Netflix (Ratings, earnings, actors, language, availability, movie trailers, and many more)

Dataset on Kaggle.

Explore this dataset using FlixGem.com (this dataset is powering this webapp)

Dataset on Google Sheets.


Common Crawl

A corpus of web crawl data composed of over 50 billion web pages. The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata and text extractions.

AWS CLI Access (No AWS account required)

aws s3 ls s3://commoncrawl/ --no-sign-request

s3://commoncrawl/crawl-data/CC-MAIN-2021-17 – April 2021

 Dataset on protein prices

Data on Primary Commodity Prices are updated monthly based on the IMF’s Primary Commodity Price System.

Excel Database

 CPOST dataset on suicide attacks over four decades

The University of Chicago Project on Security and Threats presents the updated and expanded Database on Suicide Attacks (DSAT), which now links to Uppsala Conflict Data Program data on armed conflicts and includes a new dataset measuring the alliance and rivalry relationships among militant groups with connections to suicide attack groups. Access it here.


Credit Card Dataset – Survey of Consumer Finances (SCF) Combined Extract Data 1989-2019

 You can do a lot of aggregated analysis in a pretty straightforward way there.

Drone imagery with annotations for small object detection and tracking dataset

11 TB dataset of drone imagery with annotations for small object detection and tracking

Download and more information are available here

Dataset License: CDLA-Sharing-1.0

Helper scripts for accessing the dataset: DATASET.md

Dataset Exploration: Colab

NOAA High-Resolution Rapid Refresh (HRRR) Model

The HRRR is a NOAA real-time 3-km resolution, hourly updated, cloud-resolving, convection-allowing atmospheric model, initialized by 3km grids with 3km radar assimilation. Radar data is assimilated in the HRRR every 15 min over a 1-h period adding further detail to that provided by the hourly data assimilation from the 13km radar-enhanced Rapid Refresh.

Registry of Open Data on AWS

This registry exists to help people discover and share datasets that are available via AWS resources. Learn more about sharing data on AWS.

See all usage examples for datasets listed in this registry.

See datasets from Digital Earth Africa, Facebook Data for Good, NASA Space Act Agreement, NIH STRIDES, NOAA Big Data Program, Space Telescope Science Institute, and Amazon Sustainability Data Initiative.


Textbook Question Answering (TQA)

1,076 textbook lessons, 26,260 questions, 6229 images

Documentation: allenai.org/data/tqa

Download

Harmonized Cancer Datasets: Genomic Data Commons Data Portal

The GDC Data Portal is a robust data-driven platform that allows cancer researchers and bioinformaticians to search and download cancer data for analysis.

Genomic Data Commons Data Portal

The Cancer Genome Atlas

The Cancer Genome Atlas (TCGA), a collaboration between the National Cancer Institute (NCI) and National Human Genome Research Institute (NHGRI), aims to generate comprehensive, multi-dimensional maps of the key genomic changes in major types and subtypes of cancer.

AWS CLI Access (No AWS account required)

aws s3 ls s3://tcga-2-open/ --no-sign-request

Therapeutically Applicable Research to Generate Effective Treatments (TARGET)

The Therapeutically Applicable Research to Generate Effective Treatments (TARGET) program applies a comprehensive genomic approach to determine molecular changes that drive childhood cancers. The goal of the program is to use data to guide the development of effective, less toxic therapies. TARGET is organized into a collaborative network of disease-specific project teams.  TARGET projects provide comprehensive molecular characterization to determine the genetic changes that drive the initiation and progression of childhood cancers. The dataset contains open Clinical Supplement, Biospecimen Supplement, RNA-Seq Gene Expression Quantification, miRNA-Seq Isoform Expression Quantification, miRNA-Seq miRNA Expression Quantification data from Genomic Data Commons (GDC), and open data from GDC Legacy Archive. Access it here.

Genome Aggregation Database (gnomAD)

The Genome Aggregation Database (gnomAD) is a resource developed by an international coalition of investigators that aggregates and harmonizes both exome and genome data from a wide range of large-scale human sequencing projects. The summary data provided here are released for the benefit of the wider scientific community without restriction on use. Downloads

SQuAD (Stanford Question Answering Dataset)

Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. Access it here.

PubMed Diabetes Dataset

The PubMed Diabetes dataset consists of 19,717 scientific publications from the PubMed database pertaining to diabetes, classified into one of three classes. The citation network consists of 44,338 links. Each publication in the dataset is described by a TF-IDF weighted word vector from a dictionary of 500 unique words. The README file in the dataset provides more details.

Download Link

Drug-Target Interaction Dataset

This dataset contains interactions between drugs and targets collected from DrugBank, KEGG Drug, DCDB, and Matador. It was originally collected by Perlman et al. It contains 315 drugs, 250 targets, 1,306 drug-target interactions, 5 types of drug-drug similarities, and 3 types of target-target similarities. Drug-drug similarities include Chemical-based, Ligand-based, Expression-based, Side-effect-based, and Annotation-based similarities. Target-target similarities include Sequence-based, Protein-protein interaction network-based, and Gene Ontology-based similarities. The original task on the dataset is to predict new interactions between drugs and targets based on different types of similarities in the network. Download link

Pharmacogenomics Datasets

PharmGKB data and knowledge are available as downloads. It is often critical to check with their curators at feedback@pharmgkb.org before embarking on a large project using these data, to be sure that the files and data they make available are being interpreted correctly. PharmGKB generally does NOT need to be a co-author on such analyses; they just want to make sure that there is a correct understanding of their data before lots of resources are spent.

Pancreatic Cancer Organoid Profiling

The dataset contains open RNA-Seq Gene Expression Quantification data and controlled WGS/WXS/RNA-Seq Aligned Reads, WXS Annotated Somatic Mutation, WXS Raw Somatic Mutation, and RNA-Seq Splice Junction Quantification. Documentation

AWS CLI Access (No AWS account required)

aws s3 ls s3://gdc-organoid-pancreatic-phs001611-2-open/ --no-sign-request

Africa Soil Information Service (AfSIS) Soil Chemistry

This dataset contains soil infrared spectral data and paired soil property reference measurements for georeferenced soil samples that were collected through the Africa Soil Information Service (AfSIS) project, which lasted from 2009 through 2018. Documentation

AWS CLI Access (No AWS account required)

aws s3 ls s3://afsis/ --no-sign-request

Dataset for Affective States in E-Environments

DAiSEE is the first multi-label video classification dataset, comprising 9,068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration “in the wild”. The dataset has four levels of labels, namely very low, low, high, and very high for each of the affective states, which are crowd annotated and correlated with a gold standard annotation created using a team of expert psychologists. Download it here.

NatureServe Explorer Dataset

NatureServe Explorer provides conservation status, taxonomy, distribution, and life history information for more than 95,000 plants and animals in the United States and Canada, and more than 10,000 vegetation communities and ecological systems in the Western Hemisphere.

The data available through NatureServe Explorer represents data managed in the NatureServe Central Databases. These databases are dynamic, being continually enhanced and refined through the input of hundreds of natural heritage program scientists and other collaborators. NatureServe Explorer is updated from these central databases to reflect information from new field surveys, the latest taxonomic treatments and other scientific publications, and new conservation status assessments. Explore Data here

Flight Records in the US

Airline On-Time Performance and Causes of Flight Delays – On_Time Data.

This database contains scheduled and actual departure and arrival times, and the reason for delay, reported by certified U.S. air carriers that account for at least one percent of domestic scheduled passenger revenues. The data is collected by the Office of Airline Information, Bureau of Transportation Statistics (BTS).


FlightAware.com has data but you need to pay for a full dataset.

The anyflights package supplies a set of functions to generate air travel data (and data packages!) similar to nycflights13. With a user-defined year and airport, the anyflights function will grab data on:

  • flights: all flights that departed a given airport in a given year and month
  • weather: hourly meteorological data for a given airport in a given year and month
  • airports: airport names, FAA codes, and locations
  • airlines: translation between two letter carrier (airline) codes and names
  • planes: construction information about each plane found in flights

Airline On-Time Statistics and Delay Causes

The U.S. Department of Transportation’s (DOT) Bureau of Transportation Statistics (BTS) tracks the on-time performance of domestic flights operated by large air carriers. Summary information on the number of on-time, delayed, canceled and diverted flights appears in DOT’s monthly Air Travel Consumer Report, published about 30 days after the month’s end, as well as in summary tables posted on this website. BTS began collecting details on the causes of flight delays in June 2003. Summary statistics and raw data are made available to the public at the time the Air Travel Consumer Report is released. Access it here

Worldwide flight data

Open flights: As of January 2017, the OpenFlights Airports Database contains over 10,000 airports, train stations and ferry terminals spanning the globe

Download: airports.dat (Airports only, high quality)

Download: airports-extended.dat (Airports, train stations and ferry terminals, including user contributions)

Bureau of Transportation:

Flightera.net seems to have a lot of good data for free. It has in-depth data on flights and doesn’t seem limited by date. I can’t speak on the validity of the data though.

flightradar24.com has lots of data, also historically, they might be willing to help you get it in a nice format.

 

2019 Crime statistics in the USA

Dataset with arrests in the US by race and by state. Download the Excel file here

Yahoo Answers DataSets

Yahoo is shutting down in 2021. This is a Yahoo Answers dataset (300MB gzip) that is fairly extensive, from 2015, with about 1.4m rows. This dataset has the best questions and answers, I mean all the answers, including the most insane awful answers and the worst questions people put together. Download it here.

Another option here: According to the tracker, there are 77M done, 20M out(?), and 40M to go:

Yahoo Answer on Wikipedia

History of America 1400-2021

Sources:

os-connect.com

ourworldindata.org/

GGDC

GlobalFirePower

Persian words phonetics dataset

This is a dataset of about 55K Persian words with their phonetics. Each word is in a line and separated from its phonetic by a tab. Download it here

Historical Air Quality Dataset

Air Quality Data Collected at Outdoor Monitors Across the US. This is a BigQuery Dataset. There are no files to download, but you can query it through Kernels using the BigQuery API. The AQS Data Mart is a database containing all of the information from AQS. It has every measured value the EPA has collected via the national ambient air monitoring program. It also includes the associated aggregate values calculated by EPA (8-hour, daily, annual, etc.). The AQS Data Mart is a copy of AQS made once per week and made accessible to the public through web-based applications. The intended users of the Data Mart are air quality data analysts in the regulatory, academic, and health research communities. It is intended for those who need to download large volumes of detailed technical data stored at EPA and does not provide any interactive analytical tools. It serves as the back-end database for several Agency interactive tools that could not fully function without it: AirData, AirCompare, The Remote Sensing Information Gateway, the Map Monitoring Sites KML page, etc.
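Since there are no files to download, access goes through BigQuery SQL. A minimal sketch is below; the dataset and column names are assumptions based on the public EPA listing, so verify the exact names in the BigQuery console or the Kaggle dataset page before running:

-- Table and column names are assumed for illustration only.
SELECT state_name,
       AVG(arithmetic_mean) AS avg_reading
FROM `bigquery-public-data.epa_historical_air_quality.air_quality_annual_summary`
GROUP BY state_name
ORDER BY avg_reading DESC
LIMIT 10;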

Stack Exchange Dataset

Stack Exchange

Awesome Public Datasets

This is a list of topic-centric public data sources of high quality, collected and tidied from blogs, answers, and user responses. Most of the data sets listed below are free; however, some are not.

Agriculture Dataset

Biology Dataset

Climate and Weather Dataset

Complex Network Dataset

Computer Network Dataset

CyberSecurity Dataset

Data Challenges Dataset

Earth Science Dataset

Economics Dataset

Education Dataset

Energy Dataset

Entertainment Dataset

Finance Dataset

GIS Dataset

Government Dataset

Healthcare Dataset

Image Processing Dataset

Machine Learning Dataset

Museums Dataset

Natural Language Dataset

Neuroscience Dataset

Physics Dataset

Prostate Cancer Dataset

Psychology and Cognition Dataset

Public Domains Dataset

Search Engines Dataset

Social Networks Dataset

Social Sciences Dataset

Software Dataset

Sports Dataset

Time Series Dataset

Transportation Dataset

eSports Dataset

Complementary Collections

Categorized list of public datasets: Sindre Sorhus /awesome List

Platforms

  • Node.js – Async non-blocking event-driven JavaScript runtime built on Chrome’s V8 JavaScript engine.
  • Frontend Development
  • iOS – Mobile operating system for Apple phones and tablets.
  • Android – Mobile operating system developed by Google.
  • IoT & Hybrid Apps
  • Electron – Cross-platform native desktop apps using JavaScript/HTML/CSS.
  • Cordova – JavaScript API for hybrid apps.
  • React Native – JavaScript framework for writing natively rendering mobile apps for iOS and Android.
  • Xamarin – Mobile app development IDE, testing, and distribution.
  • Linux
    • Containers
    • eBPF – Virtual machine that allows you to write more efficient and powerful tracing and monitoring for Linux systems.
    • Arch-based Projects – Linux distributions and projects based on Arch Linux.
  • macOS – Operating system for Apple’s Mac computers.
  • watchOS – Operating system for the Apple Watch.
  • JVM
  • Salesforce
  • Amazon Web Services
  • Windows
  • IPFS – P2P hypermedia protocol.
  • Fuse – Mobile development tools.
  • Heroku – Cloud platform as a service.
  • Raspberry Pi – Credit card-sized computer aimed at teaching kids programming, but capable of a lot more.
  • Qt – Cross-platform GUI app framework.
  • WebExtensions – Cross-browser extension system.
  • RubyMotion – Write cross-platform native apps for iOS, Android, macOS, tvOS, and watchOS in Ruby.
  • Smart TV – Create apps for different TV platforms.
  • GNOME – Simple and distraction-free desktop environment for Linux.
  • KDE – A free software community dedicated to creating an open and user-friendly computing experience.
  • .NET
    • Core
    • Roslyn – Open-source compilers and code analysis APIs for C# and VB.NET languages.
  • Amazon Alexa – Virtual home assistant.
  • DigitalOcean – Cloud computing platform designed for developers.
  • Flutter – Google’s mobile SDK for building native iOS and Android apps from a single codebase written in Dart.
  • Home Assistant – Open source home automation that puts local control and privacy first.
  • IBM Cloud – Cloud platform for developers and companies.
  • Firebase – App development platform built on Google Cloud Platform.
  • Robot Operating System 2.0 – Set of software libraries and tools that help you build robot apps.
  • Adafruit IO – Visualize and store data from any device.
  • Cloudflare – CDN, DNS, DDoS protection, and security for your site.
  • Actions on Google – Developer platform for Google Assistant.
  • ESP – Low-cost microcontrollers with WiFi and broad IoT applications.
  • Deno – A secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
  • DOS – Operating system for x86-based personal computers that was popular during the 1980s and early 1990s.
  • Nix – Package manager for Linux and other Unix systems that makes package management reliable and reproducible.

Programming Languages

  • JavaScript
  • Swift – Apple’s compiled programming language that is secure, modern, programmer-friendly, and fast.
  • Python – General-purpose programming language designed for readability.
    • Asyncio – Asynchronous I/O in Python 3.
    • Scientific Audio – Scientific research in audio/music.
    • CircuitPython – A version of Python for microcontrollers.
    • Data Science – Data analysis and machine learning.
    • Typing – Optional static typing for Python.
    • MicroPython – A lean and efficient implementation of Python 3 for microcontrollers.
  • Rust
  • Haskell
  • PureScript
  • Go
  • Scala
    • Scala Native – Optimizing ahead-of-time compiler for Scala based on LLVM.
  • Ruby
  • Clojure
  • ClojureScript
  • Elixir
  • Elm
  • Erlang
  • Julia – High-level dynamic programming language designed to address the needs of high-performance numerical analysis and computational science.
  • Lua
  • C
  • C/C++ – General-purpose language with a bias toward system programming and embedded, resource-constrained software.
  • R – Functional programming language and environment for statistical computing and graphics.
  • D
  • Common Lisp – Powerful dynamic multiparadigm language that facilitates iterative and interactive development.
  • Perl
  • Groovy
  • Dart
  • Java – Popular secure object-oriented language designed for flexibility to “write once, run anywhere”.
  • Kotlin
  • OCaml
  • ColdFusion
  • Fortran
  • PHP – Server-side scripting language.
  • Pascal
  • AutoHotkey
  • AutoIt
  • Crystal
  • Frege – Haskell for the JVM.
  • CMake – Build, test, and package software.
  • ActionScript 3 – Object-oriented language targeting Adobe AIR.
  • Eta – Functional programming language for the JVM.
  • Idris – General purpose pure functional programming language with dependent types influenced by Haskell and ML.
  • Ada/SPARK – Modern programming language designed for large, long-lived apps where reliability and efficiency are essential.
  • Q# – Domain-specific programming language used for expressing quantum algorithms.
  • Imba – Programming language inspired by Ruby and Python and compiles to performant JavaScript.
  • Vala – Programming language designed to take full advantage of the GLib and GNOME ecosystems, while preserving the speed of C code.
  • Coq – Formal language and environment for programming and specification which facilitates interactive development of machine-checked proofs.
  • V – Simple, fast, safe, compiled language for developing maintainable software.

Front-End Development

Back-End Development

  • Flask – Python framework.
  • Docker
  • Vagrant – Automation virtual machine environment.
  • Pyramid – Python framework.
  • Play1 Framework
  • CakePHP – PHP framework.
  • Symfony – PHP framework.
  • Laravel – PHP framework.
    • Education
    • TALL Stack – Full-stack development solution featuring libraries built by the Laravel community.
  • Rails – Web app framework for Ruby.
  • Phalcon – PHP framework.
  • Useful .htaccess Snippets
  • nginx – Web server.
  • Dropwizard – Java framework.
  • Kubernetes – Open-source platform that automates Linux container operations.
  • Lumen – PHP micro-framework.
  • Serverless Framework – Serverless computing and serverless architectures.
  • Apache Wicket – Java web app framework.
  • Vert.x – Toolkit for building reactive apps on the JVM.
  • Terraform – Tool for building, changing, and versioning infrastructure.
  • Vapor – Server-side development in Swift.
  • Dash – Python web app framework.
  • FastAPI – Python web app framework.
  • CDK – Open-source software development framework for defining cloud infrastructure in code.
  • IAM – User accounts, authentication and authorization.
  • Chalice – Python framework for serverless app development on AWS Lambda.

Computer Science

Big Data

  • Big Data
  • Public Datasets
  • Hadoop – Framework for distributed storage and processing of very large data sets.
  • Data Engineering
  • Streaming
  • Apache Spark – Unified engine for large-scale data processing.
  • Qlik – Business intelligence platform for data visualization, analytics, and reporting apps.
  • Splunk – Platform for searching, monitoring, and analyzing structured and unstructured machine-generated big data in real-time.

Theory

Books

Editors

Gaming

Development Environment

Entertainment

Databases

  • Database
  • MySQL
  • SQLAlchemy
  • InfluxDB
  • Neo4j
  • MongoDB – NoSQL database.
  • RethinkDB
  • TinkerPop – Graph computing framework.
  • PostgreSQL – Object-relational database.
  • CouchDB – Document-oriented NoSQL database.
  • HBase – Distributed, scalable, big data store.
  • NoSQL Guides – Help on using non-relational, distributed, open-source, and horizontally scalable databases.
  • Contexture – Abstracts queries/filters and results/aggregations from different backing data stores like ElasticSearch and MongoDB.
  • Database Tools – Everything that makes working with databases easier.
  • Grakn – Logical database to organize large and complex networks of data as one body of knowledge.

Media

Learn

Security

Content Management Systems

  • Umbraco
  • Refinery CMS – Ruby on Rails CMS.
  • Wagtail – Django CMS focused on flexibility and user experience.
  • Textpattern – Lightweight PHP-based CMS.
  • Drupal – Extensible PHP-based CMS.
  • Craft CMS – Content-first CMS.
  • Sitecore – .NET digital marketing platform that combines CMS with tools for managing multiple websites.
  • Silverstripe CMS – PHP MVC framework that serves as a classic or headless CMS.

Hardware

Business

Work

Networking

Decentralized Systems

  • Bitcoin – Bitcoin services and tools for software developers.
  • Ripple – Open source distributed settlement network.
  • Non-Financial Blockchain – Non-financial blockchain applications.
  • Mastodon – Open source decentralized microblogging network.
  • Ethereum – Distributed computing platform for smart contract development.
  • Blockchain AI – Blockchain projects for artificial intelligence and machine learning.
  • EOSIO – A decentralized operating system supporting industrial-scale apps.
  • Corda – Open source blockchain platform designed for business.
  • Waves – Open source blockchain platform and development toolset for Web 3.0 apps and decentralized solutions.
  • Substrate – Framework for writing scalable, upgradeable blockchains in Rust.

Higher Education

  • Computational Neuroscience – A multidisciplinary science which uses computational approaches to study the nervous system.
  • Digital History – Computer-aided scientific investigation of history.
  • Scientific Writing – Distraction-free scientific writing with Markdown, reStructuredText and Jupyter notebooks.

Events

Testing

  • Testing – Software testing.
  • Visual Regression Testing – Ensures changes did not break the functionality or style.
  • Selenium – Open-source browser automation framework and ecosystem.
  • Appium – Test automation tool for apps.
  • TAP – Test Anything Protocol.
  • JMeter – Load testing and performance measurement tool.
  • k6 – Open-source, developer-centric performance monitoring and load testing solution.
  • Playwright – Node.js library to automate Chromium, Firefox and WebKit with a single API.
  • Quality Assurance Roadmap – How to start & build a career in software testing.

Miscellaneous

Related

US Department of Education CRDC Dataset

The US Department of Ed has a dataset called the CRDC that collects data from all the public schools in the US and has demographic, academic, financial and all sorts of other fun data points. They also have corollary datasets that use the same identifier, an expansion pack if you will. It comes out every 2-3 years. Access it here

Nasa Dataset: sequencing data from bacteria before and after being taken to space

NASA has some sequencing data from bacteria before and after being taken to space, to look at genetic differences caused by lack of gravity, radiation and others. Very fun if you want to try your hand at some bio data science. Access it here.

All Trump’s twitter insults from 2015 to 2021 in CSV.

Extracted from the NYT story: here

Data is plural

Data is Plural is a really good newsletter published by Jeremy Singer-Vine. The datasets are very random, but super interesting. Access it here.

Global terrorism database

 Huge list of terrorism incidents from inside the US and abroad. Each entry has date and location of the incident, motivations, whether people or property were lost, the size of the attack, type of attack, etc. Access it here

Terrorist Attacks Dataset: This dataset consists of 1293 terrorist attacks each assigned one of 6 labels indicating the type of the attack. Each attack is described by a 0/1-valued vector of attributes whose entries indicate the absence/presence of a feature. There are a total of 106 distinct features. The files in the dataset can be used to create two distinct graphs. The README file in the dataset provides more details. Download Link:

Terrorists: This dataset contains information about terrorists and their relationships. This dataset was designed for classification experiments aimed at classifying the relationships among terrorists. The dataset contains 851 relationships, each described by a 0/1-valued vector of attributes where each entry indicates the absence/presence of a feature. There are a total of 1224 distinct features. Each relationship can be assigned one or more labels out of a maximum of four labels making this dataset suitable for multi-label classification tasks. The README file provides more details. Download Link

The dolphin social network

This network dataset is in the category of Social Networks: a social network of bottlenose dolphins. The dataset contains a list of all of the links, where a link represents frequent associations between dolphins. Access it here

Dataset of 200,000 jokes

There are about 208 000 jokes in this database scraped from three sources.

Access it here:

The Million Song Dataset

The Million Song Dataset is a freely-available collection of audio features and metadata for a million contemporary popular music tracks.

Its purposes are:

  • To encourage research on algorithms that scale to commercial sizes
  • To provide a reference dataset for evaluating research
  • As a shortcut alternative to creating a large dataset with APIs (e.g. The Echo Nest’s)
  • To help new researchers get started in the MIR field

Cornell University’s eBird dataset

Decades of observations of birds all around the world, truly an impressive way to leverage citizen science. Access it here.

UFO Report Dataset

NUFORC geolocated and time-standardized UFO reports covering close to a century of data; 80,000-plus reports. Access it here

CDC’s Trend Drug Data

The CDC has a public database called NAMCS/NHAMCS that allows you to trend drug data. It has a lot of other data points so it can be used for a variety of other reasons. Access it here.

Health and Retirement study: Public Survey data

A listing of publicly available biennial, off-year, and cross-year data products.

Example: COVID-19 Data

Year: 2020
Product: 2020 HRS COVID-19 Project

RAND HRS Data

HRS data products produced by the RAND Center for the Study of Aging.

Gateway Harmonized Data

HRS data products produced by the USC Program on Global Aging, Health, and Policy.

Contributed and Replication Data

Data products (unsupported by the HRS) provided by researchers sharing their work.

Restricted/Sensitive Data

Cognition Data

A summary of HRS cognition data, including the new Harmonized Cognition Assessment Protocol (HCAP).

Biomarker and Health Data

Sensitive health data files are available from the public data portal after a supplemental agreement is signed.

Restricted Data

HRS restricted data files require a detailed application process, and are available only through remote virtual desktop or encrypted physical media.

Administrative Linkages

Links HRS data with Medicare and Social Security.

Genetic Data

Genetic data products derived from 20,000 genotyped HRS respondents.

The Quick Draw Dataset

The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. Access it here.

Air Quality Dataset

The AirNow API replaces the previous AirNow Gateway web services. It includes file outputs and RSS data feeds. AirNow Gateway users can use their existing login information to access the new AirNow API web pages and web services. Access to the AirNow API is generally available to the public, and new accounts can be acquired via the Log In page

UK Water Industry Chemical Investigations dataset

Search and extract the measurements from 600 Wastewater Treatment Sites owned and operated by UK Water Companies and part of the Chemical Investigations Programme (CIP2).

M3 and M4 Dataset Time Series Data

The 3003 time series of the M3-Competition.

The M4 competition, which is a continuation of the Makridakis Competitions for forecasting, was conducted in 2018. This competition includes the prediction of both point forecasts and prediction intervals.

Protein Data Bank (PDB)

The Protein Data Bank is used by Google’s deep-learning program for determining the 3D shapes of proteins, which stands to transform biology, say scientists. Access it here.

Dataset of Games

In computer science, Artificial Intelligence (AI) is intelligence demonstrated by machines. By one definition, AI research is the study of “intelligent agents”: any device that perceives its environment and takes actions to achieve its goals (Russell et al., 2016).

Data Mining (DM) is the process of discovering patterns in data sets (or datasets) involving methods of machine learning, statistics, and database systems; DM focuses on extracting information from datasets (Han, 2011).

This repository serves as a guide for anyone who wants to work with Artificial Intelligence or Data Mining applied in digital games! Here you will find a series of datasets, tools and materials available to build your application or dataset. Access it here.

DonorsChoose.org Application Screening DataSet

Help predict whether teachers’ project proposals are accepted.

Dataset of all the squirrels in Central Park

The Squirrel Census is a multimedia science, design, and storytelling project focusing on the Eastern gray (Sciurus carolinensis). They count squirrels and present their findings to the public.

Google BigQuery Public Datasets

BigQuery public datasets are made available without any restrictions to all Google Cloud users. Google pays for the storage of these datasets. You can use them to learn how to work with BigQuery or even build your application on top of them, exactly as we’re going to do.
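For example, a minimal BigQuery Standard SQL query against one of the well-known public sample tables (the Shakespeare word-count sample, used here only as an illustration):

SELECT corpus,
       SUM(word_count) AS total_words
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY corpus
ORDER BY total_words DESC
LIMIT 10;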

IMDb Dataset

IMDb dataset importer – loads into a Marten DB document store. It imports the public datasets into a database, and provides repositories for querying. The total imported size is about 40 million rows, and 14 gigabytes on disk!

PHOnA: A Public Dataset of Measured Headphone Transfer Functions

A dataset of measured headphone transfer functions (HpTFs), the Princeton Headphone Open Archive (PHOnA), is presented. Extensive studies of HpTFs have been conducted for the past twenty years, each requiring a separate set of measurements, but this data has not yet been publicly shared. PHOnA aggregates HpTFs from different laboratories, including measurements for multiple different headphones, subjects, and repositionings of headphones for each subject. The dataset uses the spatially oriented format for acoustics (SOFA), and SOFA conventions are proposed for efficiently storing HpTFs. PHOnA is intended to provide a foundation for machine learning techniques applied to HpTF equalization. This shared data will allow optimization of equalization algorithms to provide more universal solutions to perceptually transparent headphone reproduction. Access it here.

Sports Data Set

Provides both basic and sabermetric statistics and resources for sports fans everywhere. Access here

Kaggle DataSets

Explore, analyze, and share quality data here

Coronavirus Datasets

Spreadsheets and Datasets:

Natural History Museum in London

The Natural History Museum in London has 80 million items (and counting!) in its collections, from the tiniest specks of stardust to the largest animal that ever lived – the blue whale. 

The Digital Collections Programme is a project to digitise these specimens and give the global scientific community access to unrivalled historical, geographic and taxonomic specimen data gathered in the last 250 years. Mobilising this data can facilitate research into some of the most pressing scientific and societal challenges.

Digitising involves creating a digital record of a specimen which can consist of all types of information such as images, and geographical and historical information about where and when a specimen was collected. The possibilities for digitisation are quite literally limitless – as technology evolves, so do possible uses and analyses of the collections. We are currently exploring how machine learning and automation can help us capture information from specimen images and their labels.

With such a wide variety of specimens, digitising looks different for every single collection. How we digitise a fly specimen on a microscope slide is very different to how we might digitise a bat in a spirit jar! We develop new workflows in response to the type of specimens we are dealing with. Sometimes we have to get really creative, and have even published on workflows which have involved using pieces of LEGO to hold specimens in place while we are imaging them.

Mobilising this data and making it open access is at the heart of the project. All of the specimen data is released on our Data Portal, and we also feed the data into international databases such as GBIF.

TSA Throughput Dataset (alternate source)

The TSA is publishing more and more data via its Freedom of Information Act (FOIA) Reading Room. This project on GitHub, tsathroughput, contains the source for extracting the information from the .PDF files and converting them to JSON and CSV files.

The /data folder contains the source .PDFs going back to 2018 while the /data/raw/tsa/throughput folder contains .json files.

Data Planet

The largest repository of standardized and structured statistical data

statisticaldatasets.data-planet.com/

Chess datasets

3.5 Million Chess Games

ML Dataset to practice methods of regression

Center for Machine Learning and Intelligent Systems

585 Data Sets

 

ManyTypes4Py: A benchmark Python Dataset for Machine Learning-Based Type Inference

  • The dataset was gathered on Sep. 17th, 2020 from GitHub.
  • It has more than 5.2K Python repositories and 4.2M type annotations.
  • Use it to train an ML-based type inference model for Python.
  • Access it here

Quadrature magnetoresistance in overdoped cuprates

Measurements of the normal (i.e. non-superconducting) state magnetoresistance (change in resistance with magnetic field) in several single crystalline samples of copper-oxide high-temperature superconductors. The measurements were performed predominantly at the High Field Magnet Laboratory (HFML) in Nijmegen, the Netherlands, and the Pulsed Magnetic Field Facility (LNCMI-T) in Toulouse, France. Complete Zip Download

The UMA-SAR Dataset: Multimodal data collection from a ground vehicle during outdoor disaster response training exercises

Collection of multimodal raw data captured from a manned all-terrain vehicle in the course of two realistic outdoor search and rescue (SAR) exercises for actual emergency responders conducted in Málaga (Spain) in 2018 and 2019: the UMA-SAR dataset. Full Dataset.

Child Mortality from Malaria

Child mortality numbers caused by malaria by country

Number of deaths of infants, neonatal, and children up to 4 years old caused by malaria by country from 2000 to 2015. Originator: World Health Organization

Child-Mortality-Numbers-by-Malaria-2015

Quora Question Pairs at Data.world

The dataset  will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair. Access it here.

MIMIC Critical Care Database

MIMIC is an openly available dataset developed by the MIT Lab for Computational Physiology, comprising deidentified health data associated with ~60,000 intensive care unit admissions. It includes demographics, vital signs, laboratory tests, medications, and more. Access it here.

Data.Gov: The home of the U.S. Government’s open data

Here you will find data, tools, and resources to conduct research, develop web and mobile applications, design data visualizations, and more. Search over 280,000 datasets.

Tidy Tuesday Dataset

TidyTuesday is built around open datasets that are found in the “wild” or submitted as Issues on our GitHub.

US Census Bureau: QuickFacts Dataset

QuickFacts provides statistics for all states and counties, and for cities and towns with a population of 5,000 or more.

Classical Abstract Art Dataset

Art that does not attempt to represent an accurate depiction of a visual reality but instead uses shapes, colours, forms and gestural marks to achieve its effect.

5000+ classical abstract artworks here, by real artists, with annotations. You can download them in very high resolution; however, you would have to crawl them first with this scraper.

Interactive map of indigenous people around the world

Native-Land.ca is a website run by the nonprofit organization Native Land Digital. Access it here.

Data Visualization: A Wordcloud for each of the Six Largest Religions and their Religious Texts (Judaism, Christianity, and Islam; Hinduism, Buddhism, and Sikhism)

Highest altitude humans have been each year since 1961

DataOhio

200+ public datasets, including COVID data. Access it here.

Ohio Data, Ohio Insights. The DataOhio catalog is a single source for the most critical and relevant datasets from state agencies and entities.

data.ohio.gov/wps/

National Household Travel Survey (US)

Conducted by the Federal Highway Administration (FHWA), the NHTS is the authoritative source on the travel behavior of the American public. It is the only source of national data that allows one to analyze trends in personal and household travel. It includes daily non-commercial travel by all modes, including characteristics of the people traveling, their household, and their vehicles. Access it here.

National Travel Survey (UK)

Statistics and data about the National Travel Survey, based on a household survey to monitor trends in personal travel.

The survey collects information on how, why, when and where people travel as well as factors affecting travel (e.g. car availability and driving license holding).

National Travel Survey data tables UK

National Travel Survey (NTS)[Canada]

Monthly Railway Carloadings: Interactive Dashboard

ENTUR: NeTEx or GTFS datasets [Norway]

NeTEx is the official format for public transport data in Norway and is the most complete in terms of available data. GTFS is a downstream format with only a limited subset of the total data, but we generate datasets for it anyway since GTFS can be easier to use and has a wider distribution among international public transport solutions. GTFS sets come in “extended” and “basic” versions. Access here.

The Swedish National Forest Inventory

A subset of the field data collected on temporary NFI plots can be downloaded in Excel format from this web site. The file includes a Read_me sheet and a sheet with field data from temporary plots on forest land collected from 2007 to 2019. Note that plots located on boundaries (for example boundaries between forest stands, or different land use classes) are not included in the dataset. The dataset is primarily intended to be used as reference data and validation data in remote sensing applications. It cannot be used to derive estimates of totals or mean values for a geographic area of any size. Download the dataset here

Large data sets from finance and economics applicable in related fields studying the human condition

World Bank Data: Countries Data | Topics Data | Indicators Data | Catalog

US Federal Statistics

Boards of Governors of the Federal Reserve: Data Download Program

CIA: The world Factbook provides basic intelligence on the history, people, government, economy, energy, geography, environment, communications, transportation, military, terrorism, and transnational issues for 266 world entities.

Human Development Report: United Nations Development Programme – Public Data Explorer

Consumer Price Index: The Consumer Price Index (CPI) is a measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services. Indexes are available for the U.S. and various geographic areas. Average price data for select utility, automotive fuel, and food items are also available.

Gapminder.org: Unveiling the beauty of statistics for a fact-based world view. Watch everyday life in hundreds of homes on all income levels across the world, to counteract the media’s skewed selection of images of other places.

Our World in Data: International Trade

Research and data to make progress against the world’s largest problems: 3,139 charts across 297 topics, all free: open access and open source.

International Historical Statistics (by Brian Mitchell)

 
International Historical Statistics is a compendium of national and international socio-economic data from 1750 to 2010. Data are available in both Excel and PDF tabular formats. IHS is structured in three broad geographical divisions (Africa / Asia / Oceania; The Americas; Europe) and ten themes: Population and vital statistics; Labour force; Agriculture; Industry; External trade; Transport and communications; Finance; Commodity prices; Education; and National accounts. Access here

World Input-Output Database

World Input-Output Tables and underlying data, covering 43 countries, and a model for the rest of the world, for the period 2000-2014. Data for 56 sectors are classified according to the International Standard Industrial Classification revision 4 (ISIC Rev. 4).

  • Data: Real and PPP-adjusted GDP in US millions of dollars, national accounts (household consumption, investment, government consumption, exports and imports), exchange rates and population figures.
  • Geographical coverage: Countries around the world
  • Time span: from 1950-2011 (version 8.1)
  • Available at: Online

Correlates of War Bilateral Trade

COW seeks to facilitate the collection, dissemination, and use of accurate and reliable quantitative data in international relations. Key principles of the project include a commitment to standard scientific principles of replication, data reliability, documentation, review, and the transparency of data collection procedures

  • Data: Total national trade and bilateral trade flows between states. Total imports and exports of each country in current US millions of dollars and bilateral flows in current US millions of dollars
  • Geographical coverage: Single countries around the world
  • Time span: from 1870-2009
  • Available at: Online here
  • This data set is hosted by Katherine Barbieri, University of South Carolina, and Omar Keshk, Ohio State University.

World Bank Open Data – World Development Indicators

Free and open access to global development data. Access it here.

World Trade Organization – WTO

The WTO provides quantitative information in relation to economic and trade policy issues. Its data-bases and publications provide access to data on trade flows, tariffs, non-tariff measures (NTMs) and trade in value added.

  • Data: Many series on tariffs and trade flows
  • Geographical coverage: Countries around the world
  • Time span: Since 1948 for some series
  • Available at: Online here

SMOKA Science Archive

The Subaru-Mitaka-Okayama-Kiso Archive holds about 15 TB of astronomical data from facilities run by the National Astronomical Observatory of Japan. All data becomes publicly available after an embargo period of 12-24 months (to give the original observers time to publish their papers).

Graph Datasets

Multi-Domain Sentiment Dataset

The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from many product types (domains). Some domains (books and dvds) have hundreds of thousands of reviews. Others (musical instruments) have only a few hundred. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed. Access it here.

A Global Database of Society

Supported by Google Jigsaw, the GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, themes, sources, emotions, counts, quotes, images and events driving our global society every second of every day, creating a free open platform for computing on the entire world.

The Yahoo News Feed: Ratings and Classification Data

Dataset is 1.5 TB compressed, 13.5 TB uncompressed

Yahoo! Music User Ratings of Musical Artists, version 1.0 (423 MB)

This dataset represents a snapshot of the Yahoo! Music community’s preferences for various musical artists. The dataset contains over ten million ratings of musical artists given by Yahoo! Music users over the course of a one month period sometime prior to March 2004. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms. The dataset may serve as a testbed for matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 423 MB.
 

Yahoo! Movies User Ratings and Descriptive Content Information, v.1.0 (23 MB)

This dataset contains a small sample of the Yahoo! Movies community’s preferences for various movies, rated on a scale from A+ to F. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset also contains a large amount of descriptive information about many movies released prior to November 2003, including cast, crew, synopsis, genre, average ratings, awards, etc. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms, including hybrid content and collaborative filtering algorithms. The dataset may serve as a testbed for relational learning and data mining algorithms as well as matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 23 MB.
 

Yahoo News Video dataset, version 1.0 (645MB)

The dataset is a collection of 964 hours (22K videos) of news broadcast videos that appeared on Yahoo news website’s properties, e.g., World News, US News, Sports, Finance, and a mobile application during August 2017. The videos were either part of an article or displayed standalone in a news property. Many of the videos served in this platform lack important metadata, such as an exhaustive list of topics associated with the video. We label each of the videos in the dataset using a collection of 336 tags based on a news taxonomy designed by in-house editors. In the taxonomy, the closer the tag is to the root, the more generic (topically) it is.
etc…

Other Datasets

More than 1 TB

  • The 1000 Genomes project makes 260 TB of human genome data available
  • The Internet Archive is making an 80 TB web crawl available for research 
  • The TREC conference made the ClueWeb09 [3] dataset available a few years back. You’ll have to sign an agreement and pay a nontrivial fee (up to $610) to cover the sneakernet data transfer. The data is about 5 TB compressed.
  • ClueWeb12  is now available, as are the Freebase annotations, FACC1 
  • CNetS at Indiana University makes a 2.5 TB click dataset available 
  • ICWSM made a large corpus of blog posts available for their 2011 conference. You’ll have to register (an actual form, not an online form), but it’s free. It’s about 2.1 TB compressed. The dataset consists of over 386 million blog posts, news articles, classifieds, forum posts and social media content between January 13th and February 14th. It spans events such as the Tunisian revolution and the Egyptian protests (see http://en.wikipedia.org/wiki/January_2011 for a more detailed list of events spanning the dataset’s time period). Access it here
  • The Yahoo News Feed dataset is 1.5 TB compressed, 13.5 TB uncompressed
  • The Proteome Commons makes several large datasets available. The largest, the Personal Genome Project , is 1.1 TB in size. There are several others over 100 GB in size.

More than 1 GB

  • The Reference Energy Disaggregation Data Set  has data on home energy use; it’s about 500 GB compressed.
  • The Tiny Images dataset  has 227 GB of image data and 57 GB of metadata.
  • The ImageNet dataset  is pretty big.
  • The MOBIO dataset  is about 135 GB of video and audio data
  • The Yahoo! Webscope program makes several 1 GB+ datasets available to academic researchers, including an 83 GB data set of Flickr image features and the dataset used for the 2011 KDD Cup, from Yahoo! Music, which is a bit over 1 GB.
  • Freebase makes regular data dumps available. The largest is their Quad dump , which is about 3.6 GB compressed.
  • Wikipedia made a dataset containing information about edits available for a recent Kaggle competition [6]. The training dataset is about 2.0 GB uncompressed.
  • The Research and Innovative Technology Administration (RITA) has made available a dataset about the on-time performance of domestic flights operated by large carriers. The ASA compressed this dataset and makes it available for download.
  • The wiki-links data made available by Google is about 1.75 GB total.
  • Google Research released a large 24GB n-gram data set back in 2006 based on processing 10^12 words of text and published counts of all sequences up to 5 words in length.

Power and Energy Consumption Open Datasets

These data are intended to be used by researchers and other professionals working in power and energy related areas and requiring data for design, development, test, and validation purposes. These data should not be used for commercial purposes.

The Million Playlist Dataset (Spotify)

A dataset and open-ended challenge for music recommendation research (RecSys Challenge 2018). Sampled from the over 4 billion public playlists on Spotify, this dataset of 1 million playlists consists of over 2 million unique tracks by nearly 300,000 artists, and represents the largest public dataset of music playlists in the world. Access it here

Regression Analysis Cheat Sheet

Hotel Reviews Dataset from Yelp

20k+ Hotel Reviews from Yelp for 5 Star Hotels in Las Vegas.

This dataset can be used for the following applications and more:

Analyzing trends, sentiment analysis / opinion mining, competitor analysis. Access it here.

A truncated version with 500 reviews is also available on Kaggle here

Motorcycle Crash data

1- Texas: Perform specific queries and analysis using Texas traffic crash data.

2- BTS: Motorcycle Rider Safety Data

3- National Transportation Safety Board: US Transportation Fatalities in 2019

4- Fatal single vehicle motorcycle crashes

5- Motorcycle crash causes and outcomes : pilot study

6- Motorcycle Crash Causation Study: Final Report

Download a collection of news articles relating to natural disasters over an eight-month period. Access it here.

World Population Data by Country and Age Group

1- WorldoMeter: Countries in the world by population (2021)

2- Worldometer: Current World Population Live

Top 10 richest billionaires from 1987-2021

Top 10 Richest People in the world

Source: Here

How Americans Spend Money on Halloween


Source: here

How the Duration of an Average World Series Baseball Game Has Changed Over 118 Years


Source: Here

Investment-Related Dataset with both Qualitative and Quantitative Variables

1- Numer.ai:  Anonymized and feature normalized financial data which is interesting for machine learning applications. Download here

2- Snowflake Data Marketplace: Snowflake Data Marketplace gives data scientists, business intelligence and analytics professionals, and everyone who desires data-driven decision-making, access to more than 375 live and ready-to-query data sets from more than 125 third-party data providers and data service providers

3- Quandl: The premier source for financial, economic and alternative datasets, serving investment professionals.

National Obesity Monitor

The National Health and Nutrition Examination Survey (NHANES) is conducted every two years by the National Center for Health Statistics and funded by the Centers for Disease Control and Prevention. The survey measures obesity rates among people ages 2 and older. Find the latest national data and trends over time, including by age group, sex, and race. Data are available through 2017-2018, with the exception of obesity rates for children by race, which are available through 2015-2016. Access here

State of Childhood Obesity

The World’s Nations by Fertility Rate 2021


Total number of deaths due to Covid19 vis-à-vis Population in million


Google searches for different emotions during each hour of the day and night


Where do the world’s CO2 emissions come from? This map shows emissions during 2019. Darker areas indicate areas with higher emissions


Global Linguistic Diversity


Where in the world are the densest forests? Darker areas represent higher density of trees.


Likes and Dislikes per movie genre


Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset

NCEI first developed the Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset in the early 1990s. Subsequent iterations include version 2 in 1997, version 3 in May 2011, and version 4 in October 2018.

Are there any places where the climate is recently getting colder?

Electric power consumption (kWh per capita)

The World’s Most Eco-Friendly Countries

Alternate Source from Wikipedia : List of countries by carbon dioxide emissions per capita

Worldwide CO2 Emission

Alcohol-Impaired Driving Deaths by State & County [US]

Alcohol Impaired Driving by State

Alcohol Impaired Driving by County

% change in life expectancy from 2020 to 2021 across the globe


This is how life expectancy is calculated.

How Many Years Till the World’s Reserves Run Out of Oil?


Data Source: Here. Note that these values can change over time with the discovery of new reserves and changes in annual production.

Which energy source has the least disadvantages?

How many People Did Nuclear Energy Kill?

Here’s a paper on the wind fatalities

ipcc.ch/site/assets/

Human development index (HDI) by world subdivisions


The Human Development Index (HDI) is a statistic composite index of life expectancy, education (mean years of schooling completed and expected years of schooling upon entering the education system), and per capita income indicators, which are used to rank countries into four tiers of human development.

Data source: Subnational Human Development Index website
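For illustration, here is a minimal Python sketch of how the three dimensions are combined in the current HDI methodology (a geometric mean of the three normalised dimension indices); the example index values below are made up, not official UNDP figures:

from math import prod

def hdi(health_index: float, education_index: float, income_index: float) -> float:
    # Geometric mean of the three 0-1 dimension indices
    return prod([health_index, education_index, income_index]) ** (1 / 3)

# Illustrative (made-up) dimension indices for one country
print(round(hdi(0.92, 0.85, 0.78), 3))  # ≈ 0.848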

US Streaming Services Market Share, 2020 vs 2021


Number of tweets deleted by month in 2020

Tweet Deleter

Football/Soccer Leagues with the fairest distributions of money have seen the most growth in long-term global interest.


How Much Does Your Favorite Fast Food Brand Spend on Ads?

Sources:

mcdonald-s-advertising-spending-worldwide/

ad-spend-subway-usa/

dominos-pizza-advertising-spending-usa/

ad-spend-wednys-usa/

ad-spend-burger-king-usa/

advertising-expense-chick-fil-a/

starbucks-advertising-spending-in-the-us

Historical population count of Western Europe


Results from survey on how to best reduce your personal carbon footprint


Data from IpsosMori

Where does the world’s non-renewable energy come from? 


The data comes from the Global Power Plant Database. The Global Power Plant Database is a comprehensive, open source database of power plants around the world. It centralizes power plant data to make it easier to navigate, compare and draw insights for one’s own analysis. The database covers approximately 30,000 power plants from 164 countries and includes thermal plants (e.g. coal, gas, oil, nuclear, biomass, waste, geothermal) and renewables (e.g. hydro, wind, solar). Each power plant is geolocated and entries contain information on plant capacity, generation, ownership, and fuel type. It will be continuously updated as data becomes available.
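As a rough illustration of working with it, here is a hedged pandas sketch that totals installed capacity by fuel type; the file name and the column names (primary_fuel, capacity_mw) are assumptions about the CSV layout, so check the database documentation before relying on them:

import pandas as pd

# Assumed file and column names for the Global Power Plant Database CSV
plants = pd.read_csv("global_power_plant_database.csv")
capacity_by_fuel = (
    plants.groupby("primary_fuel")["capacity_mw"]
          .sum()
          .sort_values(ascending=False)
)
print(capacity_by_fuel.head(10))  # largest fuel types by installed capacity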

Recorded Music Industry Revenues from 1997 to 2020


Source: riaa.com/

US Trade Surpluses and Deficits by Country (2020)

Facebook Monthly Active Users

Facebook data is based on end-of-year figures from 2004 to 2020


Source: SeeMetrics.com

Heat map of the past 50,000 earthquakes pulled from USGS sorted by magnitude


Source:  USGS website

Where do the world’s methane (CH4) emissions come from?

Darker areas indicate areas with higher emissions.


Source: Data comes from EDGARv5.0 website and Crippa et al. (2019)

Earth Surface Albedo (1950 to 2020)

Data Source: ECMWF ERA5

Wealth of Forbes’ Top 100 Billionaires vs All Households in Africa

Sources:
Forbes’ 35th Annual World’s Billionaires List
Credit Suisse Global Wealth Report 2020
United Nations World Population Prospects


20 years of Apple sales in a minute

Source: Apple’s quarterly and annual financial filings with the SEC over the last 20 years

Source: Wikipedia

Racial Diversity of Each State (Based on US Census 2019 Estimates)


Computation:

Suppose your state is 60% orc, 30% undead, and 10% tauren. The chance that two people selected at random are of the same race is as follows:

  • 36% chance ((60%)²) of two orcs

  • 9% chance ((30%)²) of two undead

  • 1% chance ((10%)²) of two tauren

For a total of 46%. The diversity index is 100% minus that, or 54%.
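The same calculation (one minus the probability that two randomly selected people share a race) can be written as a short Python sketch:

def diversity_index(shares):
    # shares are population fractions that sum to 1
    same_race_prob = sum(s ** 2 for s in shares)
    return 1 - same_race_prob

# Example from the text: 60% orc, 30% undead, 10% tauren
print(round(diversity_index([0.60, 0.30, 0.10]), 2))  # 0.54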

Race and Ethnicity in the US

A curated, daily feed of newly published datasets in machine learning

Machine Learning: CIFAR-10 Dataset


The CIFAR-10 dataset consists of 60000 32×32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

Machine Learning: ImageNet

The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images.

Machine Learning: The MNIST Database of Handwritten Digits

The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. Access it here.
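For a quick start, here is a minimal Python sketch that loads the same digits through the copy bundled with TensorFlow/Keras (an alternative to downloading the files from the page above; assumes TensorFlow is installed):

from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)

# Scale pixel values to [0, 1] before feeding them to a model
x_train, x_test = x_train / 255.0, x_test / 255.0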

The Massively Multilingual Image Dataset (MMID)

MMID is a large-scale, massively multilingual dataset of images paired with the words they represent, collected at the University of Pennsylvania. The dataset is doubly parallel: for each language, words are stored parallel to images that represent the word, and parallel to the word’s translation into English (and corresponding images). Documentation.

AWS CLI Access (No AWS account required)

aws s3 ls s3://mmid-pds/ --no-sign-request


Capitol insurrection arrests per million people by state


How have cryptocurrencies done during the Pandemic?


Data Source: Performance data on these cryptocurrencies was downloaded from Investing.com, which provides free historical data

Share of US Wealth by Generation


Source: US Federal Reserve

Top 100 Cryptocurrencies by Market Cap


Data Source from coinmarketcap.com/

 Crypto race: DOGE vs BTC, last 365 days


Data sources: Coindesk BTC, Coindesk DOGE

 Yearly Performance of TOP 100 cryptocurrencies

What if you bought $100 worth of X a year ago?

12,000 years of human population dynamics


Countries with a higher Human Development Index (HDI) than the European Union (EU)

HDI is calculated by the UN every year to measure a country’s development using average life expectancy, education level, and gross national income per capita (PPP). The EU has a collective HDI of 0.911.

Data Source: Here

Countries with a higher Human Development Index (HDI) than the United States (US)

Data source: Human Development Report 2020

Child marriage by country, by gender

Data on the percentage of children married before reaching adulthood (18 years).

Data source The State of the World’s Children 2019

 

Wars with greater than 25,000 deaths by year


Data Source : Wikipedia

Population Projection for China and India till 2050

This graphic shows India’s population overtaking China

Data Source: Here

Relative cumulative and per capita CO2 emissions 1751-2017

 


Data Source: ourworldindata.org

Formula 1 Cumulative Wins by Team (1950-2021)


Data Source : f1-fansite.com/f1-results/

Countries with the most nuclear warheads (linear scale)


Data source: Wikipedia

Using machine learning methods to group NFL quarterbacks into archetypes


Data Source:

The author collected a series of rushing and passing statistics for NFL quarterbacks from 2015-2020 and applied a machine learning algorithm called clustering, which automatically sorts observations into groups based on shared characteristics using a mathematical “distance metric.”

The idea was to use machine learning to determine NFL Quarterback Archetype to agnostically determine which quarterbacks were truly “mobile” quarterbacks, and which were “pocket passers” that relied more on passing. I used a number of metrics in my actual clustering analysis, but they can be effectively summarized across two dimensions: passing and rushing, which can be further roughly summarized across two metrics: passer rating and rushing yards per year. Plotting the quarterbacks along these dimensions and plotting the groups chosen by the clustering methodology shows how cleanly the methodology selected the groups.
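Here is a minimal sketch of that kind of clustering using scikit-learn’s KMeans on the two summary dimensions; the sample numbers, column names, and the choice of three clusters are illustrative assumptions, not the author’s actual pipeline:

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input: one row per quarterback with summary statistics
qb = pd.DataFrame({
    "player": ["QB_A", "QB_B", "QB_C", "QB_D", "QB_E", "QB_F"],
    "passer_rating": [108.0, 95.5, 88.0, 102.3, 79.4, 97.8],
    "rush_yards_per_year": [150, 620, 80, 540, 210, 35],
})

# Standardise the features so the distance metric weighs them equally
X = StandardScaler().fit_transform(qb[["passer_rating", "rush_yards_per_year"]])

# Group quarterbacks into archetypes (k=3 is an arbitrary illustrative choice)
qb["archetype"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(qb)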

Read this blog article on the process for more information if you’re interested, or just check out this blog in general if you found this interesting!

Data: Collected from the ESPN API

2M rows of 1-min S&P bars (12 years of stock data) – 2008-2021

Intraday Stock Data (1 min) – S&P 500 – 2008-21: 12 years of 1 minute bars for data science / machine learning.

Granular stock bar data for research is difficult to find and expensive to buy. The author has compiled this library from a variety of sources and is making it available for free.

One compressed CSV file with 9 columns and 2.07 million rows worth of 1 minute SPY bars.  Access it here
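A hedged pandas sketch for loading and sanity-checking the file; the file name and column names (datetime, open, high, low, close, volume) are assumptions about the 9-column layout, so adjust them to the actual header:

import pandas as pd

bars = pd.read_csv(
    "sp500_1min_bars.csv",        # hypothetical file name
    parse_dates=["datetime"],     # assumed timestamp column
    index_col="datetime",
)

# Resample the 1-minute bars to daily OHLCV as a quick sanity check
daily = bars.resample("1D").agg(
    {"open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum"}
).dropna()
print(daily.head())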

A global database of COVID-19 vaccinations

Cumulative number of COVID-19 doses administered by country.
COVID-19 vaccine doses administered per 100 people versus gross domestic product per capita.
Timeline of innovation in the development of vaccines.

Datasets: A live version of the vaccination dataset and documentation are available in a public GitHub repository here. These data can be downloaded in CSV and JSON formats. PDF.
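A minimal pandas sketch for pulling the CSV version straight from the repository; the URL and column names below reflect the Our World in Data layout at the time of writing and may change, so treat them as assumptions and check the repository documentation:

import pandas as pd

URL = ("https://raw.githubusercontent.com/owid/covid-19-data/master/"
       "public/data/vaccinations/vaccinations.csv")  # assumed path, may move

vax = pd.read_csv(URL, parse_dates=["date"])

# Latest cumulative doses administered per country
latest = vax.sort_values("date").groupby("location").tail(1)
print(latest[["location", "date", "total_vaccinations"]].head())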

 A list of available datasets for machine learning in manufacturing

Industrial ML Datasets: a curated list of datasets publicly available for machine learning research in the area of manufacturing.

Predictive Maintenance and Condition Monitoring

Diesel Engine Faults Features (2020) – Signal data, 84 features, target variable C (4), 3,500 instances, no official train/test split, synthetic data, MAT format. Link

Process Monitoring

High Storage System Anomaly Detection (2018) – Signal data, 20 features, target variable C (2), 91,000 instances, no official train/test split, synthetic data, CSV format. Link

Predictive Quality and Quality Inspection

Casting Product Quality Inspection (2020) – Image data (300×300 and 512×512), target variable C (2), 7,348 instances, official train/test split ✔️, real data, JPG format. Link

Process Parameter Optimization

Laser Welding (2020) – Signal data, 13 features, 361 instances, real data, XLS format. Link

Data Analytics Certification Questions and Answers Dumps

Datasets needed for Crop Disease Identification using image processing

Here is a collection of datasets with images of leaves

and more generic image datasets that include plant leaves

http://visualgenome.org/

http://image-net.org/

Plant Phenotyping

One hundred plant species dataset

cvonline 

A Database of Leaf Images: Practice towards Plant Conservation with Plant Pathology

Survival Analysis datasets for machines

Download it here

English alphabet organized by each letter’s note in ABC


Discover datasets hosted in thousands of repositories across the Web using datasetsearch.research.google.com


Create, maintain, and contribute to a long-living dataset that will update itself automatically across projects.

Datasets should behave like git repositories.


Learn how to create, maintain, and contribute to a long-living dataset that will update itself automatically across projects, using git and DVC as versioning systems, and DAGsHub as a host for the datasets. 

Human Rights Measurement Initiative Datasets


World Wide Energy Production by Source 1860 – 2019


Data source: ourworldindata.org/energy

 Project Sunroof – Solar Electricity Generation Potential by Census Tract/Postal Code

Courtesy of Google’s Project Sunroof: this dataset essentially describes the rooftop solar potential for different regions, based on Google’s analysis of Google Maps data to find rooftops where solar would work, and aggregates those into region-wide statistics.

It comes in a couple of aggregation flavors – by census tract , where the region name is the census tract id, and by postal code , where the name is the postal code. Each also contains latitude/longitude bounding boxes and averages, so that you can download based on that, and you should be able to do custom larger aggregations using those, if you’d like.
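As an example of such a custom roll-up, here is a hedged pandas sketch that aggregates census-tract rows up to the state level; the file name and column names are placeholders for the actual schema, so rename them to match the downloaded CSV:

import pandas as pd

# Placeholder file and column names for the census-tract aggregation
tracts = pd.read_csv("project-sunroof-census_tract.csv")

by_state = (
    tracts.groupby("state_name")
          .agg(qualified_roofs=("count_qualified", "sum"),
               potential_kwh=("yearly_sunlight_kwh_total", "sum"))
          .sort_values("potential_kwh", ascending=False)
)
print(by_state.head())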

Carbon emission arithmetic + hard v. soft science


Data sources: video from the data-driven documentary The Fallen of World War II. Here and Here

Most popular Youtuber in every country 2021

What Does 1GB of Mobile Data Cost in Every Country?


Key Concepts of Data Science

A large dataset aimed at teaching AI to code, it consists of some 14M code samples and about 500M lines of code in more than 55 different programming languages, from modern ones like C++, Java, Python, and Go to legacy languages like COBOL, Pascal, and FORTRAN.

GitHub repo:

Download page

NSRDB: National Solar Radiation Database

 Download instructions are here

Cheat Sheet for Machine Learning, Data Science.


Emigrants from the UK by Destination

 


Data source: Originally at the location marked on the Sankey Flow but is now here

 

Direct link to the spreadsheet used

US Rivers and Streams Dataset

Data source: hub.arcgis.com/

Data visualization


Bubble Chart that compares the GDP of the G20 Countries

Data source: databank.worldbank.org

Desktop OS Market Share 2003 – 2021


Data source: w3school

National Parks of North America


Data Source: DataBayou

 NPS.gov, Open.canada.ca, and sig.conanp.gob.mx 

Inflation of Bitcoin and DogeCoin vs. Federal Reserve target


Data source:

Percentage of women who experienced physical or sexual violence since the age of 15 in the EU


Data Source from The Guardian: 

The whole report –  Questionnaire

Canadian Interprovincial Migration


Some context  here

Data  scraped from StatsCan

Covid-19 Vaccination Doses Administered per 100 in the G20

Data source: ourworldindata.org covid-vaccinations

What does per 100 mean?

When the whole country is double vaccinated, the value will be 200 doses per 100 population. At the moment the UK is at about 85, because ~70% of the population has had at least one dose and ~15% of the population (a subset of that 70%) have had two. Hence ~30% of the population currently have no dose at all.
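Written out explicitly (using the approximate UK figures quoted above):

# ~70% with at least one dose; ~15% (a subset of that 70%) with two doses
at_least_one_dose = 0.70
two_doses = 0.15

# One dose each for the 70%, plus a second dose for the 15%
doses_per_100 = (at_least_one_dose + two_doses) * 100
print(round(doses_per_100))             # 85 doses per 100 people
print(round(1 - at_least_one_dose, 2))  # 0.3 of the population with no dose yet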

Import/Export of Conventional Arms by Different Countries over past 2 decades

DataSource: SIPRI Arms Transfer Database

Aggregated disease comparison dataset

Data Source: Here and Here

According to the author of the source data: “For the 1918 Spanish Flu, the data was collected by knowing that the total counts were 500M cases and 50M deaths, and then taking a fraction of that per day based on the area of this graph image:” – the graph used is here:


Trending Google Searches by State Between 2018 and 2020

Data source: trends.google.com. Trending topics from 2010 to 2019 were taken from Google’s annual Year in Search summary.

The full, ~11 minute video covering the whole 2010s decade is available here at youtu.be/xm91jBeN4oo

Google Trends provides weekly relative search interest for every search term, along with the interest by state. Using these two datasets for each term, we’re able to calculate the relative search interest for every state for a particular week. Linear interpolation was used to calculate the daily search interest.
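A minimal pandas sketch of that weekly-to-daily step, using made-up interest values:

import pandas as pd

# Made-up weekly relative search interest for one term in one state
weekly = pd.Series(
    [40, 55, 70, 65],
    index=pd.date_range("2020-01-05", periods=4, freq="W"),
)

# Upsample to daily frequency, then fill the gaps by linear interpolation
daily = weekly.resample("D").asfreq().interpolate(method="linear")
print(daily.head(10))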

 

Market capitalization in billions of dollars of the Top 20 Cryptocurrencies on 2021-05-20

Data source: CoinMarket from end of 2013 until present


 

Top Chess Players From 2000-2020

Data source: ratings.fide.com/

The y-axis is the world Elo rating (called the FIDE rating).

Comparing Emissions Sources – How to Shrink your Carbon Footprint More Effectively


 Data sources: Here

Source article: Here

Oil and gas-fired power plants in the world

Dependence on fossil fuels


Data is from the Global Power Plant Database (World Resources Institute)

See map’s description here


Top 100 Reddit posts of all time


Source: r/all on Reddit

Tool used: meta-chart.com

Fastest routes on land (and sometimes, boat) between all 990 pairs of European capitals



Source: Reddit

From the author: I started with data on roads from naturalearth.com, which also includes some ferry lines. I then calculated the fastest routes (assuming a speed of 90 km/h on roads, and 35 km/h on boat) between each pair of 45 European capitals. The animation visualizes these routes, with brighter lines for roads that are more frequently “traveled”.
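A toy sketch of the same idea with networkx: weight each edge by travel time (length divided by 90 km/h for roads or 35 km/h for ferries) and take the time-weighted shortest path. The tiny graph below is invented purely for illustration and is not the author’s road network:

import networkx as nx

ROAD_KMH, FERRY_KMH = 90, 35

# Invented toy network: (from, to, length in km, mode)
edges = [
    ("Stockholm", "Copenhagen", 660, "road"),
    ("Copenhagen", "Berlin", 440, "road"),
    ("Stockholm", "Helsinki", 400, "ferry"),
    ("Helsinki", "Berlin", 1500, "road"),
]

G = nx.Graph()
for a, b, length_km, mode in edges:
    speed = ROAD_KMH if mode == "road" else FERRY_KMH
    G.add_edge(a, b, hours=length_km / speed)

# Fastest (time-weighted) route between two capitals
route = nx.shortest_path(G, "Stockholm", "Berlin", weight="hours")
hours = nx.shortest_path_length(G, "Stockholm", "Berlin", weight="hours")
print(route, round(hours, 1))  # ['Stockholm', 'Copenhagen', 'Berlin'] 12.2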

In reality these are of course not the most traveled roads, since people don’t go from all capitals to all other capitals in equal measure. But I thought it would be fun to visualize all the possible connections.

The model is also very simple, and does not take into account varying speed limits, road conditions, congestion, border checks and so on. It is just for fun!

In order to keep the file size manageable, the animation only shows every tenth frame.

Is Russia, Turkey or country X really part of Europe? That of course depends on the definition, but it was more fun to include them than to exclude them! The Vatican is however not included since it would just be the same as the Rome routes. And, unfortunately, Nicosia on Cyprus is not included due to an error on my part. It should be!

Link to final still image in high resolution on my twitter

Pokemon Dataset

  1. Dataset of all 825 Pokémon (this includes Alolan Forms). It would be preferable if there were at least 100 images of each individual Pokémon.

pokedex: This is a Python library slash pile of data containing a whole lot of data scraped from Pokémon games. It’s the primary guts of veekun.

pokeapi.co/about

2. This dataset comprises more than 800 Pokémon spanning up to 8 generations.

Using this dataset has been fun for me. I used it to create a mosaic of Pokémon using an image as reference. You can find it here and it’s free to use: Couple Mosaic (powered by Pokemons)

Here is the data type information in the file:

  • Name: Pokemon Name
  • Type: Type of Pokemon, like Grass / Fire / Water, etc.
  • HP: Hit Points
  • Attack: Attack Points
  • Defense: Defence Points
  • Sp. Atk: Special Attack Points
  • Sp. Def: Special Defence Points
  • Speed: Speed Points
  • Total: Total Points
  • url: Pokemon web-page
  • icon: Pokemon Image

Data File: Pokemon-Data.csv
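A quick pandas sketch for exploring the file, assuming the columns appear with the exact names listed above:

import pandas as pd

pokemon = pd.read_csv("Pokemon-Data.csv")

# Ten Pokemon with the highest total points, and average Total by Type
print(pokemon.nlargest(10, "Total")[["Name", "Type", "Total"]])
print(pokemon.groupby("Type")["Total"].mean().sort_values(ascending=False).head())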

30×30 m Worldwide High-Resolution Population and Demographics Data

ETL pipeline for Facebook’s research project to provide detailed large-scale demographics data. It is broken down into roughly 30×30 m grid cells and provides info on groups by age and gender.

Population Density Overview

Data Source and API for access

Article about Dataset at Medium

Gridded global datasets for Gross Domestic Product and Human Development Index over 1990–2015

Rasterized GDP dataset – basically a heat map of global economic activity.

Gap-filled multiannual datasets in gridded form for Gross Domestic Product (GDP) and Human Development Index (HDI)

Data source here:

Decrease in worldwide infant mortality from 1950 to 2020


Data Sources: United Nations, CIA World Factbook, IndexMundi.

Data Collectors

Data Unblockers

Countries of the world sorted by those that have warmed the most in the last 10 years, showing temperatures from 1890 to 2020 


Data source: Gistemp temperature data

The GISS Surface Temperature Analysis ver. 4 (GISTEMP v4) is an estimate of global surface temperature change. Graphs and tables are updated around the middle of every month using current data files from NOAA GHCN v4 (meteorological stations) and ERSST v5 (ocean areas), combined as described in our publications Hansen et al. (2010) and Lenssen et al. (2019).

Climate change concern vs personal spend to reduce climate change


Data Source: Competitive Enterprise Institute (PDF)

 Less than 20 firms produce over a third of all carbon emissions

The Illusion of Choice in Consumer Brands


Buying a chocolate bar? There are seemingly hundreds to choose from, but it’s just the illusion of choice. They pretty much all come from Mars, Nestlé, or Mondelēz (which owns Cadbury).

Source: Visual Capitalist

Yearly Software Sales on PlayStation Consoles since 1994


Some context for these numbers:

  • PS4 holds the record for being the console to have sold the most games in video game history (> 1.622B units)
    • Previous record holder was PS2 at 1.537B games sold
  • PS4 holds the record for having sold the most games in a single year (> 300M units in FY20)
  • FY20 marks the biggest yearly software sales in PlayStation ecosystem with more than 338M units
  • Since the PS5 release, Sony has combined PS4/PS5 software sales
  • In FY12, Sony combined PS2/PS3 and PSP/VITA software sales
  • Sony stopped disclosing software sales in FY13/14

Yearly Hardware Sales of PlayStation Consoles since 1994


Sony combined PS2/PS3 hardware sales in FY12 and combined PSP/VITA sales in FY12/13/14

Cybertruck vs F150 Lightning pre-orders, by time since debut


Source: Ford exec tweeting about preorder numbers this week

Top 100 Most Populous City Proper in the world


The city with 32 million is Chongqing; “Shan” is Shanghai, “Beijin” is Beijing, and “Guangzho” is Guangzhou (the labels are truncated in the chart).

 

Tax data for different countries

Dataset is here

What do Europeans feel most attached to – their region, their country, or Europe?


Data source: Builds on data from the 2021 European Quality of Government Index. You can read more about the survey and download the data here

Cost of 1gb mobile data in every country


Dataset: Visual Capitalist

Frequency of all digrams in 18 languages, diacritics included 


Dataset (according to the author): Dictionaries are scattered on the internet and had to be borrowed from several sources: the Scrabble3d project, and Linux spellcheck dictionaries. The data can be found in the folder “Avec_diacritiques” (“with diacritics”).

Criteria for choosing a dictionary:
– No proper nouns
– “Official” source if available
– Inclusion of inflected forms
– Among two lists, the larger was preferred
– No or very rare abbreviations if possible – but these are hard to detect in unknown languages and across hundreds of thousands of words.

Mapped: The World’s Nuclear Reactor Landscape


Dataset: Visual Capitalist

Database of 999 chemicals based on liver-specific carcinogenicity

The author found this dataset in a more accessible format by searching for the keyword “CPDB” (Carcinogenic Potency Database) in the National Library of Medicine Catalog. Check out this parent website for the data source and dataset description. The dataset referenced in the post concerns liver-specific carcinogens, which are marked by the “liv” keyword as described in the Tissue Codes section of the dataset description.

SMS Spam Collection Data Set

Download | Data Folder | Data Set Description

The SMS Spam Collection is a public set of labeled SMS messages that have been collected for mobile phone spam research.

Open Datasets for Autonomous Driving

A2D2 Dataset, ApolloScape Dataset, Argoverse Dataset, Berkeley DeepDrive Dataset, CityScapes Dataset, Comma2k19 Dataset, Google-Landmarks Dataset, KITTI Vision Benchmark Suite, LeddarTech PixSet Dataset, Level 5 Open Data, nuScenes Dataset, Oxford Radar RobotCar Dataset, PandaSet, Udacity Self Driving Car Dataset, Waymo Open Dataset

Open Dataset people are looking for [Help if you can]

  1. Looking for Dataset on the outcomes of abstinence-only sex education.
  2. Funny Datasets for School Data Science Project [1, 2, 3, 4, 5]
  3. Need a dataset for English practicing chatbot. [1, 2 ]
  4. Creating a dataset for plant disease recognition [1, 2 ]
  5. Central Bank Speeches Dataset (Text data from 1997 to 2020 from 118 institutions) [1, 2]
  6. Cat Meow Classification dataset [1, 2]
  7. Looking for Raw Data: Camping / Outdoors Travel in United States trends, etc [1, 2 ]
  8. Looking for Data set of horse race results / lottery results any results related to gambling [1, 2, 3]
  9. Looking for Football (Soccer) Penalties Dataset [1, 2]
  10. Looking for public datasets on baseball [1, 2, 3]
  11. Looking for Datasets on edge computing for AI bandwidth usage, latency, memory, CPU/GPU resource usage? [1 ,2 ]
  12. Data set of people who died by suicide [1, 2 ]
  13. Supreme Court dataset with opinion text? [1, 2, 3, 4, https://storage.googleapis.com/scotus-db/scotus-db.db5]
  14. Dataset of employee attrition or turnover rate? [1, 2]
  15. Is there a Dataset for homophobic tweets? [1 ,2, 3, 4, ]
  16. Looking for a Machine condition Monitoring Dataset [1,2]
  17. Where to find data for credit risk analysis? [1, 2]
  18. Datasets on homicides anywhere in the world [1, 2]
  19. Looking for a dataset containing coronavirus self-test (if this is a thing globally) pictures for ML use
  20. Is there any transportation dataset with daily frequency? [1, 2]
  21. A Dataset of film Locations [1, 2 ]
  22. Looking for a classification dataset [1, 2, 3, 4, 5]
  23. Where can I search for macroeconomics data? [1, 2, 3, 4, 5, 6, 7]
  24. Looking for Beam alignment 5G vehicular networks dataset
  25. Looking for tidy dataset for multivariate analysis (PCA, FA, canonical correlations, clustering)
  26. Indian all types of Fuel location datasets [1, 2,]
  27. Spotify Playlists Dataset [1, 2]
  28. World News Headline Dataset. With Sentiment Scores. Free download in JSON format. Updated often. [1, 2]
  29. Are there any free open source recipe datasets for commercial use [1, 2, 3, 4, 5]
  30. Curated social network datasets with summary statistics and background info
  31. Looking for textile crop disease datasets such as jute, flax, hemp
  32. Shopify App Store and Chrome Webstore Datasets
  33. Looking for dataset for university chatbot
  34. Collecting real life (dirty/ugly) datasets for data analysis
  35. In Need of Food Additive/Ingredient Definition Database
  36. Recent smart phone sensor Dataset – Android
  37. Cracked Mobile Screen Image Dataset for Detection
  38. Looking for Chiller fault data in a chiller plant
  39. Looking for dataset that contains the genetic sequences of native plasmids?
  40. Looking for a dataset containing fetus size measurements at various gestational ages.
  41. Looking for datasets about mental health since 2021
  42. Do you know where to find a dataset with Graphical User Interfaces defects of web applications? [1, 2, 3 ]
  43. Looking for most popular accounts on social medias like Twitter, Tik Tok, instagram, [1, 2, 3]
  44. GPS dataset of grocery stores
  45. What is the easiest way to bulk download all of the data from this epidemiology website? (~20,000 files)
  46. Looking for Dataset on Percentage of death by US state and Canadian province grouped by cause of death?
  47. Looking for Social engineering attack dataset in social media
  48. Steam Store Games (Clean dataset) by Nik Davis
  49. Dataset that lists all US major hospitals by county
  50. Another Data that list all US major hospitals by county
  51. Looking for open source data relating privacy behavior or related marketing sets about the trustworthiness of responders?
  52. Looking for a dataset that tracks median household income by country and year
  53. Dataset on the number of specific surgical procedures performed in the US (yearly)
  54. Looking for a dataset from reddit or twitter on top posts or tweets related to crypto currency
  55. Looking for Image and flora Dataset of All Known Plants, Trees and Shrubs
  56. US total fertility rates data one the state level
  57. Dataset of Net Worth of *World* Politicians
  58. Looking for water wells and borehole datasets
  59. Looking for Crop growth conditions dataset
  60. Dataset for translate machine JA-EG
  61. Looking for Electronic Health Record (EHR) record prices
  62. Looking for tax data for different countries
  63. Musicians Birthday Datasets and Associated groups
  64. Searching for dataset related to car dealerships [1]
  65. Looking for Credit Score Approval dataset
  66. Cyberbullying Dataset by demographics
  67. Datasets on financial trends for minors
  68. Data where I can find out about reading habits? [1, 2]
  69. Data sets for global technology adoption rates
  70. Looking for any and all cat / feline cancer datasets, for both detection and treatment
  71. ITSM dictionary/taxonomy datasets for topic modeling purposes
  72. Multistage Reliability Dataset
  73. Looking for dataset of ingredients for food[1]
  74. Looking for datasets with responses to psychological questionnaires[1,2,3]
  75. Data source for OEM automotive parts
  76. Looking for dataset about gene regulation
  77. Customer Segmentation Datasets (For LTV Models)
  78. Automobile dataset, years of ownership and repairs
  79. Historic Housing Prices Dataset for Individual Houses
  80. Looking for the data for all the tokens on the Uniswap graph
  81. Job applications emails datasets, either rejection, applications or interviews
  82. E-learning datasets for impact on e learning on school/university students
  83. Food delivery dataset (Uber Eats, Just Eat, …)
  84. Data Sets for NFL Quarterbacks since 1995
  85. Medicare Beneficiary Population Data
  86. Covid 19 infected Cancer Patients datasets
  87. Looking for  EV charging behavior dataset
  88. State park budget or expansionary spending dataset
  89.  Autonomous car driving deaths dataset
  90. FMCG Spending habits over the pandemic
  91. Looking for a Question Type Classification dataset
  92. 20 years of Manufacturer/Retail price of Men’s footwear
  93. Dataset of Global Technology Adoption Rates
  94. Looking For Real Meeting Transcripts Dataset
  95. Dataset For A Large Archive Of Lyrics  [1,2,3]
  96. Audio dataset with swearing words
  97. A global, georeferenced event dataset on electoral violence with lethal outcomes from 1989 to 2017. [1,]
  98. Looking for Jaundice Dataset for ML model
  99. Looking for social engineering attack detection dataset?
  100. Wound image datasets to train ML model [1]
  101. Seeking for resume and job post dataset
  102. Labelled dataset (sets of images or videos) of human emotions [1,2]
  103. Dataset of specialized phone call transcripts
  104. Looking for Emergency Response Plan Dataset for family Homes, condo buildings and Companies
  105. Looking for Birthday wishes datasets
  106. Desperately in need of national data for real estate [1,2,]
  107. NFL playoffs games stadium attendance dataset
  108. Datasets with original publication dates of novels [1,2]
  109. Annotated Documents with Images Data Dump
  110. Looking for  dataset for “Face Presentation Attack Detection”
  111. Electric vehicle range & performance dataset [1, 2]
  112. Dataset or API with valid postal codes for US, Mexico, and Canada with country, state/province, and city/town [1, 2, 3, 4, 5, 6]
  113. Looking for Data sources regarding Online courses dropout rate, preferably by countries [1,2 ]
  114. Are there dataset for language learning [1, 2]
  115. Corporate Real Estate Data [1,2, 3]
  116. Looking for simple clinical trials datasets [1, 2]
  117. CO2 Emissions By Aircraft (or Aircraft Type) – Climate Analysis Dataset [1,2, 3, 4]
  118. Player Session/playtime dataset from games [1,2]
  119. Data sets that support Data Science (Technology, AI etc) being beneficial to sustainability [1,2]
  120. Datasets of a grocery store [1,2]
  121. Looking for mri breast cancer annotation datasets [1,2]
  122. Looking for free exportable data sets of companies by industry [1,2]
  123. Datasets on Coffee Production/Consumption [1,2]
  124. Video gaming industry datasets – release year, genre, games, titles, global data  [1,2]
  125. Looking for mobile speaker recognition dataset [1,2]
  126. Public DMV vehicle registration data [1,2]
  127. Looking for historical news articles based on industry sector [1,2]
  128. Looking for Historical state wide Divorce dataset [1,2]
  129. Public Big Datasets – with In-Database Analytics [1,2]
  130. Dataset for detecting Apple products (object detection) [1,2]
  131. Help needed to get the American Hospital Association (AHA) datasets (AHA Annual Survey, AHA Financial Database, and AHA IT Survey datasets)  [1, 2]
  132. Looking for help Getting College Football Betting Data [1,2]
  133. 2012-2020 US presidential election results by state/city dataset [1,2, 3]
  134. Looking for datasets of models and images captured using iphone’s LIDAR? [1,2]
  135. Finding Datasets to Age Texts (Newspapers, Books, Anything works) [1, 2, 3]
  136. Looking for cost of living index of some type for US [1,2]
  137. Looking for dataset that recorded historical NFT prices and their price increases, as well as timestamps. [1,2]
  138. Looking for datasets on park boundaries across the country [1, 2, 3]
  139. Looking for medical multimodal datasets [1, 2, 3]
  140. Looking for Scraped Parler Data [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  141. Looking for Silicon Wafer Demand dataset [1, 2]
  142. Looking for a dataset with the values [Gender – Weight – Height – Health] [1, 2]
  143. Exam questions (mcqs and short answer) datasets? [1, 2]
  144. Canada Botanical Plants API/Database [1, 2, 3]
  145. Looking for a geospatial dataset of birds Migration path [1, 2, 3]
  146. WhatsApp messages dataset/archives [1, 2]
  147.  Dataset of GOOD probiotic microorganisms in the HUMAN gut [1, 2]
  148. Twitter competition to reduce bias in its image cropping [1,2]
  149. Dataset: US overseas military deployments, 1950–2020 [1,2]
  150. Dataset on human clicking on desktop [1,2]
  151. Covid-19 Cough Audio Classification Dataset [1, 2]
  152. 12,000+ known superconductors database [1, 2, 3]
  153. Looking for good dataset related to cyber security for prediction [1, 2]
  154. Where can I find face datasets to classify whether it is a real person or a picture of that person. For authentication purposes? [1,2]
  155. DataSet of Tokyo 2020 (2021) Olympics ( details about the Athletes, the countries they representing, details about events, coaches, genders participating in each event, etc.) [1, 2]
  156. What is your workflow for budget compute on datasets larger than 100GB? [1, 2, 3]
  157. Looking for a dataset that contains information about cryptocurrencies. [1, 2

  158. Looking for a depression dataset [1,2, 3]

  159. Looking for chocolate consumer demographic data [1,2, 3]
  160. Looking for thorough dataset of housing price/tax history [1, 2, 3]
  161. Wallstreetbets data scraping from 01/01/2020 to 01/06/2021 [1, 2]
  162. Retinal Disease Classification Dataset [1, 2]
  163. 400,000 years of CO2 and global temperature data [1, 2, 3]
  164. Looking for datasets on neurodegenerative diseases [1, 2, 3]
  165. Dataset for Job Interviews (either Phone, Online, or Physical) [1,2 ,3]
  166. Firm Cyber Breach Dataset with Firm Identifiers [1, 2, 3]
  167. Wondering how Stock market and Crypto website get the Data from [1, 2, 3, 4, 5]
  168. Looking for a dataset with US tourist injuries, attacks, and/or fatalities when traveling abroad [1, 2, 3]
  169. Looking for Wildfires Database for all countries by year and month? The quantity of wildfires happening, the acreage, things like that, etc.. [1, 2, 3, ]
  170. Looking for a pill vs fake pill image dataset [1, 2, 3, 4, 5, 6, 7]

Cars for sale in Germany from 2011 to 2021

Dataset obtained by scraping AutoScout24. In the file, you will find features describing 46,405 vehicles: mileage, make, model, fuel, gear, offer type, price, horsepower, registration year.

Dataset scraped from AutoScout24 with information about new and used cars.

 

Percentage of female students in higher education by subject area


The data was obtained from the UK government website here, so unfortunately some details of the data and methodology are unknown.

All the passes: A visualization of ~1 million passes from 890 matches played in major football/soccer leagues/cups

  • Champions League 1999
  • FA Women’s Super League 2018
  • FIFA World Cup 2018, La Liga 2004 – 2020
  • NWSL 2018
  • Premier League 2003 – 2004
  • Women’s World Cup 2019

1million+ football/soccer passes visualization

Data Source: StatsBomb

Global “Urbanity” Dataset (using population mosaics, nighttime lights, & road networks)

In this project, the authors have designed a spatial model which is able to classify urbanity levels globally and with high granularity. As the target geographic support for the model, they selected the quadkey grid at level 15, which has cells of approximately 1×1 km at the equator.

Dataset:  Here 
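For reference, here is a small Python sketch that maps a longitude/latitude point to its level-15 quadkey cell using the mercantile package; the coordinates are arbitrary and the package choice is just one convenient option:

import mercantile

# Arbitrary example point (longitude, latitude), roughly Madrid
lng, lat = -3.7038, 40.4168

tile = mercantile.tile(lng, lat, 15)      # zoom-15 web-mercator tile
quadkey = mercantile.quadkey(tile)        # quadkey string used as the cell id
print(tile, quadkey, len(quadkey))        # the quadkey has 15 characters at zoom 15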

Percentage of students with disabilities in higher education by subject area


The author obtained the data from the UK Government website, so unfortunately the methodology and how the data were collected are unknown.

The comparison to the general public is a great idea – according to the Government site, 6% of children, 16% of working-age adults and 45% of pension-age adults are disabled.

Dataset: here

Arrests for Hate Crimes in NYC by Category, 2017-2020


The Most Successful U.S. Sports Franchises


Data source: sports-reference.com/

Adult cognitive skills (PIAAC literacy and numeracy) by Percentile and by country

According to the author, this animation depicts adult cognitive skills, as measured by the PIAAC study by the OECD. Here, the numeracy and literacy skills have been combined into one. Each frame of the animation shows the xth-percentile skill level of each country, so you can see which countries have the highest and lowest scores among their bottom performers, median performers, and top performers. For example, when the bottom 1st percentile of each country is ranked, Japan is at the top, Russia is second, etc. Looking at the 50th percentile (median) of each country, Japan is top, then Finland, etc.

 

The Programme for the International Assessment of Adult Competencies (PIAAC) is an OECD study that measures literacy, numeracy, and “problem-solving in technology-rich environments” skills for people aged 16 and up. For those familiar with the school-age PISA study, this is essentially an adult version of it.

Dataset: PIAAC 

G7 Corporate Tax rate 1980 – 2020


Dataset: Tax Foundation

Euro 2020 (played in 2021) Group Stage Predictions Based on a Bayesian Linear Item Response Model


Data Source: UEFA qualifying match data

The model was built in Stan and was inspired by Andrew Gelman’s World Cup model shown here. These plots show posterior probabilities that the team on the y axis will score more goals than the team on the x axis. There is some redundancy of information here, because knowing P(England beats Scotland) also gives P(Scotland beats England).
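As a hedged illustration of how such pairwise probabilities can be read off a fitted model, the sketch below simulates goal counts from hypothetical posterior draws of each team’s expected goals; it is not the author’s Stan model, and the gamma parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of expected goals per match for two teams
# (placeholders standing in for draws from a fitted model).
lambda_england = rng.gamma(shape=20, scale=0.08, size=4000)   # mean ~1.6 goals
lambda_scotland = rng.gamma(shape=20, scale=0.05, size=4000)  # mean ~1.0 goals

# For each posterior draw, simulate goal counts and compare.
goals_eng = rng.poisson(lambda_england)
goals_sco = rng.poisson(lambda_scotland)

p_eng_more = np.mean(goals_eng > goals_sco)
p_sco_more = np.mean(goals_sco > goals_eng)
p_draw = 1 - p_eng_more - p_sco_more

print(f"P(England scores more) = {p_eng_more:.2f}")
print(f"P(Scotland scores more) = {p_sco_more:.2f}")
print(f"P(draw) = {p_draw:.2f}")
# The redundancy mentioned above: P(A > B) and P(B > A) sum to 1 - P(draw).
```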

Data Source: Italian National Institute of Statistics (Istituto Nazionale di Statistica)

The 15 most shared musicians on Reddit


Data source: The authors made a dataset of YouTube and Spotify shares on Reddit. More info available here

Spam vs. Legitimate Email, Average Global Emails per Day


Data Source: Here. The author computed the average per day over the June 3 – June 9, 2021 period.


Falling Fertility, 1800–2016

Data source: Here (go to the “Babies per woman,” “Income,” and “Population” links on that page).

Europe Covid-19 waves


Data Source: Here

Who is going to win EURO 2020? Predicted probabilities pooled together from 18 sources


Data source: Here

Population Density of Canada 2020


Dataset: Gathered from worldpop.org/project

The length of each spike corresponds to population density: taller spikes indicate denser areas.

 

The portion of a country’s population that is fully vaccinated for COVID (as of June 2021) scales with GDP per capita.


 

Dataset of Chemical reaction equations

1-  chemequations.com/en/

2- Kaggle chemistry section 

3- Reaction datasets 

4- Chemistry datasets

5- BiomedCentral 

 

Maths datasets

1- Equation Learning

2- Datasets for Stata Structural Equation Modeling

3- Mathematics Dataset

 

SQL Queries Dataset 

SEDE (Stack Exchange Data Explorer) is a dataset of 12,023 complex and diverse SQL queries paired with their natural-language titles and descriptions, written by real users of the Stack Exchange Data Explorer during natural interaction. These pairs contain a variety of real-world challenges that have rarely been reflected in other semantic parsing datasets. Access it here

 

Countries of the world, ranked by population, with the 100 largest cities in the world marked

According to the author:

Each map size is proportional to population, so China takes up about 18-19% of the map space.

Countries with very far-flung territories, such as France (or the USA), will have their maps shrunk to fit all territories. So it is the size of the map rectangle that is proportional to population, not the colored area. Made in R, using data from naturalearthdata.com. Maps drawn with the tmap package, and placed in the image with the gridExtra package. Map colors from the wesanderson package.

Data source: The Economist

What businesses in different countries search for when they look for a marketing agency – “creative” or “SEO”?


Data source: Google Trends

More maps, charts and written analysis on this topic here

Is the economic gap between new and old EU countries closing?


Data source:  Eurostat

An interactive version, where you can click on those circles, is here

Reddit r/wallstreetbets posts and comments in real-time

  • Posts

  • Comments

  • Beneath adds some useful features for shared data, like the ability to run SQL queries, sync changes in real time, a Python integration, and monitoring. The monitoring is really useful as it lets you check out the write activity of the scraper (no surprise, WSB is most active when markets are open).
  • The scraper (which uses Async PRAW) is open source here; a minimal streaming sketch is shown below.
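Below is a minimal streaming sketch with Async PRAW; it is not the open-source scraper itself, and the credentials are placeholders you would replace with your own Reddit app keys.

```python
import asyncio
import asyncpraw

async def stream_wsb_posts():
    # Placeholder credentials; create an app at reddit.com/prefs/apps.
    reddit = asyncpraw.Reddit(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="wsb-stream-example",
    )
    try:
        subreddit = await reddit.subreddit("wallstreetbets")
        # Yields new submissions as they are posted; runs until interrupted.
        async for submission in subreddit.stream.submissions(skip_existing=True):
            print(submission.created_utc, submission.title)
    finally:
        await reddit.close()

asyncio.run(stream_wsb_posts())
```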

Global NO2 pollution data visualization June 2021

Data Source: SILAM

Shopify App Store Report: 2021

Data source: Marketplace Apps

The Chrome Webstore Report: 2021

Data source: Marketplace Apps

Percentage of Adults with HIV/AIDS in Africa


Dataset:  All the countries through the UN AIDS organization 

Recorded CDC deaths (2014 – June 16, 2021) from Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified (R00-R99)


Data Source: combined CDC weekly death counts 2014 – 2019 and CDC weekly death counts 2020-2021

What are the long term gains on cryptocurrencies?


Data Sources: investing.com and coingecko.com

The chart shows the average daily gain in dollars if $100 had been invested on the date shown on the x-axis. The total gain was divided by the number of days between the investment date and June 13, 2021, and gains were calculated on 30-day average prices.

Time range: from March 28, 2013, till June 13, 2021
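A minimal sketch of that metric, assuming a daily price series is available; the prices below are placeholder values, not real market data.

```python
import pandas as pd

# Placeholder daily prices (not real market data); in practice you would load
# the full history from investing.com, CoinGecko, or another source.
prices = pd.Series(
    [100.0, 120.0, 180.0, 300.0],
    index=pd.to_datetime(["2013-03-28", "2016-01-01", "2019-01-01", "2021-06-13"]),
    name="price",
)

# 30-day average prices, as described above.
smoothed = prices.rolling("30D").mean()

END = pd.Timestamp("2021-06-13")

def avg_daily_gain(buy_date, invested=100.0):
    """Average gain in $ per day if `invested` dollars were bought on buy_date
    and held until June 13, 2021."""
    buy_date = pd.Timestamp(buy_date)
    total_gain = invested * (smoothed.loc[END] / smoothed.loc[buy_date] - 1)
    days_held = (END - buy_date).days
    return total_gain / days_held

print(round(avg_daily_gain("2016-01-01"), 4))
```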

Life Expectancy and Death Probability by Age and Gender


Data source: Here

Daily Coronavirus cases in Canada vs % of Population Vaccinated


Data Source: Cases Vaccines

Google Play Store Apps with 2.3 million app records on Kaggle

The Google Play Store dataset is now available on Kaggle with double the data (2.3 million Android application records) and a new attribute recording the date and time of the scrape.

Dataset: Get it here or here

African languages dataset

Africa has 3,000 or more tribes, and within those there are many sub-tribes.

1 Introduction to African Languages (Harvard)

2- Languages of the world at Ethnologue

3- Britannica: Nilo-Saharan Languages

4- Britannica: Khoisan Languages

Daily Temperature of Major Cities Dataset

Daily average temperature values recorded in major cities of the world.

The dataset is available as separate txt files for each city here. The data is available for research and non-commercial purposes only.

 Do stricter gun laws reduce firearms homicides?


Data Sources: Guns to Carry, EFSGV, CDC

According to the author: Looking at non-suicide firearms deaths by state (2019), and then grouping by the Guns to Carry rating (1-5 stars), it seems that stricter gun laws are correlated with fewer firearms homicides. Guns to Carry rates states based on “Gun friendliness” with 1 star being least friendly (California, for example), and 5 stars being most friendly (Wyoming, for example). The ratings aren’t perfect but they include considerations like: Permit required, Registration, Open carry, and Background checks to come up with a rating.

The numbers at the bottom are the average non-suicide deaths calculated within the rating group. Each bar shows the number for the individual state.

Interesting that DC is through the roof despite having strict laws. On the flip side, Maine is very friendly towards gun owners and has a very low homicide rate, despite having the highest ratio of suicides to homicides.

Obviously, lots of things to consider and this is merely a correlation at a basic level. This is a topic that interested me so I figured I’d share my findings. Not attempting to make a policy statement or anything.
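A sketch of the grouping step described above, using assumed column names and placeholder values (not the author’s actual Guns to Carry or CDC figures).

```python
import pandas as pd

# Placeholder rows: one row per state with its Guns to Carry star rating and
# its 2019 non-suicide firearm death rate (values are illustrative only).
states = pd.DataFrame({
    "state": ["A", "B", "C", "D", "E", "F"],
    "gtc_stars": [1, 1, 3, 3, 5, 5],
    "non_suicide_rate": [2.1, 3.0, 4.2, 3.8, 5.0, 4.5],
})

# Average non-suicide firearm death rate within each rating group --
# the group-level numbers shown at the bottom of the original chart.
group_means = states.groupby("gtc_stars")["non_suicide_rate"].mean()
print(group_means)
```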

Relative frequency of words in economics textbooks vs their frequency in mainstream English (the Google Books corpus)


Author

Data Source: Data for word frequency in the Google corpus is from the 2019 Ngram dataset. For details about how to work with this data, see Working With Google Ngrams: A Data-Wrangling Tale.

Data for word frequency in econ textbooks was compiled by the author, who scraped words from 43 undergraduate economics textbooks. For details, see Deconstructing Econospeak.
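The relative-frequency measure itself is easy to reproduce; here is a sketch assuming you already have word counts for the two corpora (the counts below are placeholders, not the actual textbook or Ngram figures).

```python
from collections import Counter

# Placeholder word counts for the two corpora.
econ_counts = Counter({"market": 120, "price": 95, "the": 5000})
english_counts = Counter({"market": 40, "price": 30, "the": 9000})

econ_total = sum(econ_counts.values())
english_total = sum(english_counts.values())

def relative_frequency(word):
    """Ratio of a word's frequency in econ textbooks to its frequency in
    mainstream English; values above 1 mean the word is over-represented
    in econospeak."""
    f_econ = econ_counts[word] / econ_total
    f_english = english_counts[word] / english_total
    return f_econ / f_english

for w in ["market", "price", "the"]:
    print(w, round(relative_frequency(w), 2))
```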

Hours per day spent on mobile devices by US adults


Author: nava_7777

Data Source: eMarketer, as quoted by Jon Erlichman

Purpose according to the author: raw textual numbers (like in the original tweet) are hard to compare, particularly the acceleration or deceleration of a trend. The author made this chart for personal use, but it may be useful to somebody.

Environmental Impact of Coffee Brewing Methods


Author: Coffee_Medley

Data Source: 1 2 3

More according to the author:

  • Measurements and calculations of natural gas and electricity used to heat four cups of distilled water, by Coffee Medley (6/14/2021)

  • Average coffee bag and pod weight by Coffee Medley (6/14/2021)

Murders in major U.S. Cities: 2019 vs. 2020


Author: datacanbeuseful

Data source: NPR

New Harvard Data (Accidentally) Reveal How Lockdowns Crushed the Working Class While Leaving Elites Unscathed

Data source: Harvard

Support for same-sex marriage by religious group


Data source: PEW

More: Summary of religiously (un)affiliated people’s views on homosexuality, broken down across 18 countries

Daily chance of dying for Americans


Author: NortherSugarLoaf

Data source: SSA Actuarial Data

Processing: Yearly probability of death is converted to the daily probability and expressed in micromorts. Plotted versus age in years.

Micromort: a unit of risk equal to a one-in-a-million chance of death.
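The author does not spell out the exact conversion, but one simple way (an assumption) treats the daily hazard as constant within the year:

```python
def yearly_prob_to_daily_micromorts(p_year):
    """Convert an annual probability of death into micromorts per day,
    assuming a constant daily hazard: (1 - p_day)**365 = 1 - p_year.
    One micromort = a one-in-a-million chance of death.
    """
    p_day = 1 - (1 - p_year) ** (1 / 365)
    return p_day * 1_000_000

# Example from the text: ~5.8% yearly probability for 80-year-old males.
print(round(yearly_prob_to_daily_micromorts(0.058)))  # about 164 micromorts/day
```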

According to the author,

A few things to notice: it’s dangerous to be a newborn; the same mortality rates are reached again only in the fifties. However, mortality drops very quickly after birth, and the safest age is about ten years old. After a mortality jump in puberty (especially high for boys), mortality increases roughly exponentially with age: every thirty years of life multiplies the chance of dying by about ten. At 80, the chance of dying within a year is about 5.8% for males and 4.3% for females. This mortality difference holds for all ages; the largest disparity is at about twenty-three years old, when males die at a rate about 2.7 times higher than females.

This data is from before COVID.

Here is the same graph but in linear Y axis scale

Here is the male to female mortality ratio

Mapping Global Carbon Emission Intensity (Dec 2020)


Data Source: Copernicus Atmosphere Monitoring Service (CAMS)

Religions with the most Adherents from 1945 – 2010


Data source: Zeev Maoz and Errol A. Henderson. 2013. “The World Religion Dataset, 1945-2010: Logic, Estimates, and Trends.” International Interactions, 39: 265-291.

IPO Returns 2000-2020


Data from: iposcoop.com
From the author u/nobjos: The full article on the above analysis can be found here
I have a sub, r/market_sentiment, where I do a comprehensive deep-dive on one investment strategy/topic every week! Some of my popular articles are:
a. Performance of Jim Cramer’s stock picks
b. Performance of buy and sell recommendations made by financial analysts in the last decade
c. Benchmarking performance of Motley Fool against the S&P 500
Funko IPO is considered to have the worst first-day return for an IPO in the last two decades.
Out of the top 10 list, only 3 Investment banks had below-average returns.
On average, IPOs did make money for the investor. But the amount is significantly different depending on whether you were allocated the IPO at the offer price or bought it at the market price.
Baidu.com made a whopping 354% on its listing day. Another interesting observation is that 6 out of 10 companies in the list were listed in 2000 (just before the dot-com crash).

Total number of streams per artist vs. number of Top 200 hits (Spotify Top 200 since 2017)


Author: blairfix

Data is from the Spotify Top 200 and covers the period from Jan. 1, 2017 to Jun. 9, 2021. You can download my dataset here.

For every artist that appears in the Top 200, I add up their total streams (for all songs) and the total number of songs in the dataset.
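That aggregation is straightforward in pandas; a sketch assuming the chart data is in long format with one row per chart entry (the file and column names are assumptions about the downloadable dataset).

```python
import pandas as pd

# Hypothetical long-format chart data: one row per (date, song, artist) entry.
charts = pd.read_csv("spotify_top200.csv")

per_artist = (
    charts.groupby("artist")
          .agg(total_streams=("streams", "sum"),   # all streams across all songs
               top200_hits=("song", "nunique"))    # distinct songs that charted
          .sort_values("total_streams", ascending=False)
)
print(per_artist.head())
```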

For a commentary on the data, see The Half Life of a Spotify Hit.

Number of Miss Americas by U.S. State


Data Source: Wikipedia

 

The World’s Nuclear Warheads


Author: academiadvice

Data Source: Federation of American Scientists – status-world-nuclear-forces/

Tools: Excel, Datawrapper, coolors.co/

Check out the FAS site for notes and caveats about their estimates. Governments don’t just print this stuff on their websites. These are evidence-based estimates of tightly-guarded national secrets.

Of particular note – Here’s what the FAS says about North Korea: “After six nuclear tests, including two of 10-20 kilotons and one of more than 150 kilotons, we estimate that North Korea might have produced sufficient fissile material for roughly 40-50 warheads. The number of assembled warheads is unknown, but lower. While we estimate North Korea might have a small number of assembled warheads for medium-range missiles, we have not yet seen evidence that it has developed a functioning warhead that can be delivered at ICBM range.”

The population of Las Vegas over time


Data Source: Wikipedia

 The Alpha to Omega of Wikipedia


Author: feldesque

Data Source: The wikipediatrend package in R

Code published here

Glacial-interglacial cycles over the past 450,000 years

Source:  geology.utah.gov/

Global temperature change from 1850-2020


Worth noting: the glacial-interglacial cycles above are largely driven by changes in the amount of solar radiation reaching Earth due to variations in Earth’s orbit.

Top Companies Contributing to Open Source – 2011/2021

Source and links

The author used several sources for this video and article. The first, for the video, is GitHub Archive & CodersRank. For the analysis of the OSCI index data, the author used opensourceindex.io

Crime Rates in the US: 1960-2021


Data sources: Here and here

The 2021 figures are straight projections and must be taken with a grain of salt. However, the assumption of a continued rise in the murder rate is not a bad one based on recent news reports, such as: here

In a property crime, a victim’s property is stolen or destroyed, without the use or threat of force against the victim. Property crimes include burglary and theft as well as vandalism and arson.

A network visualization of privacy research (83k nodes, 462k edges)


Author: FvDijk

This image was generated for my research mapping the privacy research field. The visual is a combination of a network visualisation and manually added labels.

The data was gathered from Scopus, a high-quality academic publication database, and the visualisation was created with Gephi. The initial dataset held ~120k publications and over 3 million references, from which we selected only the papers and references in the field.
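As a toy illustration of that filtering step (not the authors’ actual scripts, which are linked below), one can build a directed citation graph and keep only the in-field publications:

```python
import networkx as nx

# Build a tiny directed citation graph; node names and the "in_field" flag
# are illustrative assumptions, not the Scopus data itself.
G = nx.DiGraph()
G.add_nodes_from([
    ("paper_A", {"in_field": True}),
    ("paper_B", {"in_field": True}),
    ("paper_C", {"in_field": False}),   # out-of-field reference
])
G.add_edges_from([("paper_A", "paper_B"), ("paper_A", "paper_C")])

# Keep only publications tagged as belonging to the privacy field.
in_field_nodes = [n for n, d in G.nodes(data=True) if d.get("in_field")]
privacy_graph = G.subgraph(in_field_nodes).copy()

print(privacy_graph.number_of_nodes(), privacy_graph.number_of_edges())  # 2 1
```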

The labels were assigned by manually identifying clusters and two independent raters assigning names from a random sample of publications, with a 94% match between raters.

The scripts used are available on Github

The full paper can be found on the author’s website:

 

GDP (at purchasing power parity) per capita in international dollars


Author:  Simaniac

Data source: IMF

Phone Call Anxiety dataset for Millennials and Gen Z


Author: /u/CynicalScyntist

This is a randomized experiment the author conducted with 450 people on Amazon MTurk. Each person was randomly assigned to one of three writing activities in which they either (a) described their phone, (b) described what they’d do if they received a call from someone they knew, or (c) described what they’d do if they received a call from an unknown number. Pictures of an iPhone with a corresponding call screen were displayed above the text box (blank, “Incoming Call,” or “Unknown”). Participants then rated their anxiety on a 1–4 scale.
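One straightforward way to analyze a between-subjects design like this is a one-way ANOVA across the three conditions; this is an illustrative choice, not necessarily the author’s analysis, and the values below are placeholders rather than the real MTurk responses.

```python
import pandas as pd
from scipy import stats

# Placeholder data: anxiety ratings (1-4) for the three writing conditions.
df = pd.DataFrame({
    "condition": ["phone", "known_caller", "unknown_caller"] * 4,
    "anxiety":   [1, 2, 3, 1, 3, 4, 2, 2, 3, 1, 3, 4],
})

# Compare mean anxiety across conditions.
groups = [g["anxiety"].values for _, g in df.groupby("condition")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```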

Dataset: Here

Source Article

Hate Crime Statistics in New York State 2019-2021

