Data Sciences – Top 400 Open Datasets – Data Visualization – Data Analytics – Big Data – Data Lakes
Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains.
A dataset is a collection of data, usually presented in tabular form. Good datasets for Data Science and Machine Learning are typically those that are well-structured (easy to read and understand) and large enough to provide enough data points to train a model. The best datasets are often those that are open and freely available – such as the popular Iris dataset. However, there are also many commercial datasets available for purchase. In general, good datasets for Data Science and Machine Learning should be:
Well-structured
Large enough to provide enough data points
Open and freely available whenever possible
In this blog, we are going to cover popular open-source and public datasets, along with data visualization, data analytics, and data lake resources.
Fertility rates all over the world are steadily declining
Yes, fertility rates have been declining globally in recent decades. There are several factors that contribute to this trend, including increased access to education and employment opportunities for women, improved access to family planning and birth control, and changes in societal attitudes towards having children. However, the rate of decline varies significantly by country and region, with some countries experiencing more dramatic declines than others.
3 largest global payment networks – measured by total payment volume each year ($B)
Stocks Vs Bonds 2022
Most expensive football transfers
11 developing countries with higher life expectancy than the United States
Healthcare expenditure per capita vs life expectancy years
1.2% of adults own 47.8% of world’s wealth
How to Mathematically Win at Rock Paper Scissors
Researchers from IBM, MIT and Harvard Announced The Release Of DARPA “Common Sense AI” Dataset Along With Two Machine Learning Models At ICML 2021
Building machines that can make decisions based on common sense is no easy feat. A machine must be able to do more than merely find patterns in data; it also needs a way of interpreting the intentions and beliefs behind people’s choices.
At the 2021 International Conference on Machine Learning (ICML), researchers from IBM, MIT, and Harvard University came together to release a DARPA “Common Sense AI” dataset for benchmarking AI intuition. They are also releasing two machine learning models that take different approaches to the problem; both rely on testing techniques psychologists use to study infants’ behavior, with the aim of accelerating the development of AI that exhibits common sense.
The University of Chicago Project on Security and Threats presents the updated and expanded Database on Suicide Attacks (DSAT), which now links to Uppsala Conflict Data Program data on armed conflicts and includes a new dataset measuring the alliance and rivalry relationships among militant groups with connections to suicide attack groups. Access it here.
The HRRR is a NOAA real-time 3-km resolution, hourly updated, cloud-resolving, convection-allowing atmospheric model, initialized by 3km grids with 3km radar assimilation. Radar data is assimilated in the HRRR every 15 min over a 1-h period adding further detail to that provided by the hourly data assimilation from the 13km radar-enhanced Rapid Refresh.
When will computers replace humans?
This chart essentially measures “how good is a human at a computer’s area of strength”; meanwhile, computers simply cannot compete in areas of human strength.
The GDC Data Portal is a robust data-driven platform that allows cancer researchers and bioinformaticians to search and download cancer data for analysis.
The Cancer Genome Atlas (TCGA), a collaboration between the National Cancer Institute (NCI) and National Human Genome Research Institute (NHGRI), aims to generate comprehensive, multi-dimensional maps of the key genomic changes in major types and subtypes of cancer.
The Therapeutically Applicable Research to Generate Effective Treatments (TARGET) program applies a comprehensive genomic approach to determine molecular changes that drive childhood cancers. The goal of the program is to use data to guide the development of effective, less toxic therapies. TARGET is organized into a collaborative network of disease-specific project teams. TARGET projects provide comprehensive molecular characterization to determine the genetic changes that drive the initiation and progression of childhood cancers. The dataset contains open Clinical Supplement, Biospecimen Supplement, RNA-Seq Gene Expression Quantification, miRNA-Seq Isoform Expression Quantification, miRNA-Seq miRNA Expression Quantification data from Genomic Data Commons (GDC), and open data from GDC Legacy Archive. Access it here.
The Genome Aggregation Database (gnomAD) is a resource developed by an international coalition of investigators that aggregates and harmonizes both exome and genome data from a wide range of large-scale human sequencing projects. The summary data provided here are released for the benefit of the wider scientific community without restriction on use. Downloads
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. Access it here.
The Pubmed Diabetes dataset consists of 19,717 scientific publications from the PubMed database pertaining to diabetes, classified into one of three classes. The citation network consists of 44,338 links. Each publication in the dataset is described by a TF-IDF weighted word vector from a dictionary of 500 unique words. The README file in the dataset provides more details.
This dataset contains interactions between drugs and targets collected from DrugBank, KEGG Drug, DCDB, and Matador. It was originally collected by Perlman et al. It contains 315 drugs, 250 targets, 1,306 drug-target interactions, 5 types of drug-drug similarities, and 3 types of target-target similarities. Drug-drug similarities include Chemical-based, Ligand-based, Expression-based, Side-effect-based, and Annotation-based similarities. Target-target similarities include Sequence-based, Protein-protein interaction network-based, and Gene Ontology-based similarities. The original task on the dataset is to predict new interactions between drugs and targets based on different types of similarities in the network. Download link
PharmGKB data and knowledge are available as downloads. It is often critical to check with the PharmGKB curators at feedback@pharmgkb.org before embarking on a large project using these data, to be sure that the files and data they make available are being interpreted correctly. PharmGKB generally does NOT need to be a co-author on such analyses; the curators just want to make sure their data are correctly understood before lots of resources are spent.
The dataset contains open RNA-Seq Gene Expression Quantification data and controlled WGS/WXS/RNA-Seq Aligned Reads, WXS Annotated Somatic Mutation, WXS Raw Somatic Mutation, and RNA-Seq Splice Junction Quantification. Documentation
This dataset contains soil infrared spectral data and paired soil property reference measurements for georeferenced soil samples that were collected through the Africa Soil Information Service (AfSIS) project, which lasted from 2009 through 2018. Documentation
DAiSEE is the first multi-label video classification dataset, comprising 9,068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration “in the wild”. The dataset has four levels of labels – very low, low, high, and very high – for each of the affective states, which are crowd-annotated and correlated with a gold-standard annotation created by a team of expert psychologists. Download it here.
NatureServe Explorer provides conservation status, taxonomy, distribution, and life history information for more than 95,000 plants and animals in the United States and Canada, and more than 10,000 vegetation communities and ecological systems in the Western Hemisphere.
The data available through NatureServe Explorer represents data managed in the NatureServe Central Databases. These databases are dynamic, being continually enhanced and refined through the input of hundreds of natural heritage program scientists and other collaborators. NatureServe Explorer is updated from these central databases to reflect information from new field surveys, the latest taxonomic treatments and other scientific publications, and new conservation status assessments. Explore Data here
FlightAware.com has data but you need to pay for a full dataset.
The anyflights package supplies a set of functions to generate air travel data (and data packages!) similar to nycflights13. With a user-defined year and airport, the anyflights function will grab data on:
flights: all flights that departed a given airport in a given year and month
weather: hourly meteorological data for a given airport in a given year and month
airports: airport names, FAA codes, and locations
airlines: translation between two letter carrier (airline) codes and names
planes: construction information about each plane found in flights
The U.S. Department of Transportation’s (DOT) Bureau of Transportation Statistics (BTS) tracks the on-time performance of domestic flights operated by large air carriers. Summary information on the number of on-time, delayed, canceled and diverted flights appears in DOT’s monthly Air Travel Consumer Report, published about 30 days after the month’s end, as well as in summary tables posted on this website. BTS began collecting details on the causes of flight delays in June 2003. Summary statistics and raw data are made available to the public at the time the Air Travel Consumer Report is released. Access it here
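For readers who want to work with the BTS files programmatically, here is a minimal pandas sketch that summarizes arrival delays by carrier; the file name and column names (ARR_DELAY, OP_UNIQUE_CARRIER) are assumptions about a typical export from the BTS download tool and may differ depending on the fields you select.

```python
# Minimal sketch (not an official example): summarize a BTS on-time performance
# export with pandas. File and column names are assumptions about a typical export.
import pandas as pd

df = pd.read_csv("ontime_performance_2023_01.csv")  # hypothetical export file

# Share of flights delayed 15+ minutes on arrival, by carrier (assumed columns).
delayed_share = (
    df.assign(delayed=df["ARR_DELAY"] >= 15)
      .groupby("OP_UNIQUE_CARRIER")["delayed"]
      .mean()
      .sort_values(ascending=False)
)
print(delayed_share.head(10))
```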
Flightera.net seems to have a lot of good data for free. It has in-depth data on flights and doesn’t seem limited by date. I can’t speak to the validity of the data, though.
flightradar24.com has lots of data, including historical data; they might be willing to help you get it in a nice format.
Measurements of the normal (i.e. non-superconducting) state magnetoresistance (change in resistance with magnetic field) in several single crystalline samples of copper-oxide high-temperature superconductors. The measurements were performed predominantly at the High Field Magnet Laboratory (HFML) in Nijmegen, the Netherlands, and the Pulsed Magnetic Field Facility (LNCMI-T) in Toulouse, France. Complete Zip Download
Collection of multimodal raw data captured from a manned all-terrain vehicle in the course of two realistic outdoor search and rescue (SAR) exercises for actual emergency responders conducted in Málaga (Spain) in 2018 and 2019: the UMA-SAR dataset. Full Dataset.
Child mortality numbers caused by malaria by country
Number of deaths of infants, neonatal, and children up to 4 years old caused by malaria by country from 2000 to 2015. Originator: World Health Organization
The dataset will give anyone the opportunity to train and test models of semantic equivalence, based on actual Quora data. 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair. Access it here.
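As a quick illustration of how this file is typically used, here is a pandas sketch that loads the pairs, checks the duplicate rate, and computes a deliberately naive word-overlap score; the TSV file name and column names (question1, question2, is_duplicate) follow the public release but should be treated as assumptions if your copy differs.

```python
# Minimal sketch: load the Quora duplicate-question pairs and compute a naive
# word-overlap baseline. File and column names are assumptions about the release.
import pandas as pd

pairs = pd.read_csv("quora_duplicate_questions.tsv", sep="\t")
print(len(pairs), "question pairs")
print("duplicate rate:", pairs["is_duplicate"].mean())

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard overlap between two questions (illustrative only)."""
    wa, wb = set(str(a).lower().split()), set(str(b).lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

pairs["overlap"] = [jaccard(q1, q2) for q1, q2 in zip(pairs["question1"], pairs["question2"])]
# Duplicates should, on average, share more words than non-duplicates.
print(pairs.groupby("is_duplicate")["overlap"].mean())
```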
MIMIC Critical Care Database
MIMIC is an openly available dataset developed by the MIT Lab for Computational Physiology, comprising deidentified health data associated with ~60,000 intensive care unit admissions. It includes demographics, vital signs, laboratory tests, medications, and more. Access it here.
Data.Gov: The home of the U.S. Government’s open data
Here you will find data, tools, and resources to conduct research, develop web and mobile applications, design data visualizations, and more. Search over 280,000 datasets.
Art that does not attempt to represent an accurate depiction of visual reality but instead uses shapes, colours, forms and gestural marks to achieve its effect.
5,000+ classical abstract artworks by real artists, with annotations. You can download them in very high resolution; however, you would have to crawl them first with this scraper.
Interactive map of indigenous people around the world
Native-Land.ca is a website run by the nonprofit organization Native Land Digital. Access it here.
I took the data from IHME’s Global Burden of Disease 2019 study (2019 all-ages prevalence of drug use disorders among both men and women for all countries and territories) and plotted it using R.
Also, what is going on in the US exactly? 3.3% of the population there is addicted and it’s the worst rate in the world.
File POP/1-1: Total population (both sexes combined) by region, subregion and country, annually for 1950-2100 (thousands). Medium fertility variant, 2020-2100.
Conducted by the Federal Highway Administration (FHWA), the NHTS is the authoritative source on the travel behavior of the American public. It is the only source of national data that allows one to analyze trends in personal and household travel. It includes daily non-commercial travel by all modes, including characteristics of the people traveling, their household, and their vehicles. Access it here.
Statistics and data about the National Travel Survey, based on a household survey to monitor trends in personal travel.
The survey collects information on how, why, when and where people travel as well as factors affecting travel (e.g. car availability and driving license holding).
NeTEx is the official format for public transport data in Norway and is the most complete in terms of available data. GTFS is a downstream format with only a limited subset of the total data, but we generate datasets for it anyway since GTFS can be easier to use and has a wider distribution among international public transport solutions. GTFS sets come in “extended” and “basic” versions. Access here.
A subset of the field data collected on temporary NFI plots can be downloaded in Excel format from this web site. The file includes a Read_me sheet and a sheet with field data from temporary plots on forest land collected from 2007 to 2019. Note that plots located on boundaries (for example, boundaries between forest stands or different land-use classes) are not included in the dataset. The dataset is primarily intended to be used as reference and validation data in remote sensing applications. It cannot be used to derive estimates of totals or mean values for a geographic area of any size. Download the dataset here
Large data sets from finance and economics applicable in related fields studying the human condition
CIA: The World Factbook provides basic intelligence on the history, people, government, economy, energy, geography, environment, communications, transportation, military, terrorism, and transnational issues for 266 world entities.
Consumer Price Index: The Consumer Price Index (CPI) is a measure of the average change over time in the prices paid by urban consumers for a market basket of consumer goods and services. Indexes are available for the U.S. and various geographic areas. Average price data for select utility, automotive fuel, and food items are also available.
International Historical Statistics is a compendium of national and international socio-economic data from 1750 to 2010. Data are available in both Excel and PDF tabular formats. IHS is structured in three broad geographical divisions – Africa / Asia / Oceania; the Americas; and Europe – and ten themes: Population and vital statistics; Labour force; Agriculture; Industry; External trade; Transport and communications; Finance; Commodity prices; Education; and National accounts. Access here
World Input-Output Tables and underlying data, covering 43 countries and a model for the rest of the world, for the period 2000-2014. Data for 56 sectors are classified according to the International Standard Industrial Classification revision 4 (ISIC Rev. 4).
Data: Real and PPP-adjusted GDP in US millions of dollars, national accounts (household consumption, investment, government consumption, exports and imports), exchange rates and population figures.
COW (Correlates of War) seeks to facilitate the collection, dissemination, and use of accurate and reliable quantitative data in international relations. Key principles of the project include a commitment to standard scientific principles of replication, data reliability, documentation, review, and the transparency of data collection procedures.
Data: Total national trade and bilateral trade flows between states. Total imports and exports of each country in current US millions of dollars and bilateral flows in current US millions of dollars
Geographical coverage: Single countries around the world
The WTO provides quantitative information in relation to economic and trade policy issues. Its data-bases and publications provide access to data on trade flows, tariffs, non-tariff measures (NTMs) and trade in value added.
The Subaru-Mitaka-Okayama-Kiso Archive holds about 15 TB of astronomical data from facilities run by the National Astronomical Observatory of Japan. All data become publicly available after an embargo period of 12-24 months (to give the original observers time to publish their papers).
Graph Datasets
Web crawl graph with 3.5 billion web pages and 128 billion hyperlinks
Many web and social graphs with up to 95 billion edges. While this data collection seems to be very comprehensive, it is not trivially accessible without external tools.
The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com across many product types (domains). Some domains (books and DVDs) have hundreds of thousands of reviews; others (musical instruments) have only a few hundred. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed. Access it here.
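A common convention for this kind of review data – not something mandated by the dataset itself – is to treat ratings above 3 stars as positive, below 3 as negative, and to discard 3-star reviews as ambiguous. A minimal sketch of that conversion:

```python
# Minimal sketch of the usual star-to-binary conversion for review sentiment.
def to_binary_label(stars: float):
    if stars > 3:
        return 1      # positive
    if stars < 3:
        return 0      # negative
    return None       # ambiguous 3-star review, typically discarded

reviews = [(5.0, "great book"), (1.0, "broken on arrival"), (3.0, "it's ok")]
labeled = [(text, to_binary_label(s)) for s, text in reviews
           if to_binary_label(s) is not None]
print(labeled)  # [('great book', 1), ('broken on arrival', 0)]
```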
Supported by Google Jigsaw, the GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, themes, sources, emotions, counts, quotes, images and events driving our global society every second of every day, creating a free open platform for computing on the entire world.
This dataset represents a snapshot of the Yahoo! Music community’s preferences for various musical artists. The dataset contains over ten million ratings of musical artists given by Yahoo! Music users over the course of a one month period sometime prior to March 2004. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms. The dataset may serve as a testbed for matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 423 MB.
This dataset contains a small sample of the Yahoo! Movies community’s preferences for various movies, rated on a scale from A+ to F. Users are represented as meaningless anonymous numbers so that no identifying information is revealed. The dataset also contains a large amount of descriptive information about many movies released prior to November 2003, including cast, crew, synopsis, genre, average ratings, awards, etc. The dataset may be used by researchers to validate recommender systems or collaborative filtering algorithms, including hybrid content and collaborative filtering algorithms. The dataset may serve as a testbed for relational learning and data mining algorithms as well as matrix and graph algorithms including PCA and clustering algorithms. The size of this dataset is 23 MB.
The dataset is a collection of 964 hours (22K videos) of news broadcast videos that appeared on Yahoo news website’s properties, e.g., World News, US News, Sports, Finance, and a mobile application during August 2017. The videos were either part of an article or displayed standalone in a news property. Many of the videos served in this platform lack important metadata, such as an exhaustive list of topics associated with the video. We label each of the videos in the dataset using a collection of 336 tags based on a news taxonomy designed by in-house editors. In the taxonomy, the closer the tag is to the root, the more generic (topically) it is.
The Internet Archive is making an 80 TB web crawl available for research
The TREC conference made the ClueWeb09 [3] dataset available a few years back. You’ll have to sign an agreement and pay a nontrivial fee (up to $610) to cover the sneakernet data transfer. The data is about 5 TB compressed.
ClueWeb12 is now available, as are the Freebase annotations, FACC1
CNetS at Indiana University makes a 2.5 TB click dataset available
ICWSM made a large corpus of blog posts available for their 2011 conference. You’ll have to register (an actual form, not an online form), but it’s free. It’s about 2.1 TB compressed. The dataset consists of over 386 million blog posts, news articles, classifieds, forum posts and social media content between January 13th and February 14th. It spans events such as the Tunisian revolution and the Egyptian protests (see http://en.wikipedia.org/wiki/January_2011 for a more detailed list of events spanning the dataset’s time period). Access it here
The Yahoo News Feed dataset is 1.5 TB compressed, 13.5 TB uncompressed
The Proteome Commons makes several large datasets available. The largest, the Personal Genome Project , is 1.1 TB in size. There are several others over 100 GB in size.
The MOBIO dataset is about 135 GB of video and audio data
The Yahoo! Webscope program makes several 1 GB+ datasets available to academic researchers, including an 83 GB dataset of Flickr image features and the dataset used for the 2011 KDD Cup, from Yahoo! Music, which is a bit over 1 GB.
Freebase makes regular data dumps available. The largest is their Quad dump, which is about 3.6 GB compressed.
The Research and Innovative Technology Administration (RITA) has made available a dataset about the on-time performance of domestic flights operated by large carriers. The ASA compressed this dataset and makes it available for download.
The wiki-links data made available by Google is about 1.75 GB total.
Google Research released a large 24GB n-gram data set back in 2006 based on processing 10^12 words of text and published counts of all sequences up to 5 words in length.
These data are intended to be used by researchers and other professionals working in power and energy related areas and requiring data for design, development, test, and validation purposes. These data should not be used for commercial purposes.
A dataset and open-ended challenge for music recommendation research (RecSys Challenge 2018). Sampled from the over 4 billion public playlists on Spotify, this dataset of 1 million playlists consists of over 2 million unique tracks by nearly 300,000 artists and represents the largest public dataset of music playlists in the world. Access it here
How much each of 20 most popular artists earns from Spotify.
Needless to say, the United States absolutely dominates this list more than any other country. 9 of the top 10 are Americans; you’d have to combine the next 5 countries after the US to match their output of 33 among the top 80, and you’d have to combine every other country not named China on this graph to equal the USA.
To break things down based on region:
– The Americas has 34 individuals on this list with USA (33) and Mexico (1)
– Asia-Pacific has 28 individuals on this list with China (14), India (5), Hong Kong (4), Japan (3), and Australia (2)
– Europe has 18 individuals on this list with France (5), Russia (5), Germany (3), Italy (2), UK (1), Ireland (1), and Spain (1)
The National Health and Nutrition Examination Survey (NHANES) is conducted every two years by the National Center for Health Statistics and funded by the Centers for Disease Control and Prevention. The survey measures obesity rates among people ages 2 and older. Find the latest national data and trends over time, including by age group, sex, and race. Data are available through 2017-2018, with the exception of obesity rates for children by race, which are available through 2015-2016. Access here
NCEI first developed the Global Historical Climatology Network-Monthly (GHCN-M) temperature dataset in the early 1990s. Subsequent iterations include version 2 in 1997, version 3 in May 2011, and version 4 in October 2018.
Are there any places where the climate is recently getting colder?
Human development index (HDI) by world subdivisions
The Human Development Index (HDI) is a statistic composite index of life expectancy, education (mean years of schooling completed and expected years of schooling upon entering the education system), and per capita income indicators, which are used to rank countries into four tiers of human development.
Numbers like these are a quick reminder that not every athlete is LeBron James or Roger Federer who can play their sport at such high levels for their entire young adulthood while becoming billionaires in the process. Many careers are short lived and end abruptly while the athlete is still very young and some don’t really have a plan B.
NFL being at the bottom here doesn’t surprise me though, as most positions (with the exception of QB and kicker) in US football are lowkey bodily suicide.
The data comes from the Global Power Plant Database. The Global Power Plant Database is a comprehensive, open source database of power plants around the world. It centralizes power plant data to make it easier to navigate, compare and draw insights for one’s own analysis. The database covers approximately 30,000 power plants from 164 countries and includes thermal plants (e.g. coal, gas, oil, nuclear, biomass, waste, geothermal) and renewables (e.g. hydro, wind, solar). Each power plant is geolocated and entries contain information on plant capacity, generation, ownership, and fuel type. It will be continuously updated as data becomes available.
The ImageNet dataset contains 14,197,122 annotated images organized according to the WordNet hierarchy. Since 2010 the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images.
The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting. Access it here.
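A minimal getting-started sketch, assuming scikit-learn is installed: fetch MNIST via OpenML and fit a simple classifier on a small subset so the run stays quick.

```python
# Minimal sketch: fetch MNIST through OpenML and train a simple baseline model.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]

# Use a 10,000-image subset to keep training fast for this demonstration.
X_train, X_test, y_train, y_test = train_test_split(
    X[:10000], y[:10000], test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```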
MMID is a large-scale, massively multilingual dataset of images paired with the words they represent, collected at the University of Pennsylvania. The dataset is doubly parallel: for each language, words are stored parallel to images that represent the word, and parallel to the word’s translation into English (and corresponding images). Documentation.
HDI is calculated by the UN every year to measure a country’s development using average life expectancy, education level, and gross national income per capita (PPP). The EU has a collective HDI of 0.911.
Using machine learning methods to group NFL quarterbacks into archetypes
Data Source:
Rushing and passing statistics for NFL quarterbacks from 2015-2020, to which the author applied a machine learning algorithm called clustering, which automatically sorts observations into groups based on shared characteristics using a mathematical “distance metric.”
The idea was to use machine learning to identify NFL quarterback archetypes and agnostically determine which quarterbacks were truly “mobile” quarterbacks and which were “pocket passers” that relied more on passing. I used a number of metrics in my actual clustering analysis, but they can be effectively summarized across two dimensions – passing and rushing – which can in turn be roughly summarized by two metrics: passer rating and rushing yards per year. Plotting the quarterbacks along these dimensions, together with the groups chosen by the clustering methodology, shows how cleanly the methodology selected the groups.
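For readers who want to try something similar, here is an illustrative sketch (not the author’s actual code or data) of k-means clustering on those two summary features; the numbers are made-up placeholders, and the features are standardized first because passer rating and rushing yards live on very different scales.

```python
# Illustrative sketch only: k-means on two quarterback summary features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: passer rating, rushing yards per year (hypothetical values).
X = np.array([
    [105.0, 650.0],   # efficient passer who also runs
    [ 98.0, 550.0],
    [102.0,  90.0],   # classic pocket passer
    [ 95.0, 120.0],
    [ 82.0, 480.0],   # mobile but less efficient passer
    [ 85.0,  60.0],
])

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # cluster assignment for each quarterback
```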
Read this blog article on the process for more information if you’re interested, or just check out this blog in general if you found this interesting!
Intraday Stock Data (1 min) – S&P 500 – 2008-21: 12 years of 1 minute bars for data science / machine learning.
Granular stock bar data for research is difficult to find and expensive to buy. The author has compiled this library from a variety of sources and is making it available for free.
One compressed CSV file with 9 columns and 2.07 million rows worth of 1 minute SPY bars. Access it here
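A minimal pandas sketch for working with a file like this – read the compressed CSV and resample the 1-minute bars to daily OHLC. The file name and column names (timestamp, open, high, low, close, volume) are assumptions about the layout of the 9-column file.

```python
# Minimal sketch: load 1-minute bars and resample to daily OHLCV with pandas.
import pandas as pd

bars = pd.read_csv(
    "spy_1min_bars.csv.gz",          # hypothetical file name
    parse_dates=["timestamp"],       # assumed column names
    index_col="timestamp",
)

daily = bars.resample("1D").agg(
    {"open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum"}
).dropna()
print(daily.tail())
```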
Datasets: A live version of the vaccination dataset and documentation are available in a public GitHub repository here. These data can be downloaded in CSV and JSON formats. PDF.
Learn how to create, maintain, and contribute to a long-living dataset that will update itself automatically across projects, using git and DVC as versioning systems, and DAGsHub as a host for the datasets.
Courtesy of Google’s Project Sunroof: This dataset essentially describes the rooftop solar potential for different regions, based on Google’s analysis of Google Maps data to find rooftops where solar would work, and aggregate those into region-wide statistics.
It comes in a couple of aggregation flavors – by census tract, where the region name is the census tract id, and by postal code, where the name is the postal code. Each also contains latitude/longitude bounding boxes and averages, so that you can download based on that, and you should be able to do custom larger aggregations using those, if you’d like.
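If you do want a custom roll-up, a pandas sketch along those lines might look like the following; the file and column names are assumptions about the CSV layout, and grouping by the first two digits of the tract id (the state FIPS code) is just one example of a coarser aggregation.

```python
# Speculative sketch: roll a per-census-tract file up to state level.
import pandas as pd

tracts = pd.read_csv(
    "project_sunroof_census_tract.csv",      # hypothetical file name
    dtype={"region_name": str},              # keep leading zeros in tract ids
)
tracts["state_fips"] = tracts["region_name"].str[:2]  # first 2 digits = state FIPS

# Assumed numeric columns; substitute whatever fields your download contains.
by_state = tracts.groupby("state_fips")[["count_qualified", "existing_installs_count"]].sum()
print(by_state.head())
```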
A large dataset aimed at teaching AI to code, it consists of some 14M code samples and about 500M lines of code in more than 55 different programming languages, from modern ones like C++, Java, Python, and Go to legacy languages like COBOL, Pascal, and FORTRAN.
When the whole country is double vaccinated, the value will be 200 doses per 100 population. At the moment the UK is like 85, which is because ~70% of the population has had at least one dose and ~15% of the population (which is a subset of that 70%) have had two. Hence ~30% are currently unprotected – myself included until Sunday.
According to the author of the source data: “For the 1918 Spanish Flu, the data was collected by knowing that the total counts were 500M cases and 50M deaths, and then taking a fraction of that per day based on the area of this graph image:” – the graph used is here:
Visualization and dataset for aggregated disease comparison
Data source: trends.google.com. Trending topics from 2010 to 2019 were taken from Google’s annual Year in Search summaries, 2010-2019.
The full, ~11 minute video covering the whole 2010s decade is available here at youtu.be/xm91jBeN4oo
Google Trends provides weekly relative search interest for every search term, along with the interest by state. Using these two datasets for each term, we’re able to calculate the relative search interest for every state for a particular week. Linear interpolation was used to calculate the daily search interest.
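A minimal sketch of that weekly-to-daily step with pandas, using a made-up series rather than Google’s data:

```python
# Minimal sketch: upsample weekly search interest to daily via linear interpolation.
import pandas as pd

weekly = pd.Series(
    [40, 55, 80, 65],                                    # made-up interest values
    index=pd.date_range("2019-01-06", periods=4, freq="7D"),
)

daily = weekly.resample("D").interpolate(method="linear")
print(daily.head(10))
```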
From the author: I started with data on roads from naturalearth.com, which also includes some ferry lines. I then calculated the fastest routes (assuming a speed of 90 km/h on roads, and 35 km/h on boat) between each pair of 45 European capitals. The animation visualizes these routes, with brighter lines for roads that are more frequently “traveled”.
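A rough sketch of that routing idea under the stated assumptions (90 km/h on roads, 35 km/h by ferry): build a graph whose edge weights are travel times in hours and run a shortest-path query with networkx. The segments below are placeholders, not the Natural Earth road network.

```python
# Rough sketch: shortest travel-time routes with networkx (toy edges only).
import networkx as nx

ROAD_KMH, FERRY_KMH = 90.0, 35.0

G = nx.Graph()
segments = [                       # (from, to, length_km, kind) -- hypothetical
    ("Copenhagen", "Malmö", 45, "road"),
    ("Copenhagen", "Oslo", 600, "ferry"),
    ("Malmö", "Stockholm", 615, "road"),
    ("Stockholm", "Oslo", 525, "road"),
]
for a, b, km, kind in segments:
    speed = ROAD_KMH if kind == "road" else FERRY_KMH
    G.add_edge(a, b, hours=km / speed)

route = nx.shortest_path(G, "Copenhagen", "Oslo", weight="hours")
hours = nx.shortest_path_length(G, "Copenhagen", "Oslo", weight="hours")
print(route, f"{hours:.1f} h")
```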
In reality these are of course not the most traveled roads, since people don’t go from all capitals to all other capitals in equal measure. But I thought it would be fun to visualize all the possible connections.
The model is also very simple, and does not take into account varying speed limits, road conditions, congestion, border checks and so on. It is just for fun!
In order to keep the file size manageable, the animation only shows every tenth frame.
Is Russia, Turkey or country X really part of Europe? That of course depends on the definition, but it was more fun to include them than to exclude them! The Vatican is however not included since it would just be the same as the Rome routes. And, unfortunately, Nicosia on Cyprus is not included due to an error on my behalf. It should be!
This dataset comprises more than 800 Pokemon spanning up to 8 generations.
Using this dataset has been fun for me. I used it to create a mosaic of Pokemon from a reference image. You can find it here and it’s free to use: Couple Mosaic (powered by Pokemons)
Here is the data type information in the file:
Name: Pokemon Name
Type: Type of Pokemon, like Grass / Fire / Water, etc.
ETL pipeline for Facebook’s research project to provide detailed large-scale demographics data. It’s broken down in roughly 30×30 m grid cells and provides info on groups by age and gender.
The GISS Surface Temperature Analysis ver. 4 (GISTEMP v4) is an estimate of global surface temperature change. Graphs and tables are updated around the middle of every month using current data files from NOAA GHCN v4 (meteorological stations) and ERSST v5 (ocean areas), combined as described in our publications Hansen et al. (2010) and Lenssen et al. (2019).
Buying a chocolate bar? There are seemingly hundreds to choose from, but it’s just the illusion of choice. They pretty much all come from Mars, Nestlé, or Mondelēz (which owns Cadbury).
Criteria for choosing a dictionary: no proper nouns; an “official” source if available; inclusion of inflected forms; between two lists, the larger was preferred; no or very rare abbreviations if possible – though these are hard to detect in unknown languages and across hundreds of thousands of words.
The author found this dataset in a more accessible format upon searching for the keyword “CDPB” (Carcinogenic Potency Database) in the National Library of Medicine Catalog. Check out this parent website for the data source and dataset description. The dataset referenced in OP’s post concerns liver specific carcinogens, which are marked by the “liv” keyword as described in the dataset description’s Tissue Codes section.
Dataset of the Tokyo 2020 (2021) Olympics (details about the athletes, the countries they represent, details about events, coaches, genders participating in each event, etc.) [1, 2]
Looking for a wildfires database for all countries by year and month? The quantity of wildfires happening, the acreage, things like that, etc. [1, 2, 3]
Looking for a pill vs fake pill image dataset [1, 2, 3, 4, 5, 6, 7]
In this project, the authors have designed a spatial model which is able to classify urbanity levels globally and with high granularity. As the target geographic support for our model we selected the quadkey grid in level 15, which has cells of approximately 1x1km at the equator.
The author obtained the data from the UK Government website, so unfortunately the methodology and how the data were collected are not known.
The comparison to the general public is a great idea – according to the Government site, 6% of children, 16% of working-age adults and 45% of Pension-age adults are disabled.
According to the author, this animation depicts adult cognitive skills, as measured by the PIAAC study by the OECD. Here, the numeracy and literacy skills have been combined into one. Each frame of the animation shows the xth percentile skill level of each individual country. Thus, you can see which countries have the highest and lowest scores among their bottom performers, median performers, and top performers. So for example, you can see that when the bottom 1st percentile of each country is ranked, Japan is at the top, Russia is second, etc. Looking at the 50th percentile (median) of each country, Japan is top, then Finland, etc.
The Programme for the International Assessment of Adult Competencies (PIAAC) is a study by the OECD to measure literacy, numeracy, and “problem-solving in technology-rich environments” skills for people ages 16 and up. For those of you who are familiar with the school-age PISA study, this is essentially an adult version of it.
The model was built in Stan and was inspired by Andrew Gelman’s World Cup model shown here. These plots show posterior probabilities that the team on the y axis will score more goals than the team on the x axis. There is some redundancy of information here (because if I know P(England beats Scotland), then I know P(Scotland beats England)).
SEDE (Stack Exchange Data Explorer) is a dataset of 12,023 complex and diverse SQL queries and their natural-language titles and descriptions, written by real users of the Stack Exchange Data Explorer in the course of natural interaction. These pairs contain a variety of real-world challenges that have rarely been reflected in other semantic parsing datasets so far. Access it here
Each map size is proportional to population, so China takes up about 18-19% of the map space.
Countries with very far-flung territories, such as France (or the USA) will have their maps shrunk to fit all territories. So it is the size of the map rectangle that is proportional to population, not the colored area. Made in R, using data from naturalearthdata.com. Maps drawn with the tmap package, and placed in the image with the gridExtra package. Map colors from the wesanderson package.
Beneath adds some useful features for shared data, like the ability to run SQL queries, sync changes in real-time, a Python integration, and monitoring. The monitoring is really useful as it lets you check out the write activity of the scraper (no surprise, WSB is most active when markets are open).
The scraper (which uses Async PRAW) is open source here
The chart shows the average daily gain in $ if $100 were invested at a date on x-axis. Total gain was divided by the number of days between the day of investing and June 13, 2021. Gains were calculated on average 30-day prices.
Time range: from March 28, 2013, till June 13, 2021
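A minimal sketch of that calculation – total gain on $100 divided by days held – with placeholder prices rather than the 30-day average prices used for the chart:

```python
# Minimal sketch: average daily gain in dollars for $100 invested on a given date.
from datetime import date

def avg_daily_gain(buy_price: float, sell_price: float, buy_date: date,
                   sell_date: date = date(2021, 6, 13),
                   invested: float = 100.0) -> float:
    total_gain = invested * (sell_price / buy_price - 1.0)
    days_held = (sell_date - buy_date).days
    return total_gain / days_held

# Placeholder prices, not real market data.
print(round(avg_daily_gain(50.0, 300.0, date(2013, 3, 28)), 3), "$ per day")
```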
The Google Play Store dataset is now available on Kaggle with double the data (2.3 million Android applications) and a new attribute recording the date and time each entry was scraped.
According to the author: Looking at non-suicide firearms deaths by state (2019), and then grouping by the Guns to Carry rating (1-5 stars), it seems that stricter gun laws are correlated with fewer firearms homicides. Guns to Carry rates states based on “Gun friendliness” with 1 star being least friendly (California, for example), and 5 stars being most friendly (Wyoming, for example). The ratings aren’t perfect but they include considerations like: Permit required, Registration, Open carry, and Background checks to come up with a rating.
The numbers at the bottom are the average non-suicide deaths calculated within the rating group. Each bar shows the number for the individual state.
Interesting that DC is through the roof despite having strict laws. On the flip side, Maine is very friendly towards gun owners and has a very low homicide rate, despite having the highest ratio of suicides to homicides.
Obviously, lots of things to consider and this is merely a correlation at a basic level. This is a topic that interested me so I figured I’d share my findings. Not attempting to make a policy statement or anything.
Data for word frequency in econ textbooks was compiled by myself by scraping words from 43 undergraduate economics textbooks. For details see Deconstructing Econospeak.
Data Source: from eMarketer, as quoted by Jon Erlichman
Purpose according to the author: raw textual numbers (like in the original tweet) are hard to compare, particularly the acceleration or deceleration of a trend. I made this for myself, but maybe it is useful to somebody.
A few things to notice: It’s dangerous to be a newborn – the same mortality rates are reached again only in the fifties. However, mortality drops very quickly after birth, and the safest age is about ten years old. After a mortality jump in puberty – especially high for boys – mortality increases mostly exponentially with age. Every thirty years of life increases the chance of dying about tenfold. At 80, the chance of dying within a year is about 5.8% for males and 4.3% for females. This mortality difference holds for all ages. The largest disparity is at about twenty-three years old, when males die at a rate about 2.7 times higher than females.
Data from: iposcoop.com. From the author u/nobjos: The full article on the above analysis can be found here. I have a sub, r/market_sentiment, where I do a comprehensive deep-dive on one investment strategy/topic every week! Some of the author’s popular articles are: a. Performance of Jim Cramer’s stock picks; b. Performance of buy and sell recommendations made by financial analysts in the last decade; c. Benchmarking performance of Motley Fool against the S&P 500. Funko’s IPO is considered to have the worst first-day return for an IPO in the last two decades. Out of the top-10 list, only 3 investment banks had below-average returns. On average, IPOs did make money for the investor, but the amount is significantly different if you were allocated the IPO at the offer price versus getting it at the market price. Baidu.com made a whopping 354% on its listing day. Another interesting observation is that 6 out of 10 companies on the list were listed in 2000 (just before the dot-com crash).
Check out the FAS site for notes and caveats about their estimates. Governments don’t just print this stuff on their websites. These are evidence-based estimates of tightly-guarded national secrets.
Of particular note – Here’s what the FAS says about North Korea: “After six nuclear tests, including two of 10-20 kilotons and one of more than 150 kilotons, we estimate that North Korea might have produced sufficient fissile material for roughly 40-50 warheads. The number of assembled warheads is unknown, but lower. While we estimate North Korea might have a small number of assembled warheads for medium-range missiles, we have not yet seen evidence that it has developed a functioning warhead that can be delivered at ICBM range.”
The author used several sources for this video and article. The first, for the video, is GitHub Archive & CodersRank. For the analysis of the OSCI index data, the author used opensourceindex.io
2021 is a straight projection and must be taken with a grain of salt. However, the assumption of a continued rise in the murder rate is not a bad one based on recent news reports, such as: here
This image was generated for my research mapping the privacy research field. The visual is a combination of a network visualisation and manually added labels.
The data was gathered from Scopus, a high-quality academic publication database, and the visualisation was created with Gephi. The initial dataset held ~120k publications and over 3 million references, from which we selected only the papers and references in the field.
The labels were assigned by manually identifying clusters and two independent raters assigning names from a random sample of publications, with a 94% match between raters.
This is a randomized experiment the author conducted with 450 people on Amazon MTurk. Each person was randomly assigned to one of three writing activities in which they either (a) described their phone, (b) described what they’d do if they received a call from someone they know, or (c) described what they’d do if they received a call from an unknown number. Pictures of an iPhone with a corresponding call screen were displayed above the text box (blank, “Incoming Call,” or “Unknown”). Participants then rated their anxiety on a 1-4 scale.
A meta-database of links to known face image databases. If you need faces images to train/test your machine learning algorithms, or stimuli for research on faces, you will probably find this useful.
A pre-print describing this, alongside a related resource (the Chatlab Facial Anomaly Database), is available here: psyarxiv.com/54utr/
If you like it, you can give it an upvote on our Kaggle page. The authors are trying to make custom datasets and open-source them on Kaggle to make AI models more robust.
The RSDD (Reddit Self-reported Depression Diagnosis) dataset consists of Reddit posts for approximately 9,000 users who have claimed to have been diagnosed with depression (“diagnosed users”) and approximately 107,000 matched control users. All posts made to mental health-related subreddits or containing keywords related to depression were removed from the diagnosed users’ data; control users’ data do not contain such posts due to the selection process. Access it here
We consume China’s products in the G7, so we are partly responsible. China is the workshop of the world and we have outsourced our carbon emissions to them. If only I had per capita consumption data – from the factory to the consumer – this picture would look really different. This is probably what I will try to create for my next post.
Inspired by this comment by /u/psychopompandparade, here’s a “what do they call the bird the English world calls turkey” chart. Data from Wiktionary, chart made using graphviz and Python.
A few etymological notes
The word turkey originally referred to guinea fowl, an African bird imported from Madagascar via Turkey (and later called guinea fowl when it was brought by Portuguese traders from West Africa). It later started referring to the North American bird, either because it was viewed as a species of guinea fowl, or because it too was brought by way of the Ottoman Empire.
The French dinde (a contraction of poulet d’inde) and its various derivatives is based on the misconception that the New World was Eastern Asia. The Greek γαλοπούλα looks like it derives from Γαλλία “France” + πουλί “bird”, but actually the prefix is a contraction of the Venetian galo d’India “Indian cock”.
The Dutch (and Scandinavian) kalkoen refers to Calicut, which is modern day Kozhikode in Kerala. No idea why.
Some of the larger datasets I know of: this one and this one
It contains transcript data for 5,850 complete conversations. It is a paid dataset; however, many universities that already have a membership can get it for free.
SPGISpeech We are excited to present SPGISpeech (rhymes with “squeegee-speech”), a large-scale transcription dataset, freely available for academic research. SPGISpeech is a corpus of 5,000 hours of professionally-transcribed financial audio. In contrast to previous transcription datasets, SPGISpeech contains a broad cross-section of L1 and L2 English accents, strongly varying audio quality, and both spontaneous and narrated speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted, including capitalization, punctuation, and denormalization of non-standard words. You can read more about SPGISpeech here.
There are tools out there like snapcrawl, which takes snapshots of websites, so I have clean UI images; the problem is generating distorted UI images. I have to manually distort them in a photo editor, and it’s taking a lot of time to generate the distorted images. I’m looking for an already existing repository of clean and distorted UI images, or even a tool that will automatically distort the UI.
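In the meantime, one low-effort option is to script the distortion rather than doing it in a photo editor. Here is a minimal sketch, assuming Pillow and NumPy are available; the blur/noise parameters and file names are purely illustrative, not from any existing repository or tool:

import numpy as np
from PIL import Image, ImageFilter

def distort(path_in, path_out, blur_radius=2, noise_std=15):
    # Load the clean screenshot and apply a Gaussian blur
    img = Image.open(path_in).convert("RGB")
    img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    # Add Gaussian pixel noise and clip back to the valid byte range
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, noise_std, arr.shape)
    Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)).save(path_out)

distort("clean_ui.png", "distorted_ui.png")  # hypothetical file names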
In 2017, in which nation did people, on average, work the most hours per year? The nation where people work the most is Cambodia, where the average is 2,455 hours per year. Next, about 18 hours lower on average, is Myanmar. These two nations are the only ones in the world that exceed 2,400 hours per year. In third place is Mexico, with 2,255 hours, followed by a series of countries, including Malaysia and South Africa, with values of 2,200 and 2,250 hours.
Tamil and Malayalam have some similarities but are quite different; Malayalam diverged from Old Tamil over a thousand years ago.
Then there are Telugu and Kannada, which diverged relatively recently from each other. Telugu is sometimes called “sweet Kannada”. There’s a really cool historical chart of how the scripts diverged in the Hampi museum. (Managed to find it: )
Besides those 4 big ones, you have other minor languages typically sharing one of the scripts from the big 4 or devanagari. Tulu for example.
Then there’s Konkani spoken on the west coast which is Sanskrit/Marathi descended.
So I’d say Telugu and Kannada are similar enough to be compared to Romance languages. But they’re quite different from Tamil, which in turn is quite different from Malayalam. And then those four are completely different from Hindi and other Sanskrit derivatives; I know most Hindi/Urdu speakers can somewhat pass in Punjabi, but I don’t think the same is true for Bengali or Marathi.
The author used data from the article “Production, use, and fate of all plastics ever made” by Geyer et al. (2017) to point out the significance of plastic waste generation in the packaging industry.
The only way we can make a meaningful change with our plastic pollution is if large corporations find an alternative to single-use plastic packaging. Plastic is an incredibly useful material and has proved to be cheaper and greener to produce. But even recycling some plastics causes more waste and unfavorable byproducts than the original production, and most plastics simply don’t break down in a landfill. Vicious loop indeed. Producing plastic is good, but using it is bad, relatively speaking of course; ideally, not using plastics at all would be the solution. But how would we package stuff, really? That’s one hell of a problem to solve.
In this dataset you can find real and nominal silver and gold prices from 1791 to 2020. The differences between real and nominal prices are explained as follows:
· Nominal values are the current monetary values.
· Real values are adjusted for inflation and show prices/wages at constant prices.
· Real values give a better guide to what you can actually buy and the opportunity costs you face.
Example of real vs nominal:
· If you receive an 8% increase in your wages from £100 to £108, this is the nominal increase.
· However, if inflation is 2%, then the real increase in wages is approximately 8% - 2% = 6%.
· The real wage is a better guide to how your living standard changes. It shows what you are actually able to buy with the extra increase in wages.
· If wages increased 80%, but inflation was also 80%, the real increase in wages would be 0% – in effect, despite the monetary increase in wages of 80%, the amount of goods and services you could buy would be the same.
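As a quick sanity check, the same arithmetic in a few lines of Python: the subtraction rule is the rough approximation used above, while dividing the growth factors gives the exact figure.

# Nominal vs real wage increase, using the example figures above
nominal_increase = 0.08   # 8% wage rise (£100 -> £108)
inflation = 0.02          # 2% inflation

approx_real = nominal_increase - inflation                  # rule of thumb: 6%
exact_real = (1 + nominal_increase) / (1 + inflation) - 1   # ~5.88%

print(f"approximate real increase: {approx_real:.2%}")
print(f"exact real increase: {exact_real:.2%}")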
Here is an analysis-ready version of the United States Consumer Product Safety Commission’s National Electronic Injury Surveillance System (NEISS) data from 2016-2020.
The bit.io repository also links to the R script used for cleaning the data. The major data cleaning steps involved merging multiple years of data (originally needing to be downloaded year-by-year as Excel files) and translating numerical codes into more descriptive values (e.g. injury type 67 to “Electric shock”). This involved quite a bit of careful alignment across years of data.
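The cleaning itself was done in R, but the code-to-description step looks roughly like this in pandas; the file name, column names, and mapping dictionary below are illustrative only, not taken from the original script:

import pandas as pd

diagnosis_map = {67: "Electric shock"}   # from the example above; a real map covers every code

df = pd.read_csv("neiss_2016_2020.csv")                     # hypothetical merged file
df["diagnosis_desc"] = df["diagnosis"].map(diagnosis_map)   # numeric code -> description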
Some key data characteristics:
More than 1.5M records of product-related injuries
Five complete years of data
Categorical columns indicating (1) which product(s) were involved in the injury; (2) which body part(s) were harmed; and (3) what the diagnosis was
Weights to extrapolate from individual records to nationally-representative estimates
Narrative summaries of each incident (I think there’s a lot of potential for some kind of NLP project with these summaries).
Projects the author has done with the data so far (more self-promotion, and hopefully some inspiration):
Local Outlier Factor Analysis with Scikit-Learn: includes a section applying outlier analysis to the NEISS data and concludes that holidays are outliers in terms of injury patterns (July 4 fireworks).
Question: I have been researching some science problems that could be answered with queries and analysis of large or big datasets from public sources in climate, environmental data, energy, utilities, infrastructure, astronomy, economics, labor and industry, education, sociology, and health.
Probably the largest obstacle in many of these public datasets is the inability to conveniently run ad hoc analyses on just the data you need. Often the data lives in massive stores of file archives instead of databases.
Most convenient would be datasets already stored in a database that has some in-database processing and analytics available to aggregate or filter the data being queried before a data transfer.
Are there any public large datasets with such a convenient interface or API? The best I know of is perhaps:
-Socrata SODA API for mixed/misc. gov data
-CIA Factbook API
-FRED API and web viz for economic data
-Google Finance API
-Census.gov API
-Weather.gov API (forecasts/alerts only)
-Skywatch API
-USGS Earthquakes API
-openFEMA API
-openFDA API
-NHS APIs
-WHO GHO OData API
-Johns Hopkins COVID-19 API
-Google ngram API
-Kaggle (usually deprecated/non-updated sets)
-AWS Open Data (no free basic processing at all)
-BigQuery Public Datasets (1TB of free queries; a 1TB scan quota is quite limiting)
but most of these support only extremely basic exact-match filters. I’m looking for better examples before investing time transferring large amounts of data just to filter other datasets down.
You can do this on bit.io; we saw this same problem and built a platform that lets you query across real databases using SQL. So, for example, you can take the NYTimes COVID data (nytimes_covid) and the JHU COVID data (csse_covid_19_)
And you can write SQL that joins them by FIPS code:
SELECT
state, county, date, filename, cases,
"bitdotio/nytimes_covid"."us_counties".deaths AS nytimes_deaths,
"bitdotio/csse_covid_19_data"."csse_covid_19_daily_reports_us".deaths AS csse_deaths
FROM
"bitdotio/nytimes_covid"."us_counties",
"bitdotio/csse_covid_19_data"."csse_covid_19_daily_reports_us"
WHERE
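-- FIPS 6059 is Orange County, CA; matching date to last_update::date aligns the two sources on the same reporting day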
"bitdotio/nytimes_covid"."us_counties".fips=6059 AND
"bitdotio/csse_covid_19_data"."csse_covid_19_daily_reports_us".fips=6059
AND date=last_update::date
The author uploaded a dataset of MRI Scans for brain tumor segmentation. It is the training set for the BraTS competition for the years 2018, 2019 and 2020. The data contains MRI scans and expert segmentations for HGG and LGG (high grade and low grade gliomas), as well as survival data.
It can be used for tumor type classification, tumor segmentation and survival analysis.
All Digitized Texas Appeals Court Cases Since 1900 – 12GB – 696,036 cases
From the authors: We are sharing an open OSDG Community Dataset (OSDG-CD) on our GitHub. The dataset contains thousands of text excerpts labelled by citizen scientists from around the world with respect to the UN Sustainable Development Goals (SDGs).
The data can be used to derive insights into the nature of SDGs using either ontology-based or machine learning approaches.
OSDG-CD is a direct contribution of hundreds of volunteers who have already taken part in the OSDG Community platform citizen science exercise. The OSDG Community Platform is an ambitious attempt to bring together volunteers and subject matter experts from all around the world to create a large and accurate source of textual information on SDGs.
How does it work? We use publicly available texts such as publications, reports and other written data sources. Each text is broken down into smaller pieces of paragraph length, and these smaller pieces are then labelled by the Community volunteers.
We are making this data open to help researchers discover new insights into and meaningful connections among Sustainable Development Goals. We would like to know what you discover in the data. So do not hesitate to share with us your outputs, be it a research paper, a machine learning model, a blog post, or just an interesting observation. If you are using the dataset in a research paper, you can attribute the dataset as OSDG Community Dataset v2021.07.
The author scripted Blender to generate a synthetic dataset for 600 unique lego parts with multiple parts per image resulting in 900,000 labeled class instances!
Posting this here to be more visible to a Google search on the off chance someone else could use it. It was used to generate Pokemon names for an AI hobby project I worked on some months ago:
The source for chosen words was Bulbapedia. The dataset had to be compiled manually as the English name origins didn’t lend themselves well to being scraped.
This map rescales each country by its CO2 emissions, rounded to the nearest 10 megatons. I did my best to preserve country shape and relative locations. Each square is 10 Mt of CO2 emitted in 2019. Countries that did not reach 5 Mt were lumped together with other countries as black squares.
Data is from here. This was made in a combination of GIMP, Python, and Inkscape.
From the author: This chart was created for the Policy chapter of the Renewables 2021 Global Status Report and is based on data from the World Bank, Energy Climate Intelligence Unit, IEA Global Electric Vehicle Outlook and the REN21 Policy Database. For more information read Chapter 02 (Policy Landscape) of the report.
Histomap: Visualizing the 4,000 Year History of Global Power
Life expectancy at birth across the US, the EU, India, and China. Data for 2019.
For comparison, in ancient Greek times life expectancy was 25 years, in medieval Europe it was 35 years, in early 19th century England it was 40 years, and in 1950 the world average life expectancy was 45 years.
Data source: Twitter API. Visualization generated by my application, thevisualized.com: every time Cristiano Ronaldo and Lionel Messi were trending on Twitter in 2020 (full HD video). Cristiano Ronaldo (93.5M followers) was trending 168+ times in 2020, with #Ronaldo trends averaging 70.8K tweets. Lionel Messi (TeamMessi, 3M followers) was trending 245+ times in 2020, with #Messi trends averaging 97.3K tweets. Find more on the Visualized Twitter Timeline of each player, or see what’s currently trending worldwide.
This visualization showcases the proportion of energy generation in each state by carbon and carbon-free energy sources. A greener shade correlates to a higher proportion of green energy generated in that state. Let me know any suggestions or insights you might have! Also, would energy consumption data be more interesting than energy generation? Let me know!
The Pacific Northwest is a bright light of non-carbon energy in the form of nuclear.
Also, S/O to that midwestern corn and its role in the biofuels industry.
The World Index of Moral Freedom is sponsored and published by the Foundation for the Advancement of Liberty, a libertarian think tank based in Madrid, Spain. The Index is an international index ranking one hundred and sixty countries on their performance on five categories of indicators:
religious freedom (taking into account both the freedom to practice any religion or none, and the extent of religious control over the state);
bioethical freedom (including the legal status of abortion, euthanasia and other practices pertaining to bioethics, like surrogacy or stem cell research);
drugs freedom (including the legal status of cannabis and the country’s general policy on hard drugs);
sexual freedom (including the legal status of pornography and sex services among consenting adults, and the country’s age of sexual consent), and
family and gender freedom (including women’s freedom of movement, the legal status of cohabitation of unmarried couples, same sex marriage and the situation of transgender people).
The religious freedom indicator remained almost unchanged at 97.13 vs 97.12
The bioethical freedom indicator decreased slightly from 89.38 to 88.13
The drugs freedom indicator increased significantly from 45.75 to 65.18
Sexual freedom decreased dramatically from 73.50 to 30.00
Gender & family freedom decreased slightly from 90.00 to 88.00
The 2020 report attributes some amount of the loss to methodological changes (most severely impacting Cambodia’s ranking), but the decline seems to be driven primarily by the sexual freedom indicator. Here’s what the 2020 report has to say on that category:
Sexuality indicators
How free are sexual intercourse, pornography and the provision of sex services
As the sexual revolution keeps spreading to reach all places, the amount of government interference provides useful information on a country’s individual freedom on moral decisions. In this category, indicator weights are more distributed: 40% is allocated to the free consumption of pornographic content. This is significant because censorship still plays a role in many countries, while technology makes it increasingly harder for states to enforce. 35% is reserved for the legal status of prostitution, and 25% for the legal age of sexual consent.
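For illustration, a category score like this is just a weighted sum. A minimal sketch using the 40/35/25 weights from the report; the sub-scores below are made up, not actual country values:

def sexual_freedom_score(porn_score, prostitution_score, consent_age_score):
    # Weights from the 2020 report: 40% pornography, 35% prostitution, 25% age of consent
    return 0.40 * porn_score + 0.35 * prostitution_score + 0.25 * consent_age_score

print(sexual_freedom_score(80, 20, 50))   # hypothetical sub-scores -> 51.5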
It passed in 2018 and made it a lot harder to advertise prostitution online
The source doesn’t even break down the scores. I don’t understand how there could be such a discrepancy between France and Spain for example, anyone got a clue?
Edit: nvm, it does, it’s just weirdly formatted. The gist is France heavily criminalizes drugs, Spain does not. All of the other differences between the two countries are mostly ignored by this study. Besides French draconian drug laws, bioethical freedom may account for the discrepancy too. Euthanasia in Spain is legal and publicly funded, framed within the public health system. In France, not so much.
Deaths from all causes in the United States: year-to-year comparison 2015-2021 (through week 30)
Source: CDC (export weekly deaths by state and age file)
Finns: Often pessimistic by nature and reserved about their emotions, drink too much, it’s dark, the winters are cold and hard psychologically. Also Finns: We are the happiest!
The discrepancy comes from the fact that the happiness study is, paradoxically, not actually about happiness. The World Happiness Report is not an emotional study at all, it is rather a look at the quality of life (GDP, education, health, security, freedom,…) around the world.
It should be labeled as potential happiness, not actual happiness, because actual happiness is impossible to measure. But I wouldn’t say measuring smiles in the street is a good way either; in many cultures smiling is customary, not necessarily an indication of happiness.
Happiness can be measured by self reporting in a survey. How happy are you with life right now 1-10? It’s a subjective data point but happiness is also subjective after all.
Measuring smiles in the street as a way to measure happiness seems insanely ridiculous.
Self-reporting is also a weird thing. People tend to measure their life against their surroundings and are affected by small-scale personal events.
If I lived my entire life in safety and comfort, I am most likely to take those things for granted and not consider them as contributing to my happiness. My personal problems, on the other hand, can affect my emotional state quite a lot. If my mom is ill or I’ve had a bad fight with my best friend, I would be far from happy no matter where I live.
I actually found an interesting graphic here that breaks down where most of our crashes and fatalities come from! It doesn’t include winter conditions as a factor, so I can’t use that information, but according to this, single-car, alcohol-related accidents are our #1 killer. We have a rampant drinking-and-driving problem here.
Classic Machine Learning Algorithms
Each chapter in this book corresponds to a single machine learning method or group of methods. Each method includes an explanation of the concepts and a Python implementation that constructs the algorithm from scratch.
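To give a flavor of what “from scratch” means here, this is a minimal sketch of one such method (ordinary least squares fit by gradient descent) written directly with NumPy; it is an illustration in the same spirit, not code taken from the book:

import numpy as np

def fit_linear_regression(X, y, lr=0.05, epochs=2000):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        y_hat = X @ w + b
        grad_w = (2 / n) * X.T @ (y_hat - y)   # dMSE/dw
        grad_b = (2 / n) * np.sum(y_hat - y)   # dMSE/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: recover y = 3x + 1 from noisy samples
X = np.random.rand(200, 1)
y = 3 * X[:, 0] + 1 + np.random.normal(0, 0.05, 200)
print(fit_linear_regression(X, y))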
SQL is definitely one of the most fundamental skills needed to be a data scientist.
This is a comprehensive handbook that can help you learn SQL (Structured Query Language), and it can be downloaded directly here
Credit: D Armstrong
How important in life is family, work, friends, leisure, religion, and politics?
Answers from the World Values Survey. Results from each region of the world in separate images.
Data from the World Values Survey and European Values Survey, wave 7. All data (as well as from previous waves) can be accessed and analyzed online here.
In general about 1000-3000 people answered the survey in each country. Many countries, for instance India, did not take part in the survey this wave. Some have however been part of previous waves, and their answers can be analyzed online.
Core concepts of machine learning
The history of ML
ML and fairness
Regression ML techniques
Classification ML techniques
Clustering ML techniques
Natural language processing ML techniques
Time series forecasting ML techniques
Reinforcement learning
Real-world applications for ML
-Start with a pre-lecture quiz
-Read the lecture and complete the activities, pausing and reflecting at each knowledge check.
-Try to create the projects by comprehending the lessons rather than running the solution code; however, that code is available in the /solution folders in each project-oriented lesson.
-Take the post-lecture quiz
-Complete the challenge
-Complete the assignment
It is a pretty comprehensive course with all the material you need to learn. Enjoy! Check it out here:
A database collecting identifying information for US schools, K through 12 and post-secondary.
The Rosenbrock dataset suite for benchmarking machine learning algorithms and platforms
This post introduces the Rosenbrock function as a way to measure a machine learning platform’s data capacity, training speed, model accuracy, and inference speed. Rosenbrock datasets are fully consistent and noise-free; for this reason, they are a powerful alternative to datasets from popular repositories for benchmarking.
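A minimal sketch of what such a dataset looks like, using the classic two-dimensional Rosenbrock function; the sample count and input range here are illustrative, not the suite’s actual configuration:

import numpy as np

def rosenbrock(x, a=1.0, b=100.0):
    # f(x, y) = (a - x)^2 + b * (y - x^2)^2
    return (a - x[:, 0]) ** 2 + b * (x[:, 1] - x[:, 0] ** 2) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(100_000, 2))   # features
y = rosenbrock(X)                                # deterministic targets, no noise added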
Poverty headcount ratio at $1.90 a day (2011 PPP) (% of population)
Prevalence of violent crime
Cost of living. I want to know given $X USD, how far that gets you for rent, groceries, eating out, etc. There seems to be tons of stuff that tries to estimate this but what’s the best one? Is the Big Mac index a good reference or a meme?
Average historical temperature, and precipitation (rain) in January
Whether it’s land-locked or coastal (for beaches. I’ve been land-locked my whole life.)
Native languages spoken
Population density
Poverty headcount ratio at $1.90 a day is the percentage of the population living on less than $1.90 a day at 2011 international prices.
World Bank, Development Research Group. Data are based on primary household survey data obtained from government statistical agencies and World Bank country departments. Data for high-income economies are from the Luxembourg Income Study database. For more information and methodology, please see PovcalNet.
A Dataset of Cryptic Crossword Clues
A dataset of cryptic crossword clues, collected from various blogs and publicly available digital archives.
It’s a little over half a million clues from cryptic crosswords published in British newspapers over the past twelve years.
The Upworthy Research Archive, a time series of 32,487 experiments in U.S. media
The dataset is available under the Creative Commons Attribution 4.0 International License on the Upworthy Research Archive website at natematias.com, with an archival copy on the Open Science Framework. The website includes a description of each column in the data, a list of resources and papers based on the dataset, and guidance for meta-analyzing the included experiments. The data are stored as a plain-text, ASCII-encoded, comma-delimited CSV file.
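Since it is a single plain CSV, loading it is a one-liner; a minimal sketch, with a hypothetical file name standing in for whatever you download from the archive:

import pandas as pd

df = pd.read_csv("upworthy-archive.csv")   # plain-text, comma-delimited file
print(df.shape)
print(df.columns)   # column meanings are documented on the archive website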
Notes from the author: This map depicts the number of open missing persons cases per 100k people in each US State as of September 23, 2021. NamUs collects data from law enforcement agencies and provides data and services to forensic investigators to locate and identify missing persons and unidentified bodies. It is important to note that while efforts are currently underway for more accurate counts at local, state, and national levels, the true number of open missing persons cases among indigenous persons is unknown due to systemic issues. Of the current number of open missing persons cases, approximately 3.6% are for indigenous persons but this number is estimated to be higher. The ongoing Gabby Petito investigation inspired me to look further into this topic and I was personally surprised to see the shocking numbers of missing persons in many of these states. I encourage people to look through the database provided to understand the issue further and to read about the efforts to ensure more accurate counts of the missing indigenous persons in this country. As always, I am open to constructive feedback and questions about this map. So please leave a comment or question and I will try my best to answer you soon. Thank you for reading and please be kind and look out for each other. Stay awesome Reddit.
Source: IHME’s Global Burden of Disease Study 2019 [link]; Fauci et al. JAMA 2019 [link]; Vela et al. AIDS 2012 [link]
Tools: R
HIV is one of the most severe epidemics in human history. It disproportionately affects demographic groups with limited access to steady, quality healthcare, such as racial/ethnic minorities, people with alternative sexual orientations, people who inject drugs, and people living in poverty. Only coordinated, global action has made possible the improvements that we can now see, after over 30 years of fighting the epidemic. Ending the HIV/AIDS epidemic requires innovative solutions to understand the healthcare access barriers in each setting and finally provide care (diagnostics, prevention, and sustained antiretroviral therapy) to all people at risk of and living with HIV.
For more details on the evolution of the epidemic and the actions deployed to contain it, see this very interesting timeline created by HIV.gov, and here
The top goal scorers in 40 years of elite football (soccer)
The top goal-scorers in elite club football since 1980
This viz aggregates all passes on a grid with a 1-meter step. That means all distances and passes within a square meter of the football pitch are represented by a line with their average length and direction, so this viz is an ‘averaged’ picture.
The data includes:
– Champion League 1999 – 2019
– FA Women’s Super League 2018 – 2020
– FIFA World Cup 2018
– La Liga 2004 – 2020
– NWSL 2018
– Premier League 2003 – 2004
– Women’s World Cup 2019
Figures are for all senior club matches played in a league or cup in England, Spain, Italy, Germany or France, or in international tournaments such as the Champions League, Europa League and Cup Winners’ Cup.
The 186 players shown are those who have scored at least 20 league goals in any single season of one of Europe’s big five leagues since 1990, and have a career-average goal-scoring rate across senior matches in these five countries of at least 0.4 non-penalty goals per 90 minutes.
Top 40 single-year* goalscoring performances since 1980
Data from: IEA, Global Carbon Project, IPCC, FAO, World Resources Institute
Viz Made with Google slides
Some interesting comments:
1- I find it crazy that there are about 30,000 planes or so around the world and about 1,500,000,000 personal cars (that’s 50,000 times as many), yet cars only produce 3.5x as much pollution. Even crazier is how cargo ships, which spew out some of the most foul crude-oil emissions, produce the same amount as planes. I would have never thought lol
2- Utility-scale solar is increasingly cheaper than the operational expense of maintaining a coal plant, and grows ever more so. Economics is no longer the central problem for coal; the obstacle is entrenched fossil capital throwing all the political heft it can behind a losing hand (as well as, to a certain extent, pressure from military planners for autarky)
3- In Canada we have an issue where declining emissions from almost all sources and places are counteracted by rising emissions from Alberta and Saskatchewan, mostly due to their fossil fuel industries. Where in this graphic would the emissions created by the extraction, processing and distribution of fossil fuels go? In the other industrial-usage categories?
4- Switching road vehicles to EVs will get rid of ~6 Gt.
(increased direct electricity offset by reductions in refining electricity, reductions in fugitive emissions, reduction in ocean freight emissions).
This is underway, but the more we push (as individuals and as voters) the sooner it can happen.
Data from CDC WONDER query. Visualization with Tableau.
Some notable comments:
1- Halved in a decade? What was it like before? Presumably on a decline as steep as it is now?
2- It’s fascinating to me as I don’t see this as being so different from 2010. I guess sometimes we’re just blind to the short term changes going on around us.
3- We are living in the lowest-violence period in history. We have included lots of stuff that used to be accepted (like child abuse and wife beating), and it’s still way down. Kids today are just so much better than old people. Old people? They couldn’t wait to get knocked up!
4- It’s not really that they couldn’t wait to get knocked up. It’s generally either that they were actually capable of earning an income necessary to support a family much younger than today (if we are looking at 50s and 60s data, when the average marriage age was at its lowest recorded in US history), or that they didn’t want to get pregnant but didn’t have sex education or birth control. As far as violence goes, unfortunately the pandemic-era data is looking worse than 2019.
Share of men and women who smoke daily per country
This graph represents the highest-paid athletes of all time as of November 2021 adjusted for inflation.
Thanks to Sportico and Kurt Badenhausen for the data and information used to create this graph. Among the sources were confidential interviews, data from Forbes on earnings from endorsements, memorabilia and appearances, and payments received for participating in the sports the highest earners dominated. Forbes also looked at the length of careers and made adjustments for inflation.
Through endorsements, off-the-court investments, and business dealings, these athletes have amassed fortunes that will live on with their legacies for generations. Obviously and unsurprisingly, Michael Jordan, whom many consider the greatest athlete of all time, sits at the top, followed by Tiger Woods, who is often listed as the best golfer of all time, two other historically great professional golfers, and various other sports stars who dominated their respective sports.
Fun Fact: No other professional sports team has more members on the list than the Los Angeles Lakers – thanks to LeBron James. Nearly half of those on this graph are still performing today – some even in their prime.
Original StatsPanda Visualization
Source: Sportico, Forbes
Tool: Canva/ Adobe Prototype/ Microsoft Excel/ Magic *wink wink
The original viral map (the first image on that Wikipedia page) was cool, but it almost kinda annoyed me when I saw it on Reddit several years ago, since they didn’t account for the distortion of the map projection (it looked like a circle on that image but wouldn’t look like a circle on a globe) and I didn’t know if that was the smallest they could’ve made that circle. A Singaporean professor named Danny Quah apparently had the same thoughts, and he found a circle (that would actually be a circle on a globe) of radius 3300 km instead of ~4000 km as in the original image; that’s the second image on that Wikipedia page.

I achieved a better result than Quah for the 50% circle (3281 km instead of 3300 km) since I analyzed the population data at a <1 km resolution instead of the 100 km resolution he used (I’d guess we actually used the same population data, since he also used 2015 data and there aren’t many competing datasets for this sort of thing). I was able to do this without the code taking 10,000 to 100,000,000 times longer (100² to 100⁴, depending on what exactly it means to be analyzing the data at a 100 km resolution) by using this technique https://en.wikipedia.org/wiki/Summed-area_table and generating a single circular kernel for each latitude. Even with this considerable speedup, the population data was so high resolution (much higher resolution than this image) that I had to run the program overnight. I find it interesting that unlike both the original circle and Quah’s circle, my 50% circle doesn’t include any of the island of Java (it’s better to be further north to get more of northeastern China, Korea and Japan, it seems).
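For anyone curious, the summed-area-table idea mentioned above boils down to a couple of cumulative sums. A minimal NumPy sketch: the grid here is random, standing in for the real population raster, and the real analysis additionally needs a circular kernel per latitude.

import numpy as np

def summed_area_table(pop):
    # S[i, j] = total population in pop[:i, :j]; padded with a leading zero row/column
    return np.pad(pop, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def window_sum(S, r0, r1, c0, c1):
    # Population in rows r0..r1-1, cols c0..c1-1, answered in O(1) per query
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

pop = np.random.poisson(5, size=(1000, 2000))   # stand-in population grid
S = summed_area_table(pop)
assert window_sum(S, 10, 20, 30, 50) == pop[10:20, 30:50].sum()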
Here are some maps I made using this program where instead of specifying a percentage of the world’s population and asking the program to find the smallest circle containing at least that many people, I specified the radius of the circle and asked the program to find the most populous circle of that radius:
Created in Adobe Illustrator. Data from Rosenfeld, et al. PNAS 2019, “Disintermediating your friends: How online dating in the United States displaces other ways of meeting.” Additional explanation here
I plan on using it once or maybe twice every month for backup, otherwise it will just sit somewhere as a backup. Which one would you all recommend? Thanks.
I have five 10TB drives in a RAID5 configuration. About 25TB are used. I have quite a few miscellaneous drives of different sizes, ranging from 1TB to 8TB. Is there a way I can use all those individual drives to back up my data and somehow keep track of the files, without putting them in a NAS or LVM-ing them? What are your backup solutions?
Hey, I have a question: how can I download videos embedded on a website which I pay for and have an account with, for offline use on a USB drive? And does downloading said videos send data/info to the host indicating that I am indeed downloading these videos?
Suppose that we want to back up the boot volume of macOS/Unix so that when the system fails and becomes unbootable, we can restore it to a bootable state as fast as possible. Can restic/Kopia/etc do this job? How do I run them when the machine is not bootable (but perhaps a recovery terminal is usable)? And are any other tools more suitable for this job? NOTE: the boot volume contains data, so the incremental backup features for data volumes are all needed. (Prefer using open-source software) [Originally posted here: https://www.reddit.com/r/selfhosted/comments/16u1zlq]
I recently picked up this set of microfiches for Caterpillar equipment at a thrift store. I want to digitize all of them, and I’ve found a library that has digital readers I can use. But it’s going to be a massive undertaking, so I wanted some help in digitizing them. Where do I host these PDFs after I digitize them? Would copies of these already exist on the internet, or is this locked behind a paywall on the SIS system by CAT? I’m sort of new to this, so I’d appreciate any help with digitizing and hosting them. For now, I was thinking I’d make a massive dump and upload it to the Internet Archive.
Hi guys, I have a modest home server with 5x2TB drives running in RAID5. It also runs a handful of Docker containers and is doing pretty fine. The thing is, it’s getting close to capacity, mainly due to movies and shows. I was thinking about getting a small two-bay NAS, putting a big drive (18TB or so) in it, and offloading all my media to that. That way the server can be used to back up my other systems and more important things. If the media is lost it would suck, but not in the same way as losing all the baby pictures and work files. I was thinking of also using the new drive to back up the server as well. I know in a perfect world I would have an off-site backup too, but that isn’t the case. Would this be a sensible way of going about data redundancy, or am I missing something? Any other recommendations you guys would have? Many thanks for the advice!
I'm working on an academic research project. I have a Twitter basic developer account and need to scrape the followers and following lists of a user using Tweepy. I've been able to retrieve user IDs successfully, as well as Tweet IDs; however, I get errors while scraping the followers and following lists (403 Forbidden) and a remote disconnection error while scraping Tweet text (even though the Tweet IDs get retrieved just fine). Is it possible to scrape the followers/following? I need help!
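For reference, the v2 follower lookup in Tweepy looks roughly like the sketch below; whether it succeeds depends on your API access tier, and a 403 Forbidden usually means the tier simply does not include that endpoint. The token and handle are placeholders.

import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")   # placeholder token

user = client.get_user(username="some_account")            # hypothetical handle
followers = client.get_users_followers(id=user.data.id, max_results=1000)
for f in followers.data or []:
    print(f.id, f.username)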
I apologize if this is lazy (because it is), but it is pretty late here where I am, this happened very recently (some hours ago), and I am pretty scared right now, so I will simply copy and paste comments from another sub discussing the changes that can explain the situation:

"I really don't like that they've effectively replaced CS:GO – like now Steam says I reviewed CS2 in 2013 lol. I've always liked being able to go back to 1.6 and Source, but it seems GO doesn't get the same museum/final curtain."

"Wait, so you can't play CS:GO anymore? Usually you can still play the older CS games."

"You can (or at least will be able to) downgrade the game to the last CS:GO build and play on community servers, I imagine. Similarly to how it's done with the 2012 CS:GO build."

"You can already do it, it's listed as 'csgo_demo_viewer - 1.38.7.9' in the betas. Seems to be the final build of CS:GO. You can host and join servers fine. Only thing gone is the matchmaking, which is understandable. People getting upset at 'CS:GO being gone' have no clue what they're talking about. Valve probably kept the same appid to avoid having to deal with messing around with player inventories."

Now this is what I most fear: the potential staggering loss of decade-old custom mods and community servers.

"What happens to all the previous mods the game used to have? Warcraft servers, Zombie, Surf maps. Are they all essentially gone until someone decides to update them? EDIT: All mods are essentially dead and have to be updated. RIP."

.....no fucking way, is Valve (or whatever the hell that company is now) now outright trying to kill decade-old mods and community servers? Just what the actual fuck? I cannot believe that I am actually reading this; has that dreaded day that I always feared finally come? Please tell me that all of these maps and mods are well preserved. Speaking from experience as a CS:S Zombie Escape player, from what I know pretty much all CS:GO ZE maps originate from CS:S; I am not sure if the maps had to be changed a lot to fit into CS:GO. I at least hope that the digital preservation/lost media community starts looking into old Source mods; they are an extremely deep rabbit hole of memories going back almost two decades by now. God forbid they try doing this with TF2; custom community mods are the sole thing I have ever played on TF2 since 2009, and modded servers are Source engine games to people like me.

"Looks bad. A local surf server dev said he's going to focus on Momentum, but it's going to be a huge project for any server. Gotta wait on Source 2 mods first. Then start rebuilding plug-ins. Maps have to be ported as well. I haven't surfed much in a while, but it was a huge part of my life for about 5 years. Rip."

"You can still play on a beta branch of CS2 for now. Who knows how long that stays up though."

"Those days were already gone with CS:GO I feel like, way less active than CS 1.6 or CS:S. In CS2 there isn't even a native server browser anymore, it just opens Steam's Game Servers. Not even Valve's official game modes like Arms Race, Demolition, Danger Zone etc. are in the game currently. It seems a bit rushed so they could stay in the summer release window."

Overall, is there a way for people to preserve whatever they still have installed of custom CS:GO content, and CS:GO itself?
They are selling these new at some decent prices, can anyone report on upsides/downsides to these? I am wondering about usage in a desktop workstation. (Edit for usage clarification).
I was comparing cloud prices, and when looking at MEGA I noticed they have the run-of-the-mill 2TB-for-10-bucks plan, but also a 16TB plan for 25 bucks (if paid yearly), which seems to be a really good price for cloud storage. Is it good, or am I overlooking something? I know they have a transfer quota (16TB a month in this case, so no big deal personally), but is there something else they don't tell you? Are they reliable?
I plan to purchase a Marvell® 88SE9230 to use on Windows 10 for RAID 1. What do you think? Is it good? How good is its software? Is it easy to use? If the controller dies, can I just use the disks as standalone? This is possible with the Intel RAID controller embedded on the motherboard.