How does a database handle pagination?

It doesn’t. First, a database is a collection of related data, so I assume you mean a DBMS or a database language.

Second, pagination is generally a function of the front-end and/or middleware, not the database layer.

But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row number, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.
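
For illustration, here is a minimal sketch of LIMIT/OFFSET paging using Python’s built-in sqlite3 module (the users table and its contents are made up for the example):

```python
import sqlite3

# Hypothetical users table with 100 rows, created just for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1, 101)])

def fetch_page(page, page_size=10):
    # ORDER BY gives a stable row order; OFFSET skips the earlier pages.
    return conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, (page - 1) * page_size),
    ).fetchall()

page2 = fetch_page(2)  # rows 11-20
```

Note that the engine still has to produce and discard the skipped rows, which is one reason deep offsets get slow.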

That may not be the most efficient or effective implementation, though.

So how do you propose pagination should be done?

In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.

Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.
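
A sketch of that idea, with a plain dict standing in for Redis (a production version would use a Redis client such as redis-py and set a TTL on each key; all names here are hypothetical):

```python
cache = {}    # stand-in for Redis; real code would call a Redis client
db_hits = 0   # counter to show the database is hit only on a cache miss

def query_database(page):
    # Pretend this is an expensive SQL query returning one page of rows.
    global db_hits
    db_hits += 1
    return [f"row{i}" for i in range((page - 1) * 10, page * 10)]

def get_page(query_key, page):
    key = f"{query_key}:page:{page}"      # e.g. "users:page:2"
    if key not in cache:
        cache[key] = query_database(page)  # fill the cache only on a miss
    return cache[key]

first = get_page("users", 2)
again = get_page("users", 2)  # served from the cache; no second DB hit
```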

What if you have 30,000-plus rows? Do you fetch all of that from the database and cache it in Redis?

I feel the most efficient solution is still offset and limit. It doesn’t make sense to use a database and then end up putting all of your data in Redis, especially data that changes a lot. Redis is not for storing all of your data.

If you have a large data set, you should use offset and limit; getting only what is needed from the database into main memory (and maybe caching those rows in Redis) at any point in time is very efficient.

With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.

More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows, ideally to a single page or a few pages (which are appropriate to cache in Redis).
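
For instance, restricting rows with search criteria before paginating might look like this (sqlite3 again; the table, columns, and data are invented for the example):

```python
import sqlite3

# Hypothetical table: 100 "alice*" rows followed by 100 "bob*" rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    [(f"{prefix}{i}",) for prefix in ("alice", "bob") for i in range(100)])

def search_page(name_prefix, page, page_size=25):
    # The WHERE clause shrinks the result set before any paging happens.
    return conn.execute(
        "SELECT id, name FROM users WHERE name LIKE ? "
        "ORDER BY id LIMIT ? OFFSET ?",
        (name_prefix + "%", page_size, (page - 1) * page_size),
    ).fetchall()

bobs = search_page("bob", 1)  # first page of the filtered result
```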

It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or this small set of records.


Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.

I know that MySQL has LIMIT offset, size; and Oracle has ROW_NUMBER() or something like that.

But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?

If it does the full fetch every time, then it seems quite inefficient.

If it does full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows that it has already fetched all the data and just needs to extract next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?

Answer: First of all, do not assume in advance that something will be quick or slow without taking measurements, and do not complicate the code up front to download 12 pages at once and cache them because “it seems to me that it will be faster”.

YAGNI principle – the programmer should not add functionality until deemed necessary.
Do it in the simplest way (ordinary pagination, one page at a time), measure how it works in production; if it is slow, try a different method, and if the speed is satisfactory, leave it as it is.
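
One such different method is keyset (seek) pagination: instead of OFFSET, the client remembers the last key it saw, and the next query filters on it, so the database never scans and discards the skipped rows. A minimal sketch with sqlite3 (table and data are made up):

```python
import sqlite3

# Hypothetical events table with 50 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"e{i}",) for i in range(1, 51)])

def next_page(last_id, page_size=10):
    # Seek past the last id seen; an index on id makes this a range scan.
    return conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

first = next_page(0)              # first 10 rows
second = next_page(first[-1][0])  # the 10 rows after the last one seen
```

The trade-off is that you can only step forward from a known position; jumping straight to page N is no longer free.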

From my own practice: an application retrieves data from a table containing about 80,000 records; the main table is joined with 4-5 additional lookup tables, and the whole query is paginated at about 25-30 records per page, roughly 2,500-3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Measurements on the production system at the server side show that the average time (the median, i.e. the 50th percentile) to retrieve one page is about 300 ms. The 95th percentile is under 800 ms, meaning that 95% of requests for a single page take less than 800 ms; when we add the transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total time is under 2 seconds. That’s enough; users are happy.

And some theory: see this answer to learn what the purpose of the Pagination pattern is.
