How does a database handle pagination?

It doesn’t. First, a database is a collection of related data, so I assume you mean a DBMS or a database language.
Second, pagination is generally a function of the front end and/or middleware, not the database layer.
But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row number, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.
That may not be the most efficient or effective implementation, though.
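As a sketch of the LIMIT/OFFSET approach described above, here is a minimal example using SQLite with a hypothetical users table (the schema and data are invented for illustration):

```python
import sqlite3

# Hypothetical schema for illustration: 100 users in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i:03d}",) for i in range(1, 101)])

def fetch_page(page: int, page_size: int = 10) -> list:
    """Emit up to page_size rows starting at a given row number, i.e. one 'page'."""
    offset = (page - 1) * page_size
    cur = conn.execute(
        # ORDER BY keeps the row order stable between successive invocations.
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset),
    )
    return cur.fetchall()

print(fetch_page(3)[0])   # first row of page 3 → (21, 'user021')
```

Note that OFFSET still forces the engine to skip over all preceding rows, which is part of why this may not be the most efficient implementation for deep pages.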

So how do you propose pagination should be done?
In the context of web apps, say there are 100 million users. One cannot dump all the users in a single response.
Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.
What if you have 30,000-plus rows? Do you fetch all of them from the database and cache them in Redis?
I feel the most efficient solution is still OFFSET and LIMIT. It doesn’t make sense to use a database and then end up putting all of your data in Redis, especially data that changes a lot. Redis is not for storing all of your data.
If you have a large data set, you should use OFFSET and LIMIT, fetching only what is needed from the database into main memory (and perhaps caching those pages in Redis) at any point in time. That is very efficient.
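A minimal sketch of that idea — fetching just one page from the database and caching it — with a plain dict standing in for Redis (the table name and cache-key format are illustrative assumptions):

```python
import sqlite3

# Illustrative setup: 1,000 users in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1, 1001)])

cache = {}  # stand-in for Redis; in production this would be a Redis client

def get_page(page: int, size: int = 20) -> list:
    key = f"users:page:{page}:{size}"
    if key in cache:
        return cache[key]            # cache hit: no database round trip
    rows = conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (size, (page - 1) * size),
    ).fetchall()
    cache[key] = rows                # cache only this page, never the whole table
    return rows
```

With a real Redis client the cached pages would also carry a TTL, so frequently changing data does not go stale.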
With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.
More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows — ideally to a single page or a few pages (which are appropriate to cache in Redis).
It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want one particular record, or a small number of records.
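To illustrate the point with a hedged sketch (the city column and the row distribution are invented for the example), a selective WHERE clause can shrink 30,000 rows to a single page before pagination even enters the picture:

```python
import sqlite3

# Invented data: 30,000 users, only a handful of whom match the search criterion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO users (name, city) VALUES (?, ?)",
    [(f"user{i}", "Lagos" if i % 1500 == 0 else "Elsewhere")
     for i in range(1, 30001)],
)

# The search criterion does the real work; LIMIT is just a safety net.
rows = conn.execute(
    "SELECT id, name FROM users WHERE city = ? ORDER BY id LIMIT 25",
    ("Lagos",),
).fetchall()
print(len(rows))  # → 20: the whole result fits on one page
```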

Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.
I know for MySQL there is LIMIT offset,size; and for Oracle there is ‘ROW_NUMBER’ or something like that.
But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?
If it does the full fetch every time, then it seems quite inefficient.
If it does full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows that it has already fetched all the data and just needs to extract next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?
Answer: First of all, do not make assumptions in advance about whether something will be quick or slow without taking measurements, and do not complicate the code in advance to download 12 pages at once and cache them because “it seems to me that it will be faster”.
YAGNI principle – the programmer should not add functionality until deemed necessary.
Do it in the simplest way (ordinary pagination, one page at a time), measure how it works in production, and if it is slow, try a different method; if the speed is satisfactory, leave it as it is.
From my own practice: an application retrieves data from a table containing about 80,000 records; the main table is joined with 4-5 additional lookup tables, and the whole query is paginated at about 25-30 records per page, roughly 2,500-3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Measurements on the production system at the server side show that the average time (median, i.e., 50th percentile) to retrieve one page is about 300 ms, and the 95th percentile is less than 800 ms, which means that 95% of requests for a single page take less than 800 ms. When we add the transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total time is less than 2 seconds. That’s enough; users are happy.
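The median and 95th-percentile figures above can be computed from raw per-request timings with the standard library; the latencies below are simulated, not the production data:

```python
import random
import statistics

# Simulated per-request page-retrieval times in ms (illustrative, not real data).
random.seed(42)
latencies = [max(10.0, random.gauss(300, 150)) for _ in range(1000)]

median = statistics.median(latencies)              # 50th percentile
p95 = statistics.quantiles(latencies, n=100)[94]   # index 94 of 99 cut points = 95th percentile
print(f"median={median:.0f} ms, p95={p95:.0f} ms")
```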
And for some theory, see this answer on the purpose of the Pagination pattern.
- Absolute Novice Seeking Database Advice by /u/rentingumbrellas (Database) on May 31, 2023 at 7:45 am
Hello everyone, I am part of a project cataloguing information about diseases in medieval manuscripts. I work with four other colleagues, none of whom are even remotely savvy when it comes to anything computer related. Another colleague recommended Microsoft Access, which would have suited our needs except that part of our team has Macs. I am currently using Apache OpenOffice, but I worry it might be too complicated for them. Basically, I am looking for something similar, with a straightforward interface (I plan to set up the fields myself), that will allow multiple users to add information. In theory, I know Apache OpenOffice allows for collaboration, but it seems challenging to set up on my colleagues' computers remotely. I am very new to all of this and I apologise if this seems very straightforward. I have been researching options for months but I can never quite find what I am looking for. I should note that we could pay a little if needed. Thank you for your help!
- Databases, Data Warehouses & Data Lakes — The Best Data Storage for Your Business by Jeffrey Agadumo (Database on Medium) on May 30, 2023 at 10:15 pm
By the end of this article, you’ll be able to choose between databases, data warehouses, and data lakes and take your data storage game to… Continue reading on Datameer inc »
- I've never created a production database from scratch and am wondering how much trouble it would be to transition a one-to-one relationship to a one-to-many relationship if I determine at some point that the latter is required. by /u/IanAbsentia (Database) on May 30, 2023 at 7:14 pm
The title says it all, really. In its inception, I could see my application permitting users to have one of a thing (sort of how Shopify allows users to manage only a single shop: one user to one shop). What I'd be curious to know is: what is the cost of getting this wrong out of the gate? How difficult and time-consuming is it to correct?
- PostgREST: Building Robust RESTful APIs with PostgreSQL Made Easy by Lentreo (Database on Medium) on May 30, 2023 at 6:39 pm
Define the base URL of your PostgREST API. Continue reading on Medium »
- “DATA ANALYTICS” by Umer Khatab (Database on Medium) on May 30, 2023 at 5:16 pm
Data analytics is the process of examining, transforming, and analyzing data to extract insights, identify patterns, and make informed… Continue reading on Medium »
- Day 6 & 7 of learning Data Science by Laraib Ansari (Database on Medium) on May 30, 2023 at 4:28 pm
Day 6 & 7. Continue reading on Medium »
- The 10 Concepts of Relational Model in DBMS by /u/usemynotes (Database) on May 30, 2023 at 2:36 pm
- Snowflake Clustering 101 — A Beginner's Guide by /u/pramit_marattha (Database) on May 30, 2023 at 4:22 am
- MariaDB help by /u/SinnermanKGB (Database) on May 29, 2023 at 2:30 pm
Hello everyone, I have one question regarding MariaDB. What is the best practice for replicating a MariaDB database, or an entire VM containing the database (Windows Server 2022 Standard)? I have read several articles about various third-party tools that can simplify this (Percona, Veeam, etc.) but don't know what to choose. I am using Veeam at the moment for backups, but it doesn't look like it can be used for failover with real-time replication. The idea is that if the VM or the entire server stops working, we have a replica on another node and data is not lost. A failover cluster is not an option, sadly. Thanks in advance
- 289k Medium Articles at Your Fingertips! 🚀 by /u/medium-api (Database) on May 29, 2023 at 9:25 am
- A Comprehensive Guide to Vector Databases by /u/pmz (Database) on May 29, 2023 at 8:27 am
- I need help making this ER diagram into a relational mapping; are there any sites? Thanks by /u/Matheo11968 (Database) on May 28, 2023 at 2:48 pm
- Why didn't the LIMIT function print the rows? by /u/Coc_Alexander (Database) on May 28, 2023 at 1:22 pm
The first statement can print both the requested rows and the number of rows. Why didn't it?
- Has anyone tried MindsDB? I’m tasked with adding some prediction to our Postgres database and wanted to get some opinions on this approach by /u/Tricky-Ad144 (Database) on May 27, 2023 at 10:56 pm
- Suggestions to build a customer database for an auto shop? by /u/Morgoroth37 (Database) on May 27, 2023 at 8:18 pm
I teach automotive classes at a high school and I'm trying to build some type of database to track repairs and customer info for when we do live work. Ideally I would like some sort of form for each repair that would link to history or something. I would also like to be able to hide personal info on the form so students don't have access to phone numbers, etc. There are some great programs that already do this, but IT has a lot of security protocols that make getting approval very difficult. I have Microsoft Access, but I'm open to using other programs.
- Moving away from massive spreadsheet? by /u/GuillerminaCharity (Database) on May 26, 2023 at 5:50 pm
I have a file that I layer daily transactional data into from a CSV, because an automated flow has been an issue. The file is getting too big as I add more days. Where can I store these multiple tabs of large data instead of Excel? I don’t use Excel for analysis; I link the workbook to Tableau. I just need to store multiple tabs and reference a few lookup tables.
- Database 19c: How do I figure out why these log files are taxing the RAID 10 array? by /u/Starmage21 (Database) on May 26, 2023 at 2:37 pm
- MySQL Workbench use with MariaDB by /u/RandomXUsr (Database) on May 25, 2023 at 6:38 pm
I'd like to manage my users with MariaDB and MySQL Workbench while learning. I've noticed that MySQL Workbench shows a big fat notice that MariaDB is "Not Supported", which is a bit of a concern. I'm using MariaDB version 15.1 and MySQL Workbench version 8.0.33. I'll generally be on the latest versions because I'm on Arch, although I could stay on these packages or downgrade if needed. What are your thoughts on using these two together? Should I consider another GUI? I ask because the books I have reference MySQL Workbench. Thanks in advance.
- HOW TO: Create Snowflake Python Worksheets by /u/pramit_marattha (Database) on May 25, 2023 at 10:33 am
- DB-Engines Ranking: Top Graph Databases You Should Use by /u/Zealousideal_Plan591 (Database) on May 25, 2023 at 8:54 am
- Is Anchor Modeling a better option than Data Vault? by /u/Sleeping_banana56 (Database) on May 24, 2023 at 3:04 am
I am currently working on a paper comparing multiple data models and architectures with each other regarding GDPR compliance. In my research I ended up with my 3 best options being Data Lake, Data Vault and Anchor Modeling for my specific situation. I am also working as an intern, and my company wants to use my paper as a reference for how they want to progress in the future. Therefore I am focusing on a lot of different elements when comparing each model, like security, GDPR compliance, agility, ease of implementation and maintenance, and budget friendliness. After some research I was actually leaning more towards Data Vault, thinking the implementation is a bit easier than AM and, in the long term, also more budget friendly than DL. For a second opinion I asked ChatGPT, and it suggested that AM would be a better solution because of its ease of implementation and maintenance, and that it's also more secure, especially on the GDPR level. I would love to hear a second opinion on this because I don't know how much I can trust ChatGPT on this. Thank you for reading if you've read this all 😅
- Advice needed for building a database project - Economic and social statistics by /u/northx49 (Database) on May 23, 2023 at 10:03 pm
I've recently embarked on a project that involves collecting economic and social statistics from a government website, and I could really use your guidance and expertise. Before diving into my questions, let me share a bit about my background and the project itself. I've always been intrigued by programming, but I struggled to find the motivation to complete online classes in the past. However, I've come to realize that finding a real-world problem to address with programming can be a game-changer. A few weeks ago, I stumbled upon a problem that really caught my interest. Ever since I was young, I've had a passion for mathematics, working with data, and organizing information. I believe these skills can be immensely valuable in our daily lives.
Now, onto the project itself. I'm not attempting to tackle it alone, because I understand that learning everything within a short period can be overwhelming. Nevertheless, I do want to focus on the data aspect of the project. Essentially, I'm building a database with the following objectives:
- Collect economic and social statistics from a government website.
- Choose between manual data collection using a script (already tested successfully) or utilizing an API.
- Clean and transform the data, which is available in monthly/quarterly and annual spreadsheets and requires some attribute augmentation.
- Create a user-friendly website where users can select an industry and view relevant economic and social statistics. Think of it as a tool for foreign investors, providing them with key information like median and average salary, imports/exports, GDP, inflation, and more. Users should also have the option to select specific indicators.
- Display goods produced within each industry, with assigned goods IDs that correlate with government data sources.
- Implement dynamic graphs to visualize the data, aiming for a high-quality user interface without relying on tools like PowerBI.
Now, I have a few questions that I hope you can help me with:
- Which database management system would you recommend for this project? I'm open to suggestions based on factors like scalability, performance, and ease of use.
- Do you think using cloud storage is a viable option for this project? I'm considering the potential benefits it may offer.
- For smaller items such as goods IDs, would utilizing an API be a feasible approach?
I'm open to any advice, critiques, or recommendations you may have. As a beginner, I'm eager to learn and improve. If you can suggest any GitHub resources, classes, or YouTube videos that might be helpful, I would greatly appreciate it. Thank you all in advance for your time and support!
- Database seeding - fixed/static record IDs by /u/IanAbsentia (Database) on May 23, 2023 at 6:37 pm
G'day, folks! It looks like I will need to seed my database with some values that I know will be required for my application to function properly, and I've been mulling over whether it is considered standard practice to furnish static ID values for seed data. To clarify further, I anticipate having a roles table, with each role having a static ID value equal to the actual role name, albeit in all caps. Does this seem reasonable?
- Physical entity relational model by /u/Agreeable-Star-4160 (Database) on May 23, 2023 at 6:21 pm
Hey guys, my friend and I wanted to represent the following data in a physical entity relation model normalized to 1NF, 2NF and 3NF, but we've been arguing over the past 2 hours and can't find the right solution. Can anyone help? https://preview.redd.it/sfax4naeim1b1.png?width=1818&format=png&auto=webp&s=507851a4236e17cb1633f68c3a981eb2a7f23db4
- I’m doing a project using Aerospike and their Database service. by /u/avgjoessports (Database) on May 23, 2023 at 2:43 pm
Does anyone know how to create a new database using Aerospike, and how to add a CSV file to said database?
- Multi-Databases across Multiple Servers - MySQL by /u/short_herpes (Database) on May 23, 2023 at 2:10 pm
Hi. I was just curious how I could query multiple database tables from different databases hosted on different MySQL servers. Here's what I could think of, though I'm not sure if the second one is possible: 1) syncing the databases into a single server/lakehouse/warehouse and then querying it (which defeats the entire idea); 2) putting a proxy in place and in some way splitting up the request packets to collect the required data from each of the respective data sources. I'm pretty sure this problem has been solved, but I just could not figure out or find an implementation for it.
- How do you choose between a relational database and a non-relational database? by /u/Coc_Alexander (Database) on May 23, 2023 at 1:29 pm
Are relational and non-relational databases used in different scenarios, or can either of them work efficiently for each scenario? What do programmers do: do they learn both of them and apply either when needed?
- SQLite question by /u/Brumbies5 (Database) on May 23, 2023 at 1:03 pm
I have made a database in DB Browser for SQLite with 10 queries. I have the .db file, which has the database, and a .sqpbro file that has the queries. Is there any way to merge these files into one .db file so that it has both the database and the queries in one file?
- Managing Data Residency - great demo of a sample architecture where location-based routing happens at 2 stages. Routing done thanks to Apache ShardingSphere & Apache APISIX by /u/y2so (Database) on May 23, 2023 at 8:14 am