How does a database handle pagination?

It doesn’t. First, a database is a collection of related data, so I assume you mean DBMS or database language.

Second, pagination is generally a function of the front-end and/or middleware, not the database layer.

But some database languages provide facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that emit up to n rows starting at a given row number, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, that can be used to implement pagination.
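As a sketch of the LIMIT/OFFSET approach, here is a minimal example against an in-memory SQLite database; the `users` table and its columns are made up for illustration, not taken from any particular application:

```python
import sqlite3

# Stand-in for any SQL DBMS: an in-memory SQLite database with a
# hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1, 101)])

def fetch_page(page: int, page_size: int = 10):
    # A stable ORDER BY is essential: without it, rows may shift
    # between pages across successive queries.
    offset = (page - 1) * page_size
    return conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset),
    ).fetchall()

page2 = fetch_page(2)  # ids 11..20
```

Note that most engines still have to scan and discard the skipped rows, which is one reason OFFSET-based paging can degrade on deep pages.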

That may not be the most efficient or effective implementation, though.

So how do you propose pagination should be done?

In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.

Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.

What if you have 30,000-plus rows? Do you fetch all of that from the database and cache it in Redis?

I feel the most efficient solution is still OFFSET and LIMIT. It doesn’t make sense to use a database and then end up putting all of your data in Redis, especially data that changes a lot. Redis is not for storing all of your data.

If you have a large data set, you should use OFFSET and LIMIT: getting only what is needed from the database into main memory (and maybe caching those pages in Redis) at any point in time is very efficient.

With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.
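When OFFSET itself becomes the bottleneck on deep pages, one common alternative is keyset (“seek”) pagination: instead of skipping rows, the query resumes past the last key seen on the previous page. A minimal sketch, again assuming a hypothetical `users` table with an indexed `id` column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1, 101)])

def fetch_after(last_id: int, page_size: int = 10):
    # Seek past the last-seen key; an index on id satisfies the WHERE
    # clause, so the cost does not grow with page depth as OFFSET does.
    return conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

first = fetch_after(0)              # ids 1..10
second = fetch_after(first[-1][0])  # ids 11..20
```

The trade-off is that keyset pagination only supports “next page” navigation naturally; jumping to an arbitrary page number still needs OFFSET or a precomputed index.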

More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows, ideally to a single page or a few pages (which are appropriate to cache in Redis).

It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or these small number of records.

Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.

I know for MySQL there is LIMIT offset, size; and for Oracle there is ROW_NUMBER() or something like that.

But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?

If it does the full fetch every time, then it seems quite inefficient.

If it does full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows that it has already fetched all the data and just needs to extract next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?

Answer: First of all, do not assume in advance that something will be quick or slow without taking measurements, and do not complicate the code up front by downloading 12 pages at once and caching them because “it seems to me that it will be faster”.

YAGNI principle: the programmer should not add functionality until it is deemed necessary.
Do it in the simplest way (ordinary pagination of one page), measure how it works in production; if it is slow, try a different method; if the speed is satisfactory, leave it as it is.


From my own practice: an application retrieves data from a table containing about 80,000 records; the main table is joined with 4–5 additional lookup tables, and the whole query is paginated at about 25–30 records per page, roughly 2,500–3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Measurements on the production server show that the average (median, 50th percentile) time to retrieve one page is about 300 ms. The 95th percentile is less than 800 ms, meaning 95% of single-page requests take under 800 ms; adding transfer time from the server to the user and a rendering time of about 0.5–1 seconds, the total time is less than 2 seconds. That’s enough; users are happy.


And some theory: see this answer to understand the purpose of the Pagination pattern.

Budget to start a web app built on the MEAN stack

I want to start a web app built on the MEAN stack (MongoDB, Express.js, Angular, and Node.js). How much would it cost me to host this site? What resources are there for hosting websites built on the MEAN stack?

I went through the same questions and concerns and I actually tried a couple of different cloud providers for similar environments and machines.

  1. At DigitalOcean, you can get a fully loaded machine to develop and host at $5 per month (512 MB RAM, 20 GB disk). You can even get a $10 credit by using this link of mine.[1] It is very easy to sign up and start. Just don’t use their web console to connect to your host; it is slow. I recommend using an ssh client to connect, and it is very fast.
  2. GoDaddy will charge you around $8 per month for a similar host (512 MB RAM, 1 core processor, 20 GB disk) for your MEAN stack development.
  3. Azure uses Bitnami’s MEAN stack on a minimum DS1_v2 machine (1 core, 3.5 GB RAM), and your average cost will be $52 per month if you never shut down the machine. The setup is a little more complicated than DigitalOcean, but very doable. I also recommend ssh to connect to the server and develop.
  4. AWS also offers Bitnami’s MEAN stack on EC2 instances similar to the Azure DS1_v2 described above, at around $55 per month.
  5. Other suggestions

All those solutions will work fine; it all depends on your budget. If you are cheap like me and don’t have a big budget, go with DigitalOcean and start with $10 off with this code.

Basic Gotcha Linux Questions for IT DevOps and SysAdmin Interviews

Some IT DevOps, SysAdmin, and Developer positions require knowledge of the basics of the Linux operating system. Most of the time, we know the answers but forget them when we don’t practice often. This refresher will help you prepare for the Linux portion of your IT interview by answering some gotcha Linux questions for IT DevOps and SysAdmin interviews.

I- Networking:

  1. How many bytes are there in a MAC address?
    6 bytes (48 bits).
    A MAC (Media Access Control) address is a globally unique identifier assigned to network devices, and is therefore often referred to as a hardware or physical address. MAC addresses are 6 bytes (48 bits) in length and are written in MM:MM:MM:SS:SS:SS format.
  2. What are the different parts of a TCP packet?
    The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU: … A TCP segment consists of a segment header and a data section.
  3. Networking: Which command is used to initialize an interface, assign an IP address, etc.?
    ifconfig (interface configuration). The equivalent command on Windows is ipconfig. On modern Linux systems, the ip command (e.g., ip addr, ip link) replaces ifconfig.
    Other useful networking commands: ping, traceroute, netstat, dig, nslookup, route, lsof.
  4. What’s the difference between TCP and UDP? Between DNS over TCP and UDP?
    There are two types of Internet Protocol (IP) traffic: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). TCP is connection-oriented: once a connection is established, data can be sent bidirectionally. UDP is a simpler, connectionless Internet protocol.
    DNS uses UDP port 53 for queries.
    DNS uses TCP port 53 for zone transfers, and queries can also use TCP port 53 when UDP is not accepted.

  5. What are the default ports used by http, telnet, ftp, smtp, dns, snmp, and squid?
    All those services are part of the application layer of the TCP/IP protocol suite.
    http => 80
    telnet => 23
    ftp => 20 (data transfer), 21 (control)
    smtp => 25
    dns => 53
    snmp => 161
    dhcp => 67 (server), 68 (client)
    ssh => 22
    squid => 3128
  6. How many hosts are available in a subnet (Class B and C networks)?
    Usable hosts = 2^(host bits) − 2, since the network and broadcast addresses are reserved. A Class C network (/24, 8 host bits) has 254 usable hosts; a Class B network (/16, 16 host bits) has 65,534.
  7. How does DNS work?
    When you enter a URL into your web browser, your DNS server resolves the name into the IP address of the appropriate web server.
  8. What is the difference between class A, class B and class C IP addresses?
    Class A Networks (/8 prefixes)
    8-bit network prefix. IP addresses range from 0.0.0.0 to 127.255.255.255.
    Class B Networks (/16 prefixes)
    16-bit network prefix. IP addresses range from 128.0.0.0 to 191.255.255.255.
    Class C Networks (/24 prefixes)
    24-bit network prefix. IP addresses range from 192.0.0.0 to 223.255.255.255.
  9. Difference between OSPF and BGP?
    Generally speaking, OSPF and BGP are routing protocols for two different things. OSPF is an IGP (Interior Gateway Protocol) and is used internally within a company’s network to provide routing. BGP is an EGP (Exterior Gateway Protocol) used to route between autonomous systems, such as across the Internet, and it is far more scalable than OSPF.
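The host-count arithmetic from question 6 can be sanity-checked with Python’s standard ipaddress module; the example network address is arbitrary:

```python
import ipaddress

# Usable hosts in a subnet: total addresses minus the reserved
# network and broadcast addresses.
def usable_hosts(prefix_len: int) -> int:
    return 2 ** (32 - prefix_len) - 2

class_b = usable_hosts(16)   # Class B /16
class_c = usable_hosts(24)   # Class C /24

# Cross-check with the standard library: hosts() excludes the
# network and broadcast addresses for prefixes shorter than /31.
net = ipaddress.ip_network("192.168.1.0/24")
assert len(list(net.hosts())) == class_c
```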

II- Operating System

  1. How to find the operating system version?
    uname -a
    To check the distribution, on Red Hat for example: cat /etc/redhat-release
  2. How to list all running processes?
    top
    To list Java processes: ps -ef | grep java
    To list processes on a specific port: lsof -i:80
    (On Windows: netstat -aon | findstr :port_number)
  3. How to check disk space?
    df shows the amount of disk space used and available.
    du displays the amount of disk space used by the specified files and each subdirectory.
    To drill down and find out which files are filling up a drive: du -ks /drive_name/* | sort -nr | head
  4. How to check memory usage?
    free or cat /proc/meminfo
  5. What is the load average?
    It is the average number of processes waiting in the run queue plus the number of processes currently executing, averaged over periods of 1, 5, and 15 minutes. Use top (or uptime) to see the load average.
  6. What is a load balancer?
    A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.
  7. What is the Linux Kernel?
    The Linux Kernel is a low-level systems software whose main role is to manage hardware resources for the user. It is also used to provide an interface for user-level interaction.
  8. What is the default kill signal?
    There are many different signals that can be sent (see signal for a full list), although the signals in which users are generally most interested are SIGTERM (“terminate”) and SIGKILL (“kill”). The default signal sent is SIGTERM.
    kill 1234
    kill -s TERM 1234
    kill -TERM 1234
    kill -15 1234
  9. Describe Linux boot process
    BIOS => MBR => GRUB => KERNEL => INIT => RUN LEVEL
    As power comes up, the BIOS (Basic Input/Output System) is given control and executes MBR (Master Boot Record). The MBR executes GRUB (Grand Unified Boot Loader). GRUB executes Kernel. Kernel executes /sbin/init. Init executes run level programs. Run level programs are executed from /etc/rc.d/rc*.d
    Mac OS X Boot Process:

    Boot ROM: firmware, part of the hardware; the BootROM firmware is activated.
    POST (Power-On Self Test): initializes some hardware interfaces and verifies that sufficient memory is available and in a good state.
    EFI (Extensible Firmware Interface): does basic hardware initialization and selects which operating system to use.
    BOOTX (boot.efi) boot loader: loads the kernel environment.
    Kernel: the boot loader starts the kernel’s initialization procedure; various Mach/BSD data structures are initialized; the I/O Kit is initialized; the kernel starts /sbin/mach_init.
    Run level: mach_init starts /sbin/init; init determines the runlevel and runs /etc/rc.boot, which sets up the machine enough to run single-user; rc.boot figures out the type of boot (multi-user, safe, CD-ROM, network, etc.).
  10. List services enabled at a particular run level
    chkconfig --list | grep 5:on
    Enable or disable a service at a specific run level: chkconfig --level 5 <service> on|off
  11. How do you stop a bash fork bomb?
    Prevent the damage beforehand by capping processes per user in /etc/security/limits.conf:
    root hard nproc 512
    A fork bomb looks like this (do not run it on a machine you care about):
    :(){ :|:& };:
    Assuming you still have access to a shell, stop the offending user’s processes first so new forks cannot outrun the kills, then kill them:
    kill -STOP <pid>
    killall -STOP -u user1
    killall -KILL -u user1
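The nproc cap that limits.conf applies at login can be previewed interactively with the shell's ulimit builtin (this assumes bash or a compatible shell; 100 is an arbitrary demo value, not a recommendation). Lowering it in a subshell affects only that subshell.

```shell
# Lower the per-user process cap inside a subshell, then read it back;
# a fork bomb run under this cap fails with "fork: retry" instead of
# taking down the machine
cap=$( (ulimit -u 100; ulimit -u) )
echo "process cap: $cap"
```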
  12. What is a fork?
    fork is an operation whereby a process creates a copy of itself. It is usually a system call, implemented in the kernel. Fork is the primary (and historically, only) method of process creation on Unix-like operating systems.
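Fork is visible from any shell: running an external command is fork-plus-exec, so the child ends up with its own PID. A minimal sketch:

```shell
# `sh -c 'echo $$'` forks a child shell, which prints its own PID --
# different from the parent's $$ (single quotes keep the parent from
# expanding $$ itself)
parent=$$
child=$(sh -c 'echo $$')
echo "parent=$parent child=$child"
```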
  13. What is the D state?
    The D state code means that the process is in uninterruptible sleep; that may mean different things, but it is usually waiting on I/O.

III- File System

  1. What is umask?
    umask is “User File Creation Mask”, which determines the settings of a mask that controls which file permissions are set for files and directories when they are created.
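The mask is subtracted bitwise from the defaults of 666 for files and 777 for directories. A small sketch (assumes GNU stat for the octal-mode output, so Linux coreutils):

```shell
# With umask 022: files get 666 & ~022 = 644, directories 777 & ~022 = 755
tmp=$(mktemp -d)
cd "$tmp"
umask 022
touch file
mkdir dir
fmode=$(stat -c '%a' file)
dmode=$(stat -c '%a' dir)
echo "file=$fmode dir=$dmode"
```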
  2. What is the role of the swap space?
    A swap space is a certain amount of space used by Linux to temporarily hold some programs that are running concurrently. This happens when RAM does not have enough memory to hold all programs that are executing.
  • What is the null device in Linux?
    The null device is typically used for disposing of unwanted output streams of a process, or as a convenient empty file for input streams. This is usually done by redirection. The /dev/null device is a special file, not a directory, so one cannot move a whole file or directory into it with the Unix mv command. You might receive the “Bad file descriptor” error message if /dev/null has been deleted or overwritten. You can infer this cause when the file system is reported as read-only at boot time through error messages such as “/dev/null: Read-only filesystem” and “dup2: bad file descriptor”.
    In Unix and related operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket.
  • What is an inode?
    The inode is a data structure in a Unix-style file system that describes a filesystem object such as a file or a directory. Each inode stores the attributes and disk block location(s) of the object’s data.
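One consequence worth demonstrating: a hard link is simply a second directory entry for the same inode, so both names report the same inode number. A quick sketch:

```shell
# Create a file and a hard link to it, then compare inode numbers with ls -i
tmp=$(mktemp -d)
touch "$tmp/original"
ln "$tmp/original" "$tmp/hardlink"
i1=$(ls -i "$tmp/original" | awk '{print $1}')
i2=$(ls -i "$tmp/hardlink" | awk '{print $1}')
echo "inodes: $i1 $i2"
rm -rf "$tmp"
```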

IV- Databases

  1. What is the difference between a document store and a relational database?
    In a relational database system you must define a schema before adding records. The schema is a structure described in a formal language supported by the database; it provides a blueprint for the tables and the relationships between them. Within a table, you define constraints in terms of rows and named columns, as well as the type of data that can be stored in each column.
    In contrast, a document-oriented database contains documents: records that describe the data in the document as well as the actual data itself. Documents can be as complex as you choose; you can use nested data to provide additional sub-categories of information about your object, and one or more documents to represent a real-world object.
  2. How to optimise a slow DB?
    • Rewrite the queries
    • Change indexing strategy
    • Change schema
    • Use an external cache
    • Server tuning and beyond
  3. How would you build a 1 Petabyte storage with commodity hardware?
    Use JBODs populated with large-capacity disks, running Linux, in a distributed storage system, and stack nodes until 1 PB of capacity is reached.
    JBOD (which stands for “just a bunch of disks”) generally refers to a collection of hard disks that have not been configured to act as a redundant array of independent disks (RAID) array.
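A back-of-envelope sizing helps make the answer concrete. All figures below are assumptions for illustration, not from the source: 1 PB of usable capacity, hypothetical 10 TB drives, and 3x replication for redundancy (since JBODs provide none themselves).

```shell
# Drives needed = usable capacity * replication factor / drive size
usable_tb=1000      # 1 PB expressed in TB
replication=3       # assumed replication factor
drive_tb=10         # assumed drive size
drives=$(( usable_tb * replication / drive_tb ))
echo "drives needed: $drives"
```

At, say, 12 drives per node, that is on the order of 25 commodity nodes; the exact count depends entirely on the assumed drive size and redundancy scheme.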

V- Scripting

  1. What is @INC in Perl?
    The @INC Array. @INC is a special Perl variable that is the equivalent to the shell’s PATH variable. Whereas PATH contains a list of directories to search for executables, @INC contains a list of directories from which Perl modules and libraries can be loaded.
  2. Strings comparison – operator – for loop – if statement
  3. Sort access log file by http Response Codes
    Via the shell, using Linux commands:
    cat sample_log.log | cut -d '"' -f3 | cut -d ' ' -f2 | sort | uniq -c | sort -rn
  4. Sort access log file by http Response Codes Using awk
    awk '{print $9}' sample_log.log | sort | uniq -c | sort -rn
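The pipeline can be tried on a tiny fabricated log. This sketch assumes the common/combined log format, where the status code is the 9th whitespace-separated field; the entries below are made up.

```shell
# Three fabricated access-log lines: two 200s and one 404
log=$(mktemp)
cat > "$log" <<'EOF'
1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET / HTTP/1.1" 200 512
1.2.3.4 - - [10/Oct/2000:13:55:37 -0700] "GET /a HTTP/1.1" 404 0
1.2.3.4 - - [10/Oct/2000:13:55:38 -0700] "GET /b HTTP/1.1" 200 99
EOF
counts=$(awk '{print $9}' "$log" | sort | uniq -c | sort -rn)
echo "$counts"
rm -f "$log"
```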
  5. Find broken links from access log file
    awk '($9 ~ /404/) {print $7}' sample_log.log | sort | uniq -c | sort -rn
  6. Most requested page:
    awk -F\" '{print $2}' sample_log.log | awk '{print $2}' | sort | uniq -c | sort -rn
  7. Count all occurrences of a word in a file
    grep -o "user" sample_log.log | wc -w

Learn more at http://career.guru99.com/top-50-linux-interview-questions/


Install and run your first noSQL MongoDB on Mac OSX

Classified as a NoSQL database, MongoDB is an open source, document-oriented database designed with both scalability and developer agility in mind. Instead of storing your data in tables and rows as you would with a relational database, in MongoDB you store JSON-like documents with dynamic schemas. This makes the integration of data in certain types of application easier and faster.
Why?
MongoDB can help you make a difference to the business. Tens of thousands of organizations, from startups to the largest companies and government agencies, choose MongoDB because it lets them build applications that weren’t possible before. With MongoDB, these organizations move faster than they could with relational databases at one tenth of the cost. With MongoDB, you can do things you could never do before.

    1. Install Homebrew
      $ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
      Homebrew installs the stuff you need that Apple didn’t.
      $ brew install wget
    2. Install MongoDB
      $ brew install mongodb
      (On current Homebrew the formula has moved to MongoDB’s own tap: $ brew tap mongodb/brew && brew install mongodb-community)
    3. Run MongoDB
      Create the data directory: $ mkdir -p /data/db
      Set permissions for the data directory: $ chown -R you:yourgroup /data/db, then chmod -R 775 /data/db
      Run MongoDB (as non root): $ mongod
    4. Begin using MongoDB. (MongoDB will be running as soon as you ran mongod above.) Open another terminal and run: mongo


References: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-os-x/

