DjamgaMind: Audio Intelligence for the C-Suite (Energy, Healthcare, Finance)
Are you drowning in dense legal text? DjamgaMind is the new audio intelligence platform that turns 100-page healthcare or energy mandates into 5-minute executive briefings. Whether you are navigating Bill C-27 (Canada) or the CMS-0057-F Interoperability Rule (USA), our AI agents decode the liability so you don’t have to. 👉 Start your specialized audio briefing today at Djamgamind.com
AI Jobs and Career
I wanted to share an exciting opportunity for those of you looking to advance your careers in the AI space. You know how rapidly the landscape is evolving, and finding the right fit can be a challenge. That's why I'm excited about Mercor – they're a platform specifically designed to connect top-tier AI talent with leading companies. Whether you're a data scientist, machine learning engineer, or something else entirely, Mercor can help you find your next big role. If you're ready to take the next step in your AI career, check them out through my referral link: https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1. It's a fantastic resource, and I encourage you to explore the opportunities they have available.
- Full Stack Engineer [$150K-$220K]
- Software Engineer, Tooling & AI Workflow, Contract [$90/hour]
- DevOps Engineer, India, Contract [$90/hour]
- More AI Job Opportunities here
| Job Title | Status | Pay |
|---|---|---|
| Full-Stack Engineer | Strong match, Full-time | $150K - $220K / year |
| Developer Experience and Productivity Engineer | Pre-qualified, Full-time | $160K - $300K / year |
| Software Engineer - Tooling & AI Workflows (Contract) | Contract | $90 / hour |
| DevOps Engineer (India) | Full-time | $20K - $50K / year |
| Senior Full-Stack Engineer | Full-time | $2.8K - $4K / week |
| Enterprise IT & Cloud Domain Expert - India | Contract | $20 - $30 / hour |
| Senior Software Engineer | Contract | $100 - $200 / hour |
| Senior Software Engineer | Pre-qualified, Full-time | $150K - $300K / year |
| Senior Full-Stack Engineer: Latin America | Full-time | $1.6K - $2.1K / week |
| Software Engineering Expert | Contract | $50 - $150 / hour |
| Generalist Video Annotators | Contract | $45 / hour |
| Generalist Writing Expert | Contract | $45 / hour |
| Editors, Fact Checkers, & Data Quality Reviewers | Contract | $50 - $60 / hour |
| Multilingual Expert | Contract | $54 / hour |
| Mathematics Expert (PhD) | Contract | $60 - $80 / hour |
| Software Engineer - India | Contract | $20 - $45 / hour |
| Physics Expert (PhD) | Contract | $60 - $80 / hour |
| Finance Expert | Contract | $150 / hour |
| Designers | Contract | $50 - $70 / hour |
| Chemistry Expert (PhD) | Contract | $60 - $80 / hour |
How does a database handle pagination?

It doesn’t. First, a database is a collection of related data, so I assume you mean a DBMS or a database language.
Second, pagination is generally a function of the front-end and/or middleware, not the database layer.
But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row number, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.
That may not be the most efficient or effective implementation, though.
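As a sketch of the LIMIT/OFFSET approach described above, here is a minimal example using SQLite; the `users` table, its schema, and the data are invented for illustration:

```python
import sqlite3

# Illustrative schema: a tiny in-memory "users" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (id, name) VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 11)],
)

def fetch_page(page, page_size=3):
    """Emit one 'page' of rows; ORDER BY keeps successive pages stable."""
    offset = (page - 1) * page_size
    return conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset),
    ).fetchall()

page1 = fetch_page(1)  # rows 1-3
page2 = fetch_page(2)  # rows 4-6
```

Note that without the ORDER BY, SQL makes no ordering guarantee, so the same OFFSET could return different rows on successive calls.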

So how do you propose pagination should be done?
In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.
Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.
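A minimal sketch of that cache-aside idea, with a plain dict standing in for Redis (a real deployment would use a Redis client and set a TTL on each key; the cache-key scheme here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (id, name) VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 101)],
)

cache = {}  # stand-in for Redis; a real setup would use redis-py and a TTL

def get_page(page, page_size=10):
    key = f"users:page:{page}:{page_size}"  # invented cache-key scheme
    if key in cache:                        # hit: serve without touching the DB
        return cache[key]
    rows = conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, (page - 1) * page_size),
    ).fetchall()
    cache[key] = rows                       # miss: populate (cache-aside)
    return rows

first = get_page(1)
again = get_page(1)  # second call is served from the cache
```

The trade-off, as the rest of the thread points out, is staleness: cached pages do not see rows that change after they are cached, so this suits data that is read far more often than it is written.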
AI-Powered Professional Certification Quiz Platform
Web | iOS | Android | Windows
What if you have 30,000-plus rows? Do you fetch all of that from the database and cache it in Redis?
I feel the most efficient solution is still OFFSET and LIMIT. It doesn’t make sense to use a database and then end up putting all of your data in Redis, especially data that changes a lot. Redis is not for storing all of your data.
If you have a large data set, you should use OFFSET and LIMIT; getting only what is needed from the database into main memory (and maybe caching those pages in Redis) at any point in time is very efficient.
With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.
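One commonly cited alternative, not raised in the thread but relevant to the efficiency caveat above, is keyset (seek) pagination: instead of counting and discarding OFFSET rows, the query seeks past the last key seen on the previous page, so the primary-key index does the work. A sketch with SQLite and an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (id, name) VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 31)],
)

def page_after(last_id, page_size=10):
    """Seek past the last key seen instead of skipping OFFSET rows."""
    return conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = page_after(0)             # first page: ids 1-10
page2 = page_after(page1[-1][0])  # next page starts after the last id seen
```

The cost per page stays roughly constant regardless of how deep the user pages, whereas a large OFFSET forces the engine to scan and discard all the skipped rows.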
AI-Powered Job Interview Warmup For Job Seekers

More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows, ideally to a single page or a few pages (which are appropriate to cache in Redis).
It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or this small set of records.
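That advice can be sketched as an ordinary filtered query: apply the search criteria in WHERE (backed by an index) first, then paginate the much smaller result. The table, column names, and data below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)"
)
conn.executemany(
    "INSERT INTO users (id, name, city) VALUES (?, ?, ?)",
    [(i, f"user{i}", "Calgary" if i % 100 == 0 else "Elsewhere")
     for i in range(1, 30001)],
)
conn.execute("CREATE INDEX idx_users_city ON users(city)")  # keeps the filter cheap

def search(city, page=1, page_size=25):
    return conn.execute(
        "SELECT id, name FROM users WHERE city = ? ORDER BY id LIMIT ? OFFSET ?",
        (city, page_size, (page - 1) * page_size),
    ).fetchall()

hits = search("Calgary")  # 300 matching rows in total: a dozen pages, not 1,200
```

Here the WHERE clause shrinks 30,000 rows to 300 before pagination even starts, which is the point being made above.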
Invest in your future today by enrolling in Azure Fundamentals: pass the AZ-900 certification exam with ease using this comprehensive exam preparation guide!
- AWS Certified AI Practitioner (AIF-C01): Conquer the AWS Certified AI Practitioner exam with our AI and Machine Learning For Dummies test prep. Master fundamental AI concepts, AWS AI services, and ethical considerations.
- Azure AI Fundamentals: Ace the Azure AI Fundamentals exam with our comprehensive test prep. Learn the basics of AI, Azure AI services, and their applications.
- Google Cloud Professional Machine Learning Engineer: Nail the Google Professional Machine Learning Engineer exam with our expert-designed test prep. Deepen your understanding of ML algorithms, models, and deployment strategies.
- AWS Certified Machine Learning Specialty: Dominate the AWS Certified Machine Learning Specialty exam with our targeted test prep. Master advanced ML techniques, AWS ML services, and practical applications.
- AWS Certified Data Engineer Associate (DEA-C01): Set yourself up for promotion, get a better job or Increase your salary by Acing the AWS DEA-C01 Certification.
Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.
I know for MySQL there is LIMIT offset, size; and for Oracle there is ROW_NUMBER() or something like that.
But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?
If it does the full fetch every time, then it seems quite inefficient.
If it does the full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows it has already fetched all the data and just needs to extract the next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?
Answer: First of all, do not assume in advance whether something will be quick or slow without taking measurements, and do not complicate the code up front by downloading 12 pages at once and caching them because “it seems to me that it will be faster”.
YAGNI principle: the programmer should not add functionality until it is deemed necessary.
Do it in the simplest way (ordinary pagination, one page at a time), measure how it works in production; if it is slow, then try a different method; if the speed is satisfactory, leave it as it is.
From my own practice: an application retrieves data from a table containing about 80,000 records. The main table is joined with 4-5 additional lookup tables, and the whole query is paginated at about 25-30 records per page, about 2,500-3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Measurements on the production system at the server side show that the average time (median, the 50th percentile) to retrieve one page is about 300 ms, and the 95th percentile is less than 800 ms, meaning 95% of requests to retrieve a single page take less than 800 ms. When we add the transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total time is under 2 seconds. That’s enough; users are happy.
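The median / 95th-percentile arithmetic above can be reproduced with Python’s statistics module; the sample timings below are invented stand-ins for real server-side measurements:

```python
import statistics

# Invented per-page fetch times in milliseconds; in the scenario above
# these would come from server-side measurements on production.
samples_ms = [120, 250, 300, 310, 280, 750, 640, 310, 290, 330,
              295, 305, 620, 270, 315, 300, 285, 500, 310, 298]

median = statistics.median(samples_ms)            # 50th percentile
p95 = statistics.quantiles(samples_ms, n=20)[18]  # 95th percentile
```

If the median and p95 land in an acceptable range, stop optimizing; that is the whole point of measuring first.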
And some theory: see this answer for an explanation of the purpose of the Pagination pattern.
- Modern Mobile Apps, Modern Databases: A Forensic Case Study of Realm in the Upside App by Daniel Addai (Database on Medium) on January 22, 2026 at 5:50 pm
How Modern Mobile Apps Store Data: Extracting Financial Artifacts from a Realm Database Using Open-Source Forensic Tools. Continue reading on Medium »
- How Netflix Quietly Built a Real-Time Graph at Internet Scale by chiranji sahithi (Database on Medium) on January 22, 2026 at 5:00 pm
Netflix today looks very different from the company that once mailed DVDs. Alongside streaming, it now runs mobile games, live events, and… Continue reading on Medium »
- The Ultimate Forex Email List: Your Master Key to the $7.5 Trillion-a-Day Market by fXData Base (Database on Medium) on January 22, 2026 at 4:52 pm
In the high-velocity world of foreign exchange, where fortunes are made on pip movements and market sentiment shifts in milliseconds… Continue reading on Medium »
- Why Database Transactions Lie Under Load (Spring Boot Reality) by Lakshika (Database on Medium) on January 22, 2026 at 4:03 pm
Transactions promise safety. Continue reading on Stackademic »
- Releem 2025 Recap: Building a Database Advisor for Developer Teams by Roman Agabekov (Database on Medium) on January 22, 2026 at 3:26 pm
Releem 2025 Recap: Building a Database Advisor for Developer Teams. Continue reading on Releem »
- Welcome to Redis by Yusuf Osmanoğlu (Database on Medium) on January 22, 2026 at 1:38 pm
1. Why Did Redis Come About? Continue reading on Medium »
- A 7-Step Guide to Modern Data Architecture for Executives by Chaida Kapfunde (Database on Medium) on January 22, 2026 at 11:41 am
Artificial Intelligence is the engine of the future, but data is the fuel. If you pour dirty, stagnant fuel into a Ferrari, it won’t get… Continue reading on Medium »
- Multilingual Colossal Clean Crawled Corpus: Pre-Training and Pre-Trained — Case Study by Journal of Landing Across Linguistic Foreground (Database on Medium) on January 22, 2026 at 11:27 am
mC4: The Multilingual Colossal Clean Crawled Corpus. Continue reading on Medium »
- When persisting local application state, where do databases start to feel stretched? by /u/DetectiveMindless652 (Database) on January 22, 2026 at 11:16 am
I’m curious how people are using databases to persist application or system state that needs to survive restarts. In a lot of systems, especially stateful services or long-running processes, a local database ends up being the obvious solution for durability. SQLite, Postgres, or something embedded usually works, and in many cases it’s fast enough and simple enough to justify. What I’m trying to understand better is where people feel that approach starts to bend. For example, are you mostly treating the database as a simple state store, or does it end up carrying additional responsibilities like caching, fast repeated reads, or startup reconstruction? Do restart times, cold reads, or access patterns ever become noticeable, or does the database continue to feel like the right abstraction all the way through? I’ve seen setups where the database is technically correct but the system ends up layered with extra logic to compensate for warmup time, query overhead, or repeated access to the same state. Other times, teams just accept rebuild costs as part of the lifecycle and move on. I’m asking because we’ve been looking closely at this boundary between “a database is fine” and “this feels like the wrong abstraction”, particularly for local or performance-sensitive systems. We’re building something in this space and would be happy to share it with anyone interested, but the main goal here is understanding how database folks think about this trade-off in practice. Would love to hear where databases have worked well for you, and where you’ve felt friction, if at all. submitted by /u/DetectiveMindless652 [link] [comments]
- Database Connectivity for AI Agents: SQL Server, Redis, and PostgreSQL by Faizanarif (Database on Medium) on January 22, 2026 at 10:40 am
When building a production-ready AI agent, model intelligence alone isn’t enough. Continue reading on Medium »
- The Day MongoDB Started Bleeding Data by K. M. Shehan (Database on Medium) on January 22, 2026 at 9:26 am
The One Time ‘It Works on My Machine’ was a Disaster. Continue reading on Medium »
- I just found out there are 124 keywords in Sqlite. I wonder if anyone here knows all of them. Would be cool. by /u/No-Security-7518 (Database) on January 22, 2026 at 6:41 am
EDIT: sorry, the total number is actually 147. Here's a list. Which ones appear entirely unfamiliar to you? ABORT ACTION ADD AFTER ALL ALTER ANALYZE AND AS ASC ATTACH AUTOINCREMENT BEFORE BEGIN BETWEEN BY CASCADE CASE CAST CHECK COLLATE COLUMN COMMIT CONFLICT CONSTRAINT CREATE CROSS CURRENT_DATE CURRENT_TIME CURRENT_TIMESTAMP DATABASE DEFAULT DEFERRABLE DEFERRED DELETE DESC DETACH DISTINCT DO DROP EACH ELSE END ESCAPE EXCEPT EXCLUDE EXCLUSIVE EXISTS EXPLAIN FAIL FILTER FIRST FOLLOWING FOR FOREIGN FROM FULL GENERATED GLOB GROUP HAVING IF IGNORE IMMEDIATE IN INDEX INDEXED INITIALLY INNER INSERT INSTEAD INTERSECT INTO IS ISNULL JOIN KEY LEFT LIKE LIMIT MATCH MATERIALIZED NATURAL NO NOT NOTHING NOTNULL NULL NULLS OF OFFSET ON OR ORDER OTHERS OUTER OVER PARTITION PLAN PRAGMA PRIMARY QUERY RAISE RECURSIVE REFERENCES REGEXP REINDEX RELEASE RENAME REPLACE RESTRICT RETURNING RIGHT ROLLBACK ROW ROWS SAVEPOINT SELECT SET TABLE TEMP TEMPORARY THEN TO TRANSACTION TRIGGER UNION UNIQUE UPDATE USING VACUUM VALUES VIEW VIRTUAL WHEN WHERE WINDOW WITH WITHOUT FIRST FOLLOWING PRECEDING UNBOUNDED TIES DO FILTER EXCLUDE submitted by /u/No-Security-7518 [link] [comments]
- B-tree comparison functions by /u/kristian54 (Database) on January 21, 2026 at 9:30 pm
submitted by /u/kristian54 [link] [comments]
- Sales records: snapshot table vs product reference best practice? by /u/Elegant-Drag-7141 (Database) on January 21, 2026 at 3:22 am
I’m working on a POS system and I have a design question about sales history and product edits. Currently: Product table (name, price, editable) SaleDetail table with ProductId If a product’s name or price changes later, old sales would show the updated product data, which doesn’t seem correct for historical or accounting purposes. So the question is: Is it best practice to store a snapshot of product data at the time of sale? (e.g. product name, unit price, tax stored in SaleDetail, or in a separate snapshot table) More specifically: Should I embed snapshot fields directly in SaleDetail? Or create a separate ProductSnapshot (or version) table referenced by SaleDetail? Does this approach conflict with normalization, or is it considered standard for immutable records? Thanks! submitted by /u/Elegant-Drag-7141 [link] [comments]
- January 27, 1pm ET: PostgreSQL Query Performance Monitoring for the Absolute Beginner by /u/linuxhiker (Database) on January 20, 2026 at 9:26 pm
submitted by /u/linuxhiker [link] [comments]
- Unconventional PostgreSQL Optimizations by /u/be_haki (Database) on January 20, 2026 at 6:55 pm
submitted by /u/be_haki [link] [comments]
- Is anyone here working with large video datasets? How do you make them searchable? by /u/YiannisPits91 (Database) on January 20, 2026 at 5:29 pm
I’ve been thinking a lot about video as a data source lately. With text, logs, and tables, everything is easy to index and query. With video… it’s still basically just files in folders plus some metadata. I’m exploring the idea of treating video more like structured data — for example, being able to answer questions like: “Show me every moment a person appears” “Find all clips where a car and a person appear together” “Jump to the exact second where this word was spoken” “Filter all videos recorded on a certain date that contain a vehicle” So instead of scrubbing timelines, you’d query a timeline. I’m curious how people here handle large video datasets today: - Do you just rely on filenames + timestamps + tags? - Are you extracting anything from the video itself (objects, text, audio)? - Has anyone tried indexing video content into a database for querying? submitted by /u/YiannisPits91 [link] [comments]
- What the hell is wrong with my code by /u/Redd1tRat (Database) on January 19, 2026 at 2:00 am
So I'm using MySQL workbench and spent almost the whole day trying to find out why this is not working. submitted by /u/Redd1tRat [link] [comments]
- Why is there no other (open source) database system that has (close to) the same capabilities of MSSQL by Database on January 18, 2026 at 6:33 pm
I did a bit of research about database encryption and it seems like MSSQL has the most capabilities in that area (Column level keys, deterministic encryption for queryable encryption, always encrypted capabilities (Intel SGX Enclave stuff) It seems that there are no real competitors in the open source area - the closest I found is pgcrypto for Postgres but it seems to be limited to encryption at rest? I wonder why that is the case - is it that complicated to implement something like that? Is there no actual need for this in real world scenarios? (aka is the M$ stuff just snakeoil?) [link] [comments]
- I built a secure PostgreSQL client for iOS & Android (Direct connection, local-only) by /u/tobelyan (Database) on January 18, 2026 at 7:31 am
Hi r/Database, i wanted to share a tool i built because i kept facing a common problem: receiving an urgent alert while out of the office - on vacation or at dinner -without a laptop nearby. i needed a way to quickly check the database, run a diagnostic query, or fix a record using just my phone. i built PgSQL Visual Manager for my own use, but realized other developers might need it too. Security First (How it works) i know using a mobile client for DB access requires trust, so here is the architecture: 100% Local: there is no backend service. We cannot see your data. Direct Connection: The app connects directly from your device to your PostgreSQL server (supports SSL and SSH Tunnel). Encrypted Storage: All passwords are stored using the device's native secure storage (Keychain on iOS, Encrypted Shared Preferences on Android). Core Functionality is isn't a bloated enterprise suite; it's a designed for emergency fixes and quick checks: Emergency Access Visual CRUD Custom SQL Table Inspector Data Export it is built by developers, for developers. i'd love to hear your feedbacks. submitted by /u/tobelyan [link] [comments]
- Best stack for building a strictly local, offline-first internal database tool for NPO? by /u/No-Wrongdoer1409 (Database) on January 17, 2026 at 11:02 pm
I'm a high school student with no architecture experience volunteering to build an internal management system for a non-profit. They need a tool for staff to handle inventory, scheduling, and client check-ins. Because the data is sensitive, they strictly require the entire system to be self-hosted on a local server with absolutely zero cloud dependency. I also need the architecture to be flexible enough to eventually hook up a local AI model in the future, but that's a later problem. Given that I need to run this on a local machine and keep it secure, what specific stack (Frontend/Backend/Database) would you recommend for a beginner that is robust, easy to self-host, and easy to maintain? submitted by /u/No-Wrongdoer1409 [link] [comments]
- Efficient storage and filtering of millions of products from multiple users – which NoSQL database to use? by /u/Notoa34 (Database) on January 16, 2026 at 9:08 pm
Hi everyone, I have a use case and need advice on the right database: ~1,000 users, each with their own warehouses. Some warehouses have up to 1 million products. Data comes from suppliers every 2–4 hours, and I need to update the database quickly. Each product has fields like warehouse ID, type (e.g., car parts, screws), price, quantity, last update, tags, labels, etc. Users need to filter dynamically across most fields (~80%), including tags and labels. Requirements: Very fast insert/update, both in bulk (1000+ records) and single records. Fast filtering across many fields. No need for transactions – data can be overwritten. Question: Which database would work best for this? How would you efficiently handle millions of records every few hours while keeping fast filtering? OpenSearch ? MongoDB ? Thanks! submitted by /u/Notoa34 [link] [comments]
- Update: Unisondb log‑native DB with Raft‑quorum writes and ISR‑synced edges by /u/ankur-anand (Database) on January 16, 2026 at 7:44 pm
I've been building UnisonDB, a log native database in Go, for the past several months. The Goal is to support ISR-based replication to thousands of node effectivetly for local states and reads. Just added the support for Raft‑quorum writes on the server tier in the unisondb. Writes are committed by a Raft quorum on the write servers (if enabled); read‑only edge replicas/relayers stay ISR‑synced. https://preview.redd.it/hyy2nrgulrdg1.png?width=1398&format=png&auto=webp&s=654c0d615a88a6e0e4e58f2a53e6f17fb3c8fce5 Github: https://github.com/ankur-anand/unisondb submitted by /u/ankur-anand [link] [comments]
- Storing resume content? by /u/East_Sentence_4245 (Database) on January 16, 2026 at 4:41 pm
My background: I'm a sql server DBA and most of the data I work with is stored in some type of RDBMS. With that said, one of the tasks I'll be working on is storing resumes into a database, parsing them, and populating a page. I don't think SQL Server is the correct tool for this, plus it gives me the opportunity of learning other types of storage. The job is very similar to glassdoor's resume upload, in the sense that once a user uploads resume, the document is parsed, and then the fields in a webpage are populated with the information in the resume. What data store do you recommend for this type of storage? submitted by /u/East_Sentence_4245 [link] [comments]
- Beginner Question by /u/blind-octopus (Database) on January 16, 2026 at 2:33 pm
When performing CRUD operations from the server to a database, how do I know what I need to worry about in terms of data integrity? So suppose I have multiple servers that rely on the same postgres DB. Am I supposed to be writing server code that will protect the DB? If two servers access the DB at the same time, one is updating a record that the other is reading, is this something I can expect postgres to automatically know how to deal with safely, or do I need to write code that locks DB access for modifications to only one request? While multiple reads can happen in parallel, that should be fine. I don't expect an answer that covers everything, maybe an idea of where to find the answer to this stuff. What does server code need to account for when running in parallel and accessing the same DB? submitted by /u/blind-octopus [link] [comments]
- From Building Houses to Storage Engines by /u/diagraphic (Database) on January 16, 2026 at 8:09 am
submitted by /u/diagraphic [link] [comments]
- MariaDB on XAMP not working anymore by /u/Duckmastermind1 (Database) on January 15, 2026 at 9:37 pm
Hey, so my MariaDB suddenly stopped working, I thought not a big deal, export the current content using MySQL dump, but tbh, MariaDB isn't impressed with that, staying loading until I cancel. Any idea how to fix corrupted tables or extract my data? Also a better option then XAMP is also welcome submitted by /u/Duckmastermind1 [link] [comments]
- What is best System Design Course available on the internet with proper roadmap for absolute beginner? by /u/Foreign_Pomelo9572 (Database) on January 15, 2026 at 7:36 pm
Hello Everyone, I am a Software Engineer with experience around 1.6 years and I have been working in the small startup where coding is the most of the task I do. I have a very good background in backend development and strong DSA knowledge but now I feel I am stuck and I am at a very comfortable position but that is absolutely killing my growth and career opportunity and for past 2 months, have been giving interviews and they are brutal at system design. We never really scaled any application rather we downscaled due to churn rate as well as. I have a very good backend development knowledge but now I need to step and move far ahead and I want to push my limits than anything. I have been looking for some system design videos on internet, mostly they are a list of videos just creating system design for any application like amazon, tik tok, instagram and what not, but I want to understand everything from very basic, I don't know when to scale the number of microservices, what AWS instance to opt for, wheather to put on EC2 or EKS, when to go for mongo and when for cassandra, what is read replica and what is quoroum and how to set that, when to use kafka, what is kafka. Please can you share your best resources which can help me understand system design from core and absolutely bulldoze the interviews. All kinds of resources, paid and unpaid, both I can go for but for best. Thanks. submitted by /u/Foreign_Pomelo9572 [link] [comments]
- Looking for feedback on my ER diagram by /u/sandmann07 (Database) on January 15, 2026 at 1:52 pm
I am learning SQL and working on a personal project. Before I go ahead and build this database, I just wanted to get some feedback on my ER diagram. Specifically, I am not sure whether the types of relations I made are accurate. But, I am definitely open to any other feedback you might have. My goal is to create a basic airlines operations database that has the ability to track passenger, airport, and airline info to build itineraries. submitted by /u/sandmann07 [link] [comments]
- Any free Postgres Provider that gives async io by /u/ThreadStarver (Database) on January 15, 2026 at 6:54 am
Looked at neon they do give pg 18 but it isn't built with io_uring, can't truly get the benifits of async io select version(); version ----------------------------------------------------------------------------------------------------------------------- PostgreSQL 18.1 (32149dd) on aarch64-unknown-linux-gnu, compiled by gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, 64-bit (1 row) neondb=> select name, enumvals from pg_settings where name = 'io_method'; name | enumvals -----------+--------------- io_method | {sync,worker} Any provider that does that for free? submitted by /u/ThreadStarver [link] [comments]
- Is there an efficient way to send thousands to tens of thousands of select statements to PostgreSQL? by /u/paulchauwn (Database) on January 15, 2026 at 3:17 am
I'm creating an app that may require thousands to tens of thousands of select queries to be sent to a PostgreSQL database. Is there an efficient way to handle that many requests? submitted by /u/paulchauwn [link] [comments]
- How do you train “whiteboard thinking” for database interviews? by /u/Various_Candidate325 (Database) on January 14, 2026 at 1:02 pm
I've been preparing for database-related interviews (backend/data/infra role), but I keep running into the same problem: my practical database skills don't always translate well to whiteboard discussions. In my daily work, I rely heavily on context: existing architecture, real data distribution, query plans, metrics, production environment constraints, etc. I iterate and validate hypotheses repeatedly. But whiteboarding lacks all of this. In interviews, I'm asked to design architectures, explain the role of indexes, and clearly articulate trade-offs. All of this has to be done from memory in a few minutes, with someone watching. I'm not very good at "thinking out loud," my thought process seems to take longer than average, and I speak relatively slowly... I get even more nervous and sometimes stutter when an interviewer is watching me. I've tried many methods to improve this "whiteboard thinking" ability. For example, redesigning previous architectures from scratch without looking at notes; practicing explaining design choices verbally; and using IQB interview questions to simulate the types of questions interviewers actually ask. Sometimes I use Beyz coding assistant and practice mock interviews with friends over Zoom to test the coherence of my reasoning when expressed verbally. I also try to avoid using any tools, forcing myself to think independently, but I don't know which of these methods are truly helpful for long-term improvement. How can I quickly improve my whiteboard thinking skills in a short amount of time? Any advice would be greatly appreciated! TIA! submitted by /u/Various_Candidate325 [link] [comments]
- Best practice for creating a test database from production in Azure PostgreSQL? by /u/Additional-Skirt-937 (Database) on January 14, 2026 at 12:51 am
Hi Everyone, We’re planning a new infrastructure rehaul in our organization. The idea is: A Production database in a Production VNet A separate Testing VNet with a Test DB server When new code is pushed to the test environment, a test database is created from production data I’m leaning toward using Azure’s managed database restore from backup to create the test database. However, our sysadmin suggests manually dumping the production database (pg_dump) and restoring it into the test DB using scripts as part of the deployment. For those who’ve done this in Azure: Which approach is considered best practice? Is managed restore suitable for code-driven test deployments, or is pg_dump more common? Any real-world pros/cons? Would appreciate hearing how others handle this. Thanks! submitted by /u/Additional-Skirt-937 [link] [comments]
- A little problem by /u/Comfortable_Fly_6372 (Database) on January 13, 2026 at 8:39 pm
I’m having a bit of a problem with my website. I sent it off of digital products and the problem is that I have roughly around over 1 million files to upload to the site. The problem is not with the amount of storage but with the sheer number of files from my hosting plan I’m only allowed 700,000 files and unfortunately that will not be enough. I’m using C panel. and they were unsure what to do. I need the solution for this. They need at least 100 GB. Any suggestions anyone? For context these are zip files and video files. submitted by /u/Comfortable_Fly_6372 [link] [comments]
- ERP customizations - when is it time to stop adding features? by /u/rennan (Database) on January 13, 2026 at 2:34 pm
Our company's ERP system started with a few basic (but important) customizations, but over time each department has added new features based on what they need. And that makes sense because at first, we 100% needed to improve workflows, but now I'm seeing more and more bugs and slowdowns. The problem is, the more we customize, the harder it becomes to maintain. And whenever we need a really important big upgrade, it's kind of like building on top of crap.. So how can you tell when there's been too much customization? How do you not let it turn into technical debt? I need to understand this "add more features" VS clean up what you have thing, and whether or not we need to bring someone in to help, since we're thinking we can get Leverage Tech for ERP but we don't want to pay for a full new system (yet). submitted by /u/rennan [link] [comments]