DjamgaMind: Audio Intelligence for the C-Suite (Energy, Healthcare, Finance)
Are you drowning in dense legal text? DjamgaMind is the new audio intelligence platform that turns 100-page healthcare or energy mandates into 5-minute executive briefings. Whether you are navigating Bill C-27 (Canada) or the CMS-0057-F Interoperability Rule (USA), our AI agents decode the liability so you don’t have to. 👉 Start your specialized audio briefing today at Djamgamind.com
AI Jobs and Career
I wanted to share an exciting opportunity for those of you looking to advance your careers in the AI space. You know how rapidly the landscape is evolving, and finding the right fit can be a challenge. That's why I'm excited about Mercor – they're a platform specifically designed to connect top-tier AI talent with leading companies. Whether you're a data scientist, machine learning engineer, or something else entirely, Mercor can help you find your next big role. If you're ready to take the next step in your AI career, check them out through my referral link: https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1. It's a fantastic resource, and I encourage you to explore the opportunities they have available.
- Full Stack Engineer [$150K-$220K]
- Software Engineer, Tooling & AI Workflow, Contract [$90/hour]
- DevOps Engineer, India, Contract [$90/hour]
- More AI Job Opportunities here
| Job Title | Status | Pay |
|---|---|---|
| Full-Stack Engineer | Strong match, Full-time | $150K - $220K / year |
| Developer Experience and Productivity Engineer | Pre-qualified, Full-time | $160K - $300K / year |
| Software Engineer - Tooling & AI Workflows (Contract) | Contract | $90 / hour |
| DevOps Engineer (India) | Full-time | $20K - $50K / year |
| Senior Full-Stack Engineer | Full-time | $2.8K - $4K / week |
| Enterprise IT & Cloud Domain Expert - India | Contract | $20 - $30 / hour |
| Senior Software Engineer | Contract | $100 - $200 / hour |
| Senior Software Engineer | Pre-qualified, Full-time | $150K - $300K / year |
| Senior Full-Stack Engineer: Latin America | Full-time | $1.6K - $2.1K / week |
| Software Engineering Expert | Contract | $50 - $150 / hour |
| Generalist Video Annotators | Contract | $45 / hour |
| Generalist Writing Expert | Contract | $45 / hour |
| Editors, Fact Checkers, & Data Quality Reviewers | Contract | $50 - $60 / hour |
| Multilingual Expert | Contract | $54 / hour |
| Mathematics Expert (PhD) | Contract | $60 - $80 / hour |
| Software Engineer - India | Contract | $20 - $45 / hour |
| Physics Expert (PhD) | Contract | $60 - $80 / hour |
| Finance Expert | Contract | $150 / hour |
| Designers | Contract | $50 - $70 / hour |
| Chemistry Expert (PhD) | Contract | $60 - $80 / hour |
How does a database handle pagination?

It doesn’t. First, a database is a collection of related data, so I assume you mean a DBMS or a database language.
Second, pagination is generally a function of the front-end and/or middleware, not the database layer.
But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row number, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.
That may not be the most efficient or effective implementation, though.
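As an illustrative sketch of LIMIT/OFFSET paging (the table and column names are invented here, and SQLite stands in for whatever DBMS is actually in use):

```python
import sqlite3

# In-memory demo database with 10 rows (purely illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    [(f"user{i}",) for i in range(1, 11)],
)

def fetch_page(page: int, page_size: int = 3):
    """Return one 'page' of rows using LIMIT/OFFSET over a stable ORDER BY."""
    offset = (page - 1) * page_size
    cur = conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset),
    )
    return cur.fetchall()

print(fetch_page(1))  # rows 1-3
print(fetch_page(2))  # rows 4-6
```

Note the ORDER BY: without a stable sort, successive pages can overlap or skip rows, since SQL gives no ordering guarantee otherwise.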

So how do you propose pagination should be done?
In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.
Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.
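A minimal sketch of that middleware-layer cache. A plain dict with expiry timestamps stands in for Redis here (a real deployment would use something like redis-py's SETEX/GET); the key scheme and TTL are assumptions for illustration.

```python
import time

# Cache maps "query_key:page" -> (expiry timestamp, cached rows).
_cache: dict = {}
TTL_SECONDS = 30.0

def get_page(query_key: str, page: int, load_page):
    """Serve a page from the cache, falling back to the database loader."""
    key = f"{query_key}:{page}"
    entry = _cache.get(key)
    if entry and time.monotonic() < entry[0]:
        return entry[1]                        # cache hit
    rows = load_page(page)                     # cache miss: hit the database
    _cache[key] = (time.monotonic() + TTL_SECONDS, rows)
    return rows

# Fake database loader that records how often it is actually called.
calls = []
def fake_loader(page):
    calls.append(page)
    return [f"row{page}-{i}" for i in range(3)]

get_page("users:all", 1, fake_loader)
get_page("users:all", 1, fake_loader)  # second call is served from the cache
```

The TTL is the knob that trades freshness for database load; data that changes often needs a short TTL or explicit invalidation.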
What if you have 30,000-plus rows? Do you fetch all of them from the database and cache them in Redis?
I feel the most efficient solution is still OFFSET and LIMIT. It doesn’t make sense to use a database and then end up putting all of your data in Redis, especially data that changes a lot; Redis is not meant to store your entire dataset.
If you have a large data set, use OFFSET and LIMIT: pulling only what is needed from the database into main memory (and perhaps caching those rows in Redis) at any point in time is very efficient.
With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.
More often, there’s a much better way of restricting 30,000 rows: some search criterion that significantly reduces the displayed volume of rows, ideally to a single page or a few pages (which are appropriate to cache in Redis).
It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or this small set of records.
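A minimal sketch of this idea, with invented table and column names and SQLite standing in for the real DBMS: a search criterion cuts the 30,000 rows down first, and LIMIT then caps what one request returns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
# 30,000 rows, of which only every 1000th matches the filter.
conn.executemany(
    "INSERT INTO users (name, city) VALUES (?, ?)",
    [(f"user{i}", "Calgary" if i % 1000 == 0 else "Other")
     for i in range(1, 30001)],
)

# The WHERE clause reduces 30,000 rows to 30 candidates;
# LIMIT caps the response at one page of 25.
rows = conn.execute(
    "SELECT id, name FROM users WHERE city = ? ORDER BY id LIMIT 25",
    ("Calgary",),
).fetchall()
print(len(rows))  # prints 25
```

With an index on the filtered column, the database never has to materialize the full 30,000 rows for this request at all.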
Invest in your future today by enrolling in this Azure Fundamentals course - Pass the Azure Fundamentals Exam with Ease: Master the AZ-900 Certification with the Comprehensive Exam Preparation Guide!
- AWS Certified AI Practitioner (AIF-C01): Conquer the AWS Certified AI Practitioner exam with our AI and Machine Learning For Dummies test prep. Master fundamental AI concepts, AWS AI services, and ethical considerations.
- Azure AI Fundamentals: Ace the Azure AI Fundamentals exam with our comprehensive test prep. Learn the basics of AI, Azure AI services, and their applications.
- Google Cloud Professional Machine Learning Engineer: Nail the Google Professional Machine Learning Engineer exam with our expert-designed test prep. Deepen your understanding of ML algorithms, models, and deployment strategies.
- AWS Certified Machine Learning Specialty: Dominate the AWS Certified Machine Learning Specialty exam with our targeted test prep. Master advanced ML techniques, AWS ML services, and practical applications.
- AWS Certified Data Engineer Associate (DEA-C01): Set yourself up for promotion, get a better job, or increase your salary by acing the AWS DEA-C01 certification.
Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.
I know that MySQL has LIMIT offset, size and that Oracle has ROW_NUMBER or something like that.
But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?
If it does the full fetch every time, then it seems quite inefficient.
If it does full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows that it has already fetched all the data and just needs to extract next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?
Answer: First of all, do not assume in advance whether something will be quick or slow without taking measurements, and do not complicate the code up front to download 12 pages at once and cache them because “it seems to me that it will be faster”.
YAGNI principle – the programmer should not add functionality until deemed necessary.
Do it in the simplest way (ordinary pagination, one page at a time) and measure how it works in production. If it is slow, then try a different method; if the speed is satisfactory, leave it as it is.
From my own practice: an application retrieves data from a table containing about 80,000 records; the main table is joined with 4-5 additional lookup tables; the whole query is paginated at about 25-30 records per page, about 2,500-3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Measurements on the production system at the server side show that the average time (median, the 50th percentile) to retrieve one page is about 300 ms. The 95th percentile is less than 800 ms, meaning 95% of single-page requests complete in under 800 ms; when we add transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total time is under 2 seconds. That’s enough; users are happy.
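If measurement does show offset-based pagination to be slow, one commonly tried “different method” is keyset (seek) pagination, which filters on the last key seen instead of counting skipped rows. A sketch with invented names, again using SQLite as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.executemany(
    "INSERT INTO orders (item) VALUES (?)",
    [(f"item{i}",) for i in range(1, 101)],
)

def next_page(after_id: int, page_size: int = 10):
    """Seek past the last seen id; with an index on id the database
    never scans the skipped rows, unlike a large OFFSET."""
    return conn.execute(
        "SELECT id, item FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()

page1 = next_page(after_id=0)
page2 = next_page(after_id=page1[-1][0])  # resume after page 1's last row
```

The trade-off: keyset pagination stays fast at any depth but only supports next/previous navigation, not jumping to an arbitrary page number.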
And some theory: see this answer to understand the purpose of the Pagination pattern.
- Real-World Case: Managing LOB Data at Scale in SQL Server: From Filegroups to Partitioning by Rafael Rampineli (Database on Medium) on January 15, 2026 at 2:43 pm
LOB data rarely causes problems on day one. The real damage happens quietly, over years, until storage, backups, and maintenance collapse…
- Best Practices for Reloading Historical Healthcare Data by syed Jaffery (Database on Medium) on January 15, 2026 at 2:10 pm
Enterprise Data Warehouse Decommissioning
- B-Trees vs. LSM-Trees: Why Your Choice of DB is Failing by Tech&Talk (Database on Medium) on January 15, 2026 at 2:02 pm
You picked PostgreSQL because it’s “battle-tested.” Now your write workload is melting your disk budget.
- Looking for feedback on my ER diagram by /u/sandmann07 (Database) on January 15, 2026 at 1:52 pm
I am learning SQL and working on a personal project. Before I go ahead and build this database, I just wanted to get some feedback on my ER diagram. Specifically, I am not sure whether the types of relations I made are accurate, but I am definitely open to any other feedback you might have. My goal is to create a basic airline operations database that can track passenger, airport, and airline info to build itineraries.
- ⚙️ Optimization Techniques by Rajesh Dixit (Database on Medium) on January 15, 2026 at 1:06 pm
Optimizing a database requires a combination of indexing strategies, memory tuning, query optimization, and periodic maintenance.
- Master AWS DynamoDB — Interview Preparation Questions and Answers by Cowin_129 (Database on Medium) on January 15, 2026 at 1:03 pm
All you need
- Top 10 SQL Queries Every Data Analyst Must Know by Anurodh Kumar (Database on Medium) on January 15, 2026 at 12:32 pm
PowerBI Course at Rs 99
- If You Can Answer These DB Questions, You’re Already Ahead of 90% by Lets Learn Now (Database on Medium) on January 15, 2026 at 12:07 pm
PART-1
- Designing Data-Intensive Applications book revised: Part III by Tusharsingh Baghel (Database on Medium) on January 15, 2026 at 12:04 pm
Covers derived-data concepts from the DDIA book
- The Complete Authoritative Guide to SQL Server Replication Architecture by CData Software (Database on Medium) on January 15, 2026 at 12:02 pm
Organizations that master SQL Server replication gain a decisive edge in distributing data securely while building AI-ready pipelines that…
- Spring Cursor Paging… going for 1.0 Release by Peter Vigier (Database on Medium) on January 15, 2026 at 11:45 am
Since my last post, more than a year has passed, spent improving and hardening the APIs and fixing bugs. Now the 1.0 release of the…
- My understanding of XTDB (Immutable Databases) by /u/erjngreigf (Database) on January 15, 2026 at 10:29 am
- Any free Postgres provider that gives async IO by /u/ThreadStarver (Database) on January 15, 2026 at 6:54 am
Looked at Neon; they do give PG 18, but it isn't built with io_uring, so you can't truly get the benefits of async IO. select version() reports PostgreSQL 18.1 (32149dd) on aarch64-unknown-linux-gnu, and select name, enumvals from pg_settings where name = 'io_method' shows enumvals of only {sync,worker}. Any provider that does that for free?
- Is there an efficient way to send thousands to tens of thousands of select statements to PostgreSQL? by /u/paulchauwn (Database) on January 15, 2026 at 3:17 am
I'm creating an app that may require thousands to tens of thousands of select queries to be sent to a PostgreSQL database. Is there an efficient way to handle that many requests?
- How do you train “whiteboard thinking” for database interviews? by /u/Various_Candidate325 (Database) on January 14, 2026 at 1:02 pm
I've been preparing for database-related interviews (backend/data/infra roles), but I keep running into the same problem: my practical database skills don't always translate well to whiteboard discussions. In my daily work, I rely heavily on context: existing architecture, real data distribution, query plans, metrics, production environment constraints, etc. I iterate and validate hypotheses repeatedly. But whiteboarding lacks all of this. In interviews, I'm asked to design architectures, explain the role of indexes, and clearly articulate trade-offs, all from memory in a few minutes, with someone watching. I'm not very good at "thinking out loud," my thought process seems to take longer than average, and I speak relatively slowly. I get even more nervous and sometimes stutter when an interviewer is watching me. I've tried many methods to improve this "whiteboard thinking" ability: redesigning previous architectures from scratch without looking at notes, practicing explaining design choices verbally, and using IQB interview questions to simulate the types of questions interviewers actually ask. Sometimes I use the Beyz coding assistant and practice mock interviews with friends over Zoom to test the coherence of my reasoning when expressed verbally. I also try to avoid using any tools, forcing myself to think independently, but I don't know which of these methods are truly helpful for long-term improvement. How can I quickly improve my whiteboard thinking skills in a short amount of time? Any advice would be greatly appreciated! TIA!
- Best practice for creating a test database from production in Azure PostgreSQL? by /u/Additional-Skirt-937 (Database) on January 14, 2026 at 12:51 am
Hi everyone, we're planning a new infrastructure rehaul in our organization. The idea: a production database in a production VNet; a separate testing VNet with a test DB server; and when new code is pushed to the test environment, a test database is created from production data. I'm leaning toward using Azure's managed database restore from backup to create the test database. However, our sysadmin suggests manually dumping the production database (pg_dump) and restoring it into the test DB using scripts as part of the deployment. For those who've done this in Azure: which approach is considered best practice? Is managed restore suitable for code-driven test deployments, or is pg_dump more common? Any real-world pros/cons? Would appreciate hearing how others handle this. Thanks!
- A little problem by /u/Comfortable_Fly_6372 (Database) on January 13, 2026 at 8:39 pm
I'm having a bit of a problem with my website, which sells digital products. The problem is that I have roughly over 1 million files to upload to the site. The issue is not the amount of storage but the sheer number of files: my hosting plan only allows 700,000 files, and unfortunately that will not be enough. I'm using cPanel, and they were unsure what to do. I need a solution for this. The files need at least 100 GB. For context, these are zip files and video files. Any suggestions?
- I am building a database which would be durable-first and support all types of data by /u/AmbitiousSwan5130 (Database) on January 13, 2026 at 3:00 pm
I have built an alpha version (https://github.com/ShreyashM17/ShunyaDB) and will keep building it in the upcoming months. It is based on Rust and will eventually support Vector, Document, Graph, and other data types. I am open to your opinions; let me know if I should do something a different way.
- ERP customizations - when is it time to stop adding features? by /u/rennan (Database) on January 13, 2026 at 2:34 pm
Our company's ERP system started with a few basic (but important) customizations, but over time each department has added new features based on what they need. And that makes sense, because at first we 100% needed to improve workflows, but now I'm seeing more and more bugs and slowdowns. The problem is, the more we customize, the harder it becomes to maintain. And whenever we need a really important big upgrade, it's kind of like building on top of crap. So how can you tell when there's been too much customization? How do you keep it from turning into technical debt? I need to understand this "add more features" vs. "clean up what you have" thing, and whether or not we need to bring someone in to help, since we're thinking we can get Leverage Tech for ERP but we don't want to pay for a full new system (yet).
- Has anyone used TalkBI and is it safe to do so? Need honest reviews. by /u/gallade17 (Database) on January 13, 2026 at 11:03 am
Some backstory: my team and I built a SaaS tool that is closing in on 100K MRR, growing at about 10-15% per month. We're located in Europe and our team dynamic is rather conservative: a 5-person team, of which 2 are devs. We've realized from past experience that small, hybrid teams work better, but marketing and product often take up dev time to pull PostgreSQL data because they don't know SQL. We looked at tools that can simplify these database interactions by eliminating the need for SQL: perhaps an AI tool that creates the code for you, keeps things well organized, and makes visual reports. The level of query complexity is not (yet) that big, so it should be doable. But data protection is essential and the most important deciding factor. After looking for a while I identified several open-source options that look reliable (Vanna and chat2db), but it is painfully evident that their GitHub and PR are manipulated with marketing tactics. Hence, despite their claims of data protection and security, I am still uncertain. Then we got a recommendation for TalkBI from a startup friend. They are using it for the same reason we want to, but it's not open source. I noticed it's hosted by European providers and everything is encrypted and secure. Yet the tool is quite new and unpopular compared to the other two options, and TalkBI reviews are scarce. So I am looking for other teams who might have used TalkBI and what you think about it, specifically around encryption standards, how data is (or could be) used by the company, and whether TalkBI is safe to connect our database to. Or, if you know of a company that might have used them, feel free to DM me their name so I can talk with their team directly.
- AI chat inside a SQL editor with schema-aware assistance by /u/ruslan_zasukhin (Database) on January 13, 2026 at 9:16 am
Hi r/Database, I'm one of the developers behind Valentina Studio, a cross-platform database tool (Win, Linux, Mac). In our recent 16.5 release we added an AI chat directly into the SQL editor: not a generic chatbot, but a feature that understands the current query, schema, and referenced tables. The goal is to reduce context switching while keeping SQL execution explicit and controlled. Some design details: the experience is inspired by Copilot-style workflows, adapted for databases. The AI uses your current SQL, schema, and referenced tables as context. You can switch between Ask Mode and Agent Mode; Agent Mode can adjust and run SQL queries when needed. It works with OpenAI, Claude, Gemini, OpenRouter, and xAI, and supports custom instructions per provider. Each SQL editor has its own chat and context, and the AI has access to Valentina Studio's Python engine. What do you think? We are also going to add other information, e.g. the query result.
- The ACID Test: Why We Think Search Needs Transactions by /u/philippemnoel (Database) on January 13, 2026 at 3:11 am
- If you can't leave the Microsoft environment, what reasons are there for buying licenses vs using Express? by /u/Tight-Shallot2461 (Database) on January 13, 2026 at 1:16 am
I need to convince my boss to buy SQL Standard licenses. We are already using Express, but how do I make the argument to buy licenses?
- Web-based Postgres client | Looking for some feedback by /u/Luc_Gibson (Database) on January 12, 2026 at 5:13 pm
I've been building a Postgres database manager that is absolutely stuffed with features, including: an ER diagram & schema navigator, a relationship explorer, database data-quality auditing, a simple dashboard, table skills (pivot-table detection, etc.), and smart data previews (URL, geo, colours, etc.). I really think I've built possibly the best user experience in terms of navigating and getting the most out of your tables. Right now the app is completely standalone; it just stores everything in local storage. Would love to get some feedback on it. I haven't even given it a proper domain or name yet! Let me know what you think: https://schema-two.vercel.app/
- Migrating a legacy Access DB to PostgreSQL. Need a true cross-platform frontend (Win/Mac/Linux) with forms & reporting. by /u/thef4f0 (Database) on January 12, 2026 at 1:48 pm
Hi everyone, in our company we are currently migrating a legacy local MS Access database to a self-hosted PostgreSQL server (running on a dedicated rack server). Now I need a frontend solution for 3-4 users working in a mixed environment of Windows, macOS, and Linux. I am essentially looking for "Access features without the internal database engine". Here is what I need specifically. Visual form builder (data entry): the classic "Access user interface" experience, with forms, buttons, input fields, dropdowns, and sub-forms to populate and manage the database efficiently; it needs to be more than just a spreadsheet view, with actual GUI "masks" for the users. Scripting/logic: a functional replacement for VBA to handle button actions and business logic. Visual report designer: this is a hard requirement; I need pixel-perfect printing/PDF generation for invoices and reports. Most modern web builders (like Budibase, NocoDB, etc.) seem great for simple CRUD interfaces but often feel terrible for complex reporting or "dense" data-entry screens. My question: is there a professional tool that actually covers all Access capabilities (especially the rich forms and reporting) but runs on top of Postgres and works across all OSs? Thanks!
- Is there a name for additional tables created during the first stage of normalisation? by /u/A_British_Dude (Database) on January 11, 2026 at 9:31 pm
I am new to databases and need to make one for my A-level coursework. While normalising my relational database I ended up creating many smaller tables that link the main tables and only contain the primary keys of the two tables they are linked to as fields. This is to facilitate the many-to-many relations between tables. Do these tables have an actual name? I haven't been able to find one and am tired of calling them cross-reference tables every time I mention them in the written section. Any help is greatly appreciated!
- PostgreSQL user here—what database is everyone else using? by /u/Automatic-Step-9756 (Database) on January 11, 2026 at 9:02 pm
Working on a backend project and went with PostgreSQL. It's been solid, but I'm always curious what others in the community prefer. What are you using and why?
- Stop using MySQL in 2026, it is not true open source by /u/OttoKekalainen (Database) on January 11, 2026 at 6:41 pm
- Sophisticated Simplicity of Modern SQLite by /u/shivekkhurana (Database) on January 11, 2026 at 2:47 pm
- Vacuuming in PostgreSQL by /u/HyperNoms (Database) on January 11, 2026 at 2:30 pm
Hello guys, I want to understand the concept of transaction ID wraparound and frozen rows: what exactly happens there? I keep getting lost.
- TidesDB 7.1.1 vs RocksDB 10.9.1 Performance Benchmarks by /u/diagraphic (Database) on January 10, 2026 at 1:10 pm
- I'm looking to start with a low-code DB system for a new webapp. Is Supabase all there is? by /u/twitter_is_DEAD (Database) on January 9, 2026 at 10:42 pm
I have some experience with Supabase and they're kinda everywhere. The hipster in my spirit wants to try something new and lesser-known. Does anyone have any good recommendations that aren't either completely code and/or paired with vibecode/low-code frontend builders (like Lovable or Bubble)? Headless database tools, I guess? Edit: Postgres with a vector DB??
- TidesDB 7 vs RocksDB 10 Under Sync Mode by /u/diagraphic (Database) on January 9, 2026 at 5:01 pm
- we need to stop worrying about INFINITE SCALE for databases that haven't even hit 1 GB yet by /u/daniel_odiase (Database) on January 9, 2026 at 9:18 am
it feels like every time i start a project, people want to talk about distributed systems, global scaling, and NoSQL flexibility before we even have enough rows to fill an excel sheet. it is a total trap. we spend weeks setting up these complex, "future-proof" clusters that are a nightmare to query and even harder to back up. we are basically building a rocket ship to go to the grocery store. meanwhile, a simple, "boring" postgres or mysql setup on a single server could handle our entire workload with 90% less stress and a much smaller bill.