DjamgaMind: Audio Intelligence for the C-Suite (Energy, Healthcare, Finance)
Are you drowning in dense legal text? DjamgaMind is the new audio intelligence platform that turns 100-page healthcare or Energy mandates into 5-minute executive briefings. Whether you are navigating Bill C-27 (Canada) or the CMS-0057-F Interoperability Rule (USA), our AI agents decode the liability so you don’t have to. 👉 Start your specialized audio briefing today at Djamgamind.com
AI Jobs and Career
I wanted to share an exciting opportunity for those of you looking to advance your careers in the AI space. You know how rapidly the landscape is evolving, and finding the right fit can be a challenge. That's why I'm excited about Mercor – they're a platform specifically designed to connect top-tier AI talent with leading companies. Whether you're a data scientist, machine learning engineer, or something else entirely, Mercor can help you find your next big role. If you're ready to take the next step in your AI career, check them out through my referral link: https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1. It's a fantastic resource, and I encourage you to explore the opportunities they have available.
- Full Stack Engineer [$150K-$220K]
- Software Engineer, Tooling & AI Workflow, Contract [$90/hour]
- DevOps Engineer, India, Contract [$90/hour]
- More AI job opportunities here
| Job Title | Status | Pay |
|---|---|---|
| Full-Stack Engineer | Strong match, Full-time | $150K - $220K / year |
| Developer Experience and Productivity Engineer | Pre-qualified, Full-time | $160K - $300K / year |
| Software Engineer - Tooling & AI Workflows (Contract) | Contract | $90 / hour |
| DevOps Engineer (India) | Full-time | $20K - $50K / year |
| Senior Full-Stack Engineer | Full-time | $2.8K - $4K / week |
| Enterprise IT & Cloud Domain Expert - India | Contract | $20 - $30 / hour |
| Senior Software Engineer | Contract | $100 - $200 / hour |
| Senior Software Engineer | Pre-qualified, Full-time | $150K - $300K / year |
| Senior Full-Stack Engineer: Latin America | Full-time | $1.6K - $2.1K / week |
| Software Engineering Expert | Contract | $50 - $150 / hour |
| Generalist Video Annotators | Contract | $45 / hour |
| Generalist Writing Expert | Contract | $45 / hour |
| Editors, Fact Checkers, & Data Quality Reviewers | Contract | $50 - $60 / hour |
| Multilingual Expert | Contract | $54 / hour |
| Mathematics Expert (PhD) | Contract | $60 - $80 / hour |
| Software Engineer - India | Contract | $20 - $45 / hour |
| Physics Expert (PhD) | Contract | $60 - $80 / hour |
| Finance Expert | Contract | $150 / hour |
| Designers | Contract | $50 - $70 / hour |
| Chemistry Expert (PhD) | Contract | $60 - $80 / hour |
How do you make a Python loop faster?
Programmers are always looking for ways to make their code more efficient, and loops are a common bottleneck. Python is a high-level programming language widely used by developers and software engineers, known for its readability and ease of use. One downside, however, is that its interpreted loops can be slow, which becomes a problem when you need to process large amounts of data. There are several ways to make Python loops faster. One is to move the loop into compiled code, for example by rewriting the hot path in C. Another is to use an optimized library such as NumPy, whose array operations run in C. Finally, you can vectorize your code — express the computation as whole-array operations that can run on optimized native code, a GPU, or another parallel computing platform. Used together, these techniques can significantly speed up your Python code.
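To illustrate "pushing the loop into C" without any third-party library, note that the built-in `sum()` runs its loop in C; this is a minimal, indicative sketch (timings will vary by machine):

```python
import timeit

data = list(range(100_000))

def python_loop_sum(xs):
    """Sum with an explicit Python-level loop."""
    total = 0
    for x in xs:
        total += x
    return total

# Both give the same answer...
assert python_loop_sum(data) == sum(data)

# ...but sum() runs its loop in C, so it is usually several times faster.
loop_t = timeit.timeit(lambda: python_loop_sum(data), number=50)
c_t = timeit.timeit(lambda: sum(data), number=50)
print(f"Python loop: {loop_t:.3f}s, built-in sum: {c_t:.3f}s")
```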
According to Vladislav Zorov: if you're not using NumPy or something similar, try list comprehension expressions where possible. Those are handled by the C code of the Python interpreter instead of looping in Python — basically the same idea as the NumPy solution: you just don't want the loop running in Python.
Example: (Python 3.0)
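A minimal sketch of the comprehension-vs-loop comparison (works on any modern Python 3):

```python
# Loop version: each iteration is executed as Python bytecode
squares_loop = []
for n in range(10):
    squares_loop.append(n * n)

# Comprehension version: the iteration is driven by the interpreter's C code
squares_comp = [n * n for n in range(10)]

assert squares_loop == squares_comp  # same result; the comprehension is typically faster
```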

Python list-traversal tip: to keep track of both indices and values inside a loop, instead of this:

    for i in range(len(l)):
        x = l[i]

use this:

    for i, x in enumerate(l):
        ...

It is roughly twice as fast, and the code reads better.
Finally, developers can also improve the performance of their code by making use of caching. By caching values that are computed inside a loop, programmers can avoid having to recalculate them each time through the loop. By taking these steps, programmers can make their Python code more efficient and faster.
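One way to apply this with the standard library is `functools.lru_cache`, which memoizes a function so a value computed inside a loop is only calculated once — a minimal sketch (the `expensive` function is a hypothetical stand-in):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    """Stand-in for a costly computation; cached after the first call."""
    return sum(i * i for i in range(n))

# Without caching, expensive(1000) would be recomputed on every pass;
# with lru_cache, every iteration after the first hits the cache.
total = 0
for _ in range(10_000):
    total += expensive(1000)

print(expensive.cache_info().hits)  # 9999 (one miss, then cache hits)
```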
Very Important: Don’t worry about code efficiency until you find yourself needing to worry about code efficiency.
The place where you think about efficiency is within the logic of your implementations.
This is where “big O” discussions come into play. If you aren’t familiar, here is a link on the topic
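As a concrete instance of the kind of big-O choice that matters here: membership testing is O(n) on a list but O(1) on average for a set. A minimal illustration (timings indicative):

```python
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)
needle = 99_999  # worst case for the list: scanned end to end

# O(n): scans the list element by element
list_time = timeit.timeit(lambda: needle in haystack_list, number=100)
# O(1) on average: a single hash lookup
set_time = timeit.timeit(lambda: needle in haystack_set, number=100)

print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

Same answer either way, but the set wins by orders of magnitude as the data grows — no micro-optimization of the loop body comes close.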
What are the top 10 Wonders of computing and software engineering?

Do you want to learn Python? We found 5 online coding courses for beginners.
Python Coding Bestsellers on Amazon
AI-Powered Professional Certification Quiz Platform
Web | iOS | Android | Windows
Are you passionate about AI and looking for your next career challenge? In the fast-evolving world of artificial intelligence, connecting with the right opportunities can make all the difference. We're excited to recommend Mercor, a premier platform dedicated to bridging the gap between exceptional AI professionals and innovative companies.
Whether you're seeking roles in machine learning, data science, or other cutting-edge AI fields, Mercor offers a streamlined path to your ideal position. Explore the possibilities and accelerate your AI career by visiting Mercor through our exclusive referral link:
Find Your AI Dream Job on Mercor
Your next big opportunity in AI could be just a click away!
https://amzn.to/3s3KXc3
AI-Powered Jobs Interview Warmup For Job Seekers

⚽️Comparative Analysis: Top Calgary Amateur Soccer Clubs – Outdoor 2025 Season (Kids' Programs by Age Group)
The Best Python Coding and Programming Bootcamps
We’ve also included a scholarship resource with more than 40 unique scholarships to provide additional financial support.
Python Coding Breaking News
- diwire: dependency injection that auto-wires from type hints (zero runtime deps) by /u/zayatsdev (Python) on February 15, 2026 at 6:18 pm
What My Project Does

Full disclosure: I'm the author/maintainer of diwire. It’s a Python 3.10+ dependency injection container that builds your object graph from type hints, so you can often skip registration boilerplate entirely.

Repo: https://github.com/maksimzayats/diwire
Docs: https://docs.diwire.dev

Here's the "auto-wire a graph" baseline:

```python
from dataclasses import dataclass, field
from diwire import Container

@dataclass
class Database:
    host: str = field(default="localhost", init=False)

@dataclass
class UserRepository:
    db: Database

@dataclass
class UserService:
    repo: UserRepository

container = Container()
service = container.resolve(UserService)
print(service.repo.db.host)  # localhost
```

Comparison

Many DI libraries/framework patterns start by making you register everything up front; diwire flips the default: auto-wire concretes from annotations, then opt into explicit registrations where you care about boundaries. If you prefer "no implicit wiring", diwire also supports a strict mode so missing deps fail fast and you stay fully explicit.

Performance-wise: in my benchmark suite (strict mode, CPython 3.14.2 on an M1 Pro), speedups ranged up to 2.5x vs rodi, 7.6x vs dishka, 3.3x vs wireup (resolve-only runs also include punq). Repro: make benchmark-report and make benchmark-report-resolve.

If this matches your use case, a GitHub star really helps visibility and motivates continued work!

Questions I'd love feedback on:
- Do you like auto-wiring-by-default, or do you strongly prefer explicit registration? Why?
- What's the first thing that would block adoption for you?
- In your codebase, where has DI been most helpful or most painful?

submitted by /u/zayatsdev [link] [comments]
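The auto-wiring idea is worth demystifying: building an object graph from annotations can be sketched in a few lines of standard-library Python. This toy resolver is illustrative only — it is not diwire's implementation and handles only plain dataclasses:

```python
from dataclasses import dataclass, field, fields, is_dataclass

def toy_resolve(cls):
    """Recursively build a dataclass instance from its field annotations."""
    if not is_dataclass(cls):
        raise TypeError(f"Cannot auto-wire {cls!r}")
    kwargs = {
        f.name: toy_resolve(f.type)
        for f in fields(cls)
        if f.init  # skip init=False fields; they keep their defaults
    }
    return cls(**kwargs)

@dataclass
class Database:
    host: str = field(default="localhost", init=False)

@dataclass
class UserRepository:
    db: Database

@dataclass
class UserService:
    repo: UserRepository

service = toy_resolve(UserService)
print(service.repo.db.host)  # localhost
```

A real container adds scopes, caching, strict-mode registration, and string-annotation resolution on top of this core recursion.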
- Mesa 3.5.0: Agent-based modeling, now with discrete-event scheduling by /u/Balance- (Python) on February 15, 2026 at 6:12 pm
Hi everyone! We just released Mesa 3.5.0, a major feature release of our agent-based modeling Python library. I'm quite proud of this one, because you can now combine traditional agent-based modeling with discrete-event scheduling in a single framework. Release: https://github.com/mesa/mesa/releases/tag/v3.5.0 Docs: https://mesa.readthedocs.io What's Agent-Based Modeling? Ever wondered how bird flocks organize themselves? Or how traffic jams form? Agent-based modeling (ABM) lets you simulate these complex systems by defining simple rules for individual "agents" (birds, cars, people, etc.) and watching how they interact. Instead of writing equations for the whole system, you model each agent's behavior and let patterns emerge naturally. It's used to study everything from epidemic spread to market dynamics to ecological systems. What's Mesa? Mesa is a Python library for building, analyzing, and visualizing agent-based models. It builds on the scientific Python stack (NumPy, pandas, Matplotlib) and provides specialized tools for spatial relationships, agent management, data collection, and interactive visualization. What's New in 3.5.0? Event scheduling and time advancement Until now, Mesa models ran in lockstep: every agent acts, that's one step, repeat. That works great for many models, but real-world systems often have things happening at different timescales: an ecosystem might have daily foraging, seasonal migration, and yearly reproduction cycles all interacting. 
Mesa 3.5 lets you schedule events at specific times and mix them freely with traditional step-based logic:

```python
# The familiar step-based approach still works (currently)
model.step()

# But now you can also think in terms of time
model.run_for(10)       # Advance 10 time units
model.run_until(50.0)   # Run until a specific time

# Schedule things to happen at specific moments
model.schedule_event(spawn_food, at=25.0)
model.schedule_event(migrate, after=5.0)

# Or set up recurring events
from mesa.time import Schedule
model.schedule_recurring(reproduce, Schedule(interval=30, start=0))
model.schedule_recurring(seasonal_change, Schedule(interval=90, end=365))
```

This opens up a whole class of models that were difficult to build before: epidemics with incubation periods, ecosystems with seasonal dynamics, supply chains, social networks with asynchronous interactions, or any system where different things happen on different schedules. And for traditional ABMs, everything works exactly as before. The event system (previously experimental) is now stable and lives in mesa.time.

Create agents from DataFrames

If your agent data lives in a CSV or database, you can now skip the boilerplate and create agents directly from a pandas DataFrame:

```python
df = pd.read_csv("population.csv")  # columns: age, income, location
agents = Person.from_dataframe(model, df)
```

Each row becomes an agent, with columns mapped to constructor arguments. Handy for initializing models from census data, survey results, or any tabular dataset.

Experimental highlights

Some exciting features in active development:
- Scenarios: Define computational experiments separately from model logic. Swap parameter sets without touching your model code, with full visualization support
- Reactive data collection: A new event-driven DataRecorder that can write to memory, SQLite, Parquet, or JSON. Collect different metrics at different intervals
- Meta-agents: Improved support for hierarchical structures (departments within organizations, persons within households, organs within organisms)

These are experimental and may change between releases, but they're shaping up nicely.

Preparing for Mesa 4.0

We're deprecating several legacy patterns (all still work, just with warnings):
- seed parameter → use rng instead
- AgentSet indexing → use to_list() for list operations
- Portrayal dictionaries → use AgentPortrayalStyle
- Experimental Simulator classes → use the new Model methods above

See the migration guide for details.

Get started: pip install --upgrade mesa

New to Mesa? Check out the tutorials. We have new ones specifically on agent activation and event scheduling. Upgrading? The migration guide has you covered. Nothing breaks in this release, but we're announcing some removals for 4.0. This release was possible thanks to 29 contributors, of which 5 are new. Thanks to everyone involved! Questions or feedback? Join us on GitHub Discussions or Matrix Chat. submitted by /u/Balance- [link] [comments]
- defusedxml or lxml for parsing xml files? by /u/AffectWizard0909 (Python) on February 15, 2026 at 6:07 pm
Hello! I was wondering whether lxml or defusedxml is the better choice for parsing/reading external XML files. I have heard that defusedxml is more robust against standard XML attacks (XXE etc.), so I was leaning towards defusedxml, but I wanted to know if lxml also has the same security protections, or why I might want to consider lxml over defusedxml. submitted by /u/AffectWizard0909 [link] [comments]
- Robyn (web framework) introduces @app.websocket decorator syntax by /u/stealthanthrax (Python) on February 15, 2026 at 3:17 pm
For the unaware - Robyn is a fast, async Python web framework built on a Rust runtime. We're introducing a new @app.websocket decorator syntax for WebSocket handlers. It's a much cleaner DX compared to the older class-based approach, and we'll be deprecating the old syntax soon. This is also groundwork for upcoming Pydantic integration. Wanted to share it with folks outside the Robyn Discord. You can check out the release at - https://github.com/sparckles/Robyn/releases/tag/v0.78.0 Let me know if you have any questions/suggestions 😀 submitted by /u/stealthanthrax [link] [comments]
- Benchmarks: Kreuzberg, Apache Tika, Docling, Unstructured.io, PDFPlumber, MinerU and MuPDF4LLM by /u/Goldziher (Python) on February 15, 2026 at 10:21 am
Hi all,

We finished a bunch of benchmarks of Kreuzberg and other major open source tools in the text-extraction / document-intelligence space. This was very important for us because we practice TDD -> Truth Driven Development, and establishing the baseline is essential.

Methodology

Kreuzberg includes a benchmark harness built in Rust (you can see it in the repo under the /tools folder), and the benchmarks run in GitHub Actions CI on Linux runners (see .github/workflows/benchmarks.yaml). The goal is to compare extractors on the same inputs with the same measurement approach.

How we keep comparisons fair:
- Same fixture set for every tool, and tools only run on file types they claim to support (no forced unsupported conversions).
- Same iteration count and timeouts per document.
- Two modes: single-file (one document at a time) to compare latency, and batch (limited concurrency) to compare throughput-oriented behavior.

What we report: p50/p95/p99 across documents for duration, extraction duration (when available), throughput, memory, and success rate. Optional quality scoring compares extracted text to ground truth.

CI consolidation: Some tools are sharded across multiple CI jobs; results are consolidated into one aggregated report for this run.

Benchmark Results

Data: 15,288 extractions across 56 file types; 3 measured iterations per doc (plus warmup). How these are computed: for each tool+mode, we compute percentiles per file type and then take a simple average across the file types the tool actually ran. These are suite averages, not a single-format benchmark.
Single-file: Latency

| Tool | Picked | Types | Success | Duration p50/p95/p99 (ms) | Extraction p50/p95/p99 (ms) |
|---|---|---|---|---|---|
| kreuzberg | kreuzberg-rust:single | 56/56 | 99.13% (567/572) | 1.11/7.35/24.73 | 1.11/7.35/24.73 |
| tika | tika:single | 45/56 | 96.19% (530/551) | 9.31/39.76/63.22 | 10.14/46.21/74.42 |
| pandoc | pandoc:single | 17/56 | 92.34% (229/248) | 40.07/88.22/99.03 | 38.68/96.22/109.43 |
| pymupdf4llm | pymupdf4llm:single | 9/56 | 74.02% (94/127) | 79.89/1240.17/7586.50 | 705.37/11146.92/68258.02 |
| markitdown | markitdown:single | 13/56 | 96.26% (309/321) | 128.42/420.52/1385.22 | 114.43/404.08/1365.25 |
| pdfplumber | pdfplumber:single | 1/56 | 96.84% (92/95) | 145.95/3643.88/44101.65 | 138.87/3620.72/43984.61 |
| unstructured | unstructured:single | 25/56 | 94.88% (389/410) | 3391.13/9441.15/11588.30 | 3496.32/9792.28/12028.43 |
| docling | docling:single | 13/56 | 96.07% (293/305) | 14323.02/21083.52/25565.68 | 14277.51/21035.61/25515.57 |
| mineru | mineru:single | 3/56 | 76.47% (78/102) | 33608.01/57333.52/63427.67 | 33603.57/57329.21/63423.63 |

Single-file: Throughput

| Tool | Picked | Throughput p50/p95/p99 (MB/s) |
|---|---|---|
| kreuzberg | kreuzberg-rust:single | 127.36/225.99/246.72 |
| tika | tika:single | 2.55/13.69/17.03 |
| pandoc | pandoc:single | 0.16/19.45/22.26 |
| pymupdf4llm | pymupdf4llm:single | 0.01/0.11/0.21 |
| markitdown | markitdown:single | 0.17/25.18/31.25 |
| pdfplumber | pdfplumber:single | 0.67/10.74/16.95 |
| unstructured | unstructured:single | 0.02/0.66/0.79 |
| docling | docling:single | 0.10/0.72/0.92 |
| mineru | mineru:single | 0.00/0.01/0.02 |

Single-file: Memory

| Tool | Picked | Memory p50/p95/p99 (MB) |
|---|---|---|
| kreuzberg | kreuzberg-rust:single | 1191/1205/1244 |
| tika | tika:single | 13473/15040/15135 |
| pandoc | pandoc:single | 318/461/477 |
| pymupdf4llm | pymupdf4llm:single | 239/255/262 |
| markitdown | markitdown:single | 1253/1369/1427 |
| pdfplumber | pdfplumber:single | 671/854/2227 |
| unstructured | unstructured:single | 8975/11756/12084 |
| docling | docling:single | 32857/38653/39844 |
| mineru | mineru:single | 92769/108367/110157 |

Batch: Latency

| Tool | Picked | Types | Success | Duration p50/p95/p99 (ms) | Extraction p50/p95/p99 (ms) |
|---|---|---|---|---|---|
| kreuzberg | kreuzberg-php:batch | 49/56 | 99.11% (555/560) | 1.48/9.07/28.41 | 1.23/8.46/27.71 |
| tika | tika:batch | 45/56 | 96.19% (530/551) | 9.77/39.51/63.24 | 10.32/45.61/74.43 |
| pandoc | pandoc:batch | 17/56 | 92.34% (229/248) | 39.55/87.65/98.38 | 38.08/95.73/108.61 |
| pymupdf4llm | pymupdf4llm:batch | 9/56 | 73.23% (93/127) | 79.41/1156.12/2191.20 | 700.64/10390.92/19702.30 |
| markitdown | markitdown:batch | 13/56 | 96.26% (309/321) | 128.42/428.52/1399.76 | 114.16/412.33/1380.23 |
| pdfplumber | pdfplumber:batch | 1/56 | 96.84% (92/95) | 144.55/3638.77/43841.47 | 138.04/3615.70/43726.91 |
| unstructured | unstructured:batch | 25/56 | 94.88% (389/410) | 3417.19/9687.10/11835.26 | 3523.92/10047.87/12285.54 |
| docling | docling:batch | 13/56 | 96.39% (294/305) | 12911.97/19893.93/24258.61 | 12872.82/19849.65/24212.54 |
| mineru | mineru:batch | 3/56 | 76.47% (78/102) | 36708.82/66747.74/73825.28 | 36703.28/66743.33/73820.78 |

Batch: Throughput

| Tool | Picked | Throughput p50/p95/p99 (MB/s) |
|---|---|---|
| kreuzberg | kreuzberg-php:batch | 69.45/167.41/188.63 |
| tika | tika:batch | 2.34/13.89/16.73 |
| pandoc | pandoc:batch | 0.16/20.97/24.00 |
| pymupdf4llm | pymupdf4llm:batch | 0.01/0.11/0.21 |
| markitdown | markitdown:batch | 0.17/25.12/31.26 |
| pdfplumber | pdfplumber:batch | 0.67/11.05/17.73 |
| unstructured | unstructured:batch | 0.02/0.68/0.81 |
| docling | docling:batch | 0.11/0.73/0.96 |
| mineru | mineru:batch | 0.00/0.01/0.02 |

Batch: Memory

| Tool | Picked | Memory p50/p95/p99 (MB) |
|---|---|---|
| kreuzberg | kreuzberg-php:batch | 2224/2269/2324 |
| tika | tika:batch | 13661/16772/16946 |
| pandoc | pandoc:batch | 320/463/479 |
| pymupdf4llm | pymupdf4llm:batch | 241/259/273 |
| markitdown | markitdown:batch | 1256/1380/1434 |
| pdfplumber | pdfplumber:batch | 649/832/2205 |
| unstructured | unstructured:batch | 8958/11751/12065 |
| docling | docling:batch | 32966/38823/40536 |
| mineru | mineru:batch | 105619/118966/120810 |

Notes:
- CPU is measured by the harness, but it is not included in this aggregated report.
- Throughput is computed as file_size / effective_duration (uses tool-reported extraction time when available). If a slice has no valid positive throughput samples after filtering, it can drag the suite average toward 0.
- Memory comes from process-tree RSS sampling (parent plus children) and is summed across that tree; shared pages across processes can make values look larger than 'real' RAM.
- Batch memory numbers are not directly comparable to single-file peak RSS: in batch mode the harness amortizes process memory across files in the batch by file-size fraction.
- All tools except MuPDF4LLM are permissive OSS. MuPDF4LLM is AGPL, and Unstructured.io had (has?) some AGPL dependencies, which might make it problematic.

submitted by /u/Goldziher [link] [comments]
- My Siamese NN that attempts to solve graph isomorphism by /u/xerohawkxd (Python) on February 15, 2026 at 10:12 am
https://github.com/samarvir1/SiameseNN-Graph-Isomorphism

What it does: It is a Siamese Graph Neural Network, using specifically the Graph Isomorphism Network (GIN) layer, to learn permutation-invariant graph embeddings for the graph isomorphism problem. It includes t-SNE visualization.

Target audience: cheminformatics researchers.

My goal was to train a model which can determine if two graphs are isomorphic. I made this roughly 2 months ago, during my winter break, and I've only started being active on Reddit in the past two weeks, so I decided to share it now. So what are your thoughts? submitted by /u/xerohawkxd [link] [comments]
- LazyLib – Automatically create a venv + install missing dependencies before running a Python script by /u/snoopxz (Python) on February 15, 2026 at 9:03 am
## What My Project Does

LazyLib is a small CLI tool that makes running Python scripts easier by automatically creating/using a virtual environment and installing missing dependencies based on the script’s imports.

## Target Audience

People who often run small Python scripts (personal scripts, quick experiments, downloaded scripts) and want to avoid manually setting up a venv and installing packages every time.

## Comparison

Unlike manually running `python -m venv` + `pip install ...`, LazyLib tries to detect required dependencies automatically (via AST import parsing) and installs what’s missing before executing the script.

## GitHub

https://github.com/snoopzx/lazylib

## Feedback

I’d love feedback on the approach (especially security/reproducibility concerns), and suggestions for improvements. submitted by /u/snoopxz [link] [comments]
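The "AST import parsing" approach described above can be sketched with the standard library's `ast` module. This is an illustrative simplification, not LazyLib's actual code (a real tool must also map import names like `bs4` to PyPI package names like `beautifulsoup4` and exclude the stdlib):

```python
import ast

def top_level_imports(source: str) -> set[str]:
    """Return the top-level package names a script imports."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            # level == 0 excludes relative imports like "from . import x"
            names.add(node.module.split(".")[0])
    return names

script = "import requests\nfrom bs4 import BeautifulSoup\nimport os.path\n"
print(top_level_imports(script))  # contains 'requests', 'bs4', 'os'
```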
- GoPDFSuit – A JSON-based PDF engine with drag-and-drop layouts. Should I use LaTeX or Typst? by /u/chinmay06 (Python) on February 15, 2026 at 8:02 am
Hey r/Python, I’ve been working on GoPDFSuit, a library designed to move away from the "HTML-to-PDF" struggle by using a strictly JSON-based schema for document generation. The goal is to allow developers to build complex PDF layouts using structured data they already have, paired with a drag-and-drop UI for adjusting component widths and table structures.

The Architecture
- Schema: Pure JSON (no need to learn a specific templating language like Jinja2 or Mako).
- Layout: Supports dynamic draggable widths for tables and nested components.
- Current State: Fully functional for business reports, invoices, and data sheets.

Technical Challenge: Math Implementation

I’m currently at a crossroads for implementing mathematical formula rendering within the JSON strings. Since this is built for a Python-friendly ecosystem, I’m weighing two options:
- LaTeX: The "gold standard." Huge ecosystem, but might be overkill and clunky to escape properly inside JSON strings.
- Typst: The modern alternative. It’s faster, has a much cleaner syntax, and is arguably easier for developers to write by hand.

For those of you handling document automation in Python, which would you rather see integrated? I’m also curious whether you see "JSON-as-a-Layout-Engine" as a viable alternative to the standard Headless Chrome/Playwright approaches for high-performance PDF generation.

In case you want to check the JSON template demo:
Demo Link - https://chinmay-sawant.github.io/gopdfsuit/#/editor
Documentation - https://chinmay-sawant.github.io/gopdfsuit/#/documentation

It also has native Python bindings, or the templates can be called via the API endpoints. submitted by /u/chinmay06 [link] [comments]
- HRA exemption calculator for Indian students by /u/Motor-Sentence582 (Python) on February 15, 2026 at 7:21 am
Namaste Python community! 🙏 I'm a 52 year old accounting teacher from Kerala, India. After 30 years of teaching, I learned Python and created my first real project!

**What it does:** HRA (House Rent Allowance) exemption calculator following Indian Income Tax Act Section 10(13A)

**Features:**
✅ Python CLI version
✅ Web version (HTML/JS) - no installation needed
✅ Handles all tax rules correctly
✅ Free for all students

Please check it and give me your feedback. I will improve it as per your needs. Thank you ✨ Made this with love for B.Com/MBA students but anyone can use it! https://github.com/rainytech/hra-calculator submitted by /u/Motor-Sentence582 [link] [comments]
- ez-optimize: use scipy.optimize with keywords, e.g. x0={'x': 1, 'y': 2}, and other QoL improvements by /u/qthedoc (Python) on February 15, 2026 at 12:38 am
https://github.com/qthedoc/ez-optimize

What My Project Does: Hey r/Python! I built ez-optimize, a more intuitive front-end for scipy.optimize that simplifies optimization with features like:
- keyword-based parameter definitions (e.g., x0={'x': 1, 'y': 2})
- easy switching between minimization and maximization (direction='max')

Target Audience: Engineers, scientists, ML researchers, anyone needing quick analysis and optimization.

Comparison:

Keyword-Based Optimization (e.g., x0={'x': 1, 'y': 2}): By default, optimization uses arrays, x0=[1, 2]. However, sometimes it's more intuitive to use named parameters, x0={'x': 1, 'y': 2}. ez-optimize allows you to define parameters as dictionaries; under the hood it automatically flattens the parameters (and wraps your function) for SciPy, then restores the original structure in the results. Keyword-based optimization is especially useful in physical simulations where parameters have meaningful names representing physical quantities.

Switch to Maximize with direction='max': By default, optimization minimizes the objective function. To maximize, you typically need to write a negated version of your function. With ez-optimize, simply set direction='max' and the library will automatically negate your function under the hood.

Example: Minimizing with Keyword-Based Parameters

```python
from ez_optimize import minimize

def rosenbrock(x, y, a=1, b=100):
    return (a - x)**2 + b * (y - x**2)**2

x0 = {'x': 1.3, 'y': 0.7}
result = minimize(rosenbrock, x0, method='trust-constr')
print(f"Optimal x: {result.x}")        # Optimal x: {'x': 1.0, 'y': 1.0}
print(f"Optimal value: {result.fun}")  # Optimal value: 0.0
```

submitted by /u/qthedoc [link] [comments]
- Sunday Daily Thread: What's everyone working on this week? by /u/AutoModerator (Python) on February 15, 2026 at 12:00 am
Weekly Thread: What's Everyone Working On This Week? 🛠️

Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!

How it Works:
- Show & Tell: Share your current projects, completed works, or future ideas.
- Discuss: Get feedback, find collaborators, or just chat about your project.
- Inspire: Your project might inspire someone else, just as you might get inspired here.

Guidelines:
- Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
- Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.

Example Shares:
- Machine Learning Model: Working on a ML model to predict stock prices. Just cracked a 90% accuracy rate!
- Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
- Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!

Let's build and grow together! Share your journey and learn from others. Happy coding! 🌟 submitted by /u/AutoModerator [link] [comments]
- Skylos: Dead code and vulnerabilities detection (update with updated benchmarks) by /u/papersashimi (Python) on February 14, 2026 at 11:45 pm
Hey! I was here a week back; we have released new updates for v3.3.0. We recently released an MCP server and a CICD agent (a video tutorial for this will be coming soon). For those who missed the previous post, I just wanted to mention that we have created an updated benchmark against vulture (it includes both static and hybrid benchmarks). For the uninitiated, Skylos is a local-first SAST tool for Python codebases. If you've already read this, skip to the bottom where the benchmark link is.

What my project does

Skylos offers static and dynamic analysis:
- dead code (unused functions/classes/imports; the CLI displays confidence scoring)
- security patterns (taint-flow style checks, secrets, hallucination etc. We have also expanded the list to include MCP security vulnerabilities)
- quality checks (complexity, nesting, function size, etc.)
- pytest hygiene (unused pytest fixtures etc.)
- agentic framework (uses a hybrid of static + agent analysis to reduce false positives)
- --trace to catch dynamic code

Skylos v3.3.0 also has an MCP server: Skylos exposes its analysis capabilities as an MCP (Model Context Protocol) server, allowing AI assistants like Claude Desktop to scan your codebase directly. Instructions on how to set it up can be found in the repo's README.

The Benchmark: Skylos vs. Vulture

We created a realistic FastAPI-style repo with intentional dead code and tricky dynamic patterns (like `getattr()`, `globals()`, and `__init_subclass__`). Here is the summary of our results (Confidence=10):

| Configuration | Precision | Recall | False Positives | Speed |
|---|---|---|---|---|
| Vulture | 38.5% | 75.8% | 14 | 0.1s |
| Skylos (Static) | 52.5% | 93.9% | 13 | 1.8s |
| Skylos (Hybrid) | 67.4% | 93.9% | 2 | ~400s |

Hybrid mode eliminates noise: we saw an 84.6% reduction in false positives (dropping from 13 to 2). The trade-off: accuracy costs time. Hybrid mode is significantly slower because it verifies "zero-reference" findings with an LLM layer. It's not for your pre-commit hook, but it's better for a deep-clean report.

Quick start (how to use)

There is a demo video here: https://www.youtube.com/watch?v=BjMdSP2zZl8
- Install: pip install skylos
- Run a basic scan (essentially just dead code): skylos .
- Run security + secrets + quality: skylos . --secrets --danger --quality
- Use runtime tracing to reduce dynamic FPs: skylos . --trace
- Gate your repo in CI: skylos . --danger --gate --strict
- Upload a report to skylos.dev (you will be prompted for an API key etc.): skylos . --danger --upload
- Use agents: skylos agent analyze .

Target Audience

Everyone working on Python.

Comparison (UPDATED)

Our closest comparison is vulture. We built a benchmark and tried to make it as realistic as possible, mimicking what a real repo might look like. The logic and explanation behind the benchmark can be found here: https://github.com/duriantaco/skylos-demo/blob/main/README.md

Links / where to follow up
- Website: https://skylos.dev
- Discord (support/bugs/feature requests): https://discord.gg/Ftn9t9tErf
- Repo: https://github.com/duriantaco/skylos
- Demo Repo: https://github.com/duriantaco/skylos-demo
- Docs: https://docs.skylos.dev/ (we're currently updating this so there might be some downtime)

Happy to take any constructive criticism/feedback. We take all your feedback seriously and will continue to improve our engine. The reason we have not expanded into other languages is that we're trying to reduce false positives as much as possible, and we can only do that with your help. We'd love for you to try out the stuff above. If you try it and it breaks or is annoying, let us know via Discord — we recently created the channel for more real-time feedback. And give it a star if you found it useful. Last but not least, if you'd like your repo cleaned, drop us a message on Discord or email us at founder@skylos.dev. We'll be happy to work together with you.
Thank you! submitted by /u/papersashimi [link] [comments]
- [Project] Built auth for Dagster using monkey-patching, GraphQL AST parsing, and resilient UI inject by /u/maltzsama (Python) on February 14, 2026 at 10:20 pm
TL;DR: Made a Python package that adds authentication to Dagster (data orchestration tool) without touching its source code. Uses Starlette middleware monkey-patching, official GraphQL parser for RBAC, Peewee ORM, and defensive UI injection with fallbacks. ~3k lines, Apache 2.0, beta but production-tested. The Problem Dagster is a data orchestration framework (think Airflow alternative) that ships with zero authentication. Anyone who can reach your webserver is admin. I needed auth for a self-hosted deployment but didn't want to: Fork Dagster Wait for official OSS auth Rebuild everything when Dagster updates Solution: Wrapper that monkey-patches Dagster's internals and injects auth without modifying their code. Architecture Overview # dagster_authkit/core/patch.py (simplified) def apply_patches(): """Monkey-patch Dagster webserver to inject auth""" import dagster_webserver.webserver as webserver_module from starlette.middleware import Middleware # PATCH 1: Inject middleware original_build_middleware = webserver_module.DagsterWebserver.build_middleware def patched_build_middleware(self): middlewares = original_build_middleware(self) middlewares.insert(0, Middleware(DagsterAuthMiddleware)) return middlewares webserver_module.DagsterWebserver.build_middleware = patched_build_middleware # PATCH 2: Add auth routes original_build_routes = webserver_module.DagsterWebserver.build_routes def patched_build_routes(self): routes_list = original_build_routes(self) auth_routes = create_auth_routes() # /auth/login, /auth/logout routes_list.insert(0, Mount("/auth", routes=auth_routes.routes)) return routes_list webserver_module.DagsterWebserver.build_routes = patched_build_routes Why this works: Dagster uses Starlette internally. I'm just inserting my middleware before theirs runs. GraphQL RBAC - The Hard Part Dagster's entire API is GraphQL. 
I need to:

- Detect mutations (vs. queries)
- Extract mutation names
- Block based on user role

**First Attempt: Regex (Failed)**

```python
import re

# ❌ BROKEN - false positives from comments
def has_mutation_regex(query: str) -> bool:
    return bool(re.search(r'\bmutation\b', query, re.IGNORECASE))

# This triggers:
query = """
# This is a mutation example
query { assets { name } }
"""
# Returns True even though it's a query, not a mutation
```

**Real Solution: Official GraphQL Parser**

```python
# dagster_authkit/core/graphql_analyzer.py (ACTUAL CODE)
from graphql import parse, OperationDefinitionNode, FieldNode
from typing import Set


class GraphQLMutationAnalyzer:
    @staticmethod
    def extract_mutation_names(query: str) -> Set[str]:
        """Extract ALL mutation field names using AST parsing."""
        try:
            ast = parse(query)  # Official graphql-core parser
            mutations = set()
            for definition in ast.definitions:
                if not isinstance(definition, OperationDefinitionNode):
                    continue
                if definition.operation.value != "mutation":
                    continue
                # Extract top-level mutation fields
                for selection in definition.selection_set.selections:
                    if isinstance(selection, FieldNode):
                        mutations.add(selection.name.value)
            return mutations
        except Exception as e:
            # logger is the module-level logging.getLogger(__name__)
            logger.warning(f"Failed to parse GraphQL: {e}")
            return {"__UNPARSEABLE_QUERY__"}  # Deny unparseable queries
```

**RBAC Permission Mapping**

```python
# dagster_authkit/auth/backends/base.py (ACTUAL CODE)
class RolePermissions:
    """Maps GraphQL mutations to required roles."""

    LAUNCHER_MUTATIONS = frozenset({
        "launchRun",
        "terminateRun",
        "deleteRun",
    })
    EDITOR_MUTATIONS = frozenset({
        "startSchedule",
        "stopSensor",
        "wipeAssets",
        "launchPartitionBackfill",
    })
    ADMIN_MUTATIONS = frozenset({
        "reloadWorkspace",
        "shutdownRepositoryLocation",
    })

    @classmethod
    def get_required_role(cls, mutation_name: str) -> Optional[Role]:
        """Get the minimum role needed for a mutation."""
        if mutation_name in cls.LAUNCHER_MUTATIONS:
            return Role.LAUNCHER
        elif mutation_name in cls.EDITOR_MUTATIONS:
            return Role.EDITOR
        elif mutation_name in cls.ADMIN_MUTATIONS:
            return Role.ADMIN
        return None  # Public mutations
```

**Middleware Integration**

```python
# dagster_authkit/core/middleware.py (REAL CODE, SIMPLIFIED)
class DagsterAuthMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        # Get the authenticated user (session or proxy headers)
        user = self._get_authenticated_user(request)
        if not user:
            return RedirectResponse("/auth/login")

        # GraphQL RBAC check
        if request.url.path == "/graphql" and request.method == "POST":
            body = await request.body()
            graphql_data = json.loads(body.decode("utf-8"))
            query_str = graphql_data.get("query", "")
            mutation_names = GraphQLMutationAnalyzer.extract_mutation_names(query_str)
            for mutation_name in mutation_names:
                required_role = RolePermissions.get_required_role(mutation_name)
                if required_role and not user.can(required_role):
                    # Return a Dagster-style error
                    return Response(
                        content=json.dumps({
                            "data": {mutation_name: {
                                "__typename": "PythonError",
                                "message": f"Access Denied: {required_role.name} required"
                            }}
                        }),
                        status_code=200,  # GraphQL always returns 200
                        media_type="application/json"
                    )

        # Attach the user to the request
        request.state.user = user
        return await call_next(request)
```

**Multi-Backend System (Entry Points)**

Instead of hardcoding backends, I use setuptools entry points:

```toml
# pyproject.toml (ACTUAL CONFIG)
[project.entry-points."dagster_auth.backends"]
dummy = "dagster_authkit.auth.backends.dummy:DummyAuthBackend"
sql = "dagster_authkit.auth.backends.sql:PeeweeAuthBackend"
ldap = "dagster_authkit.auth.backends.ldap:LDAPAuthBackend"
proxy = "dagster_authkit.auth.backends.proxy:ProxyAuthBackend"
```

```python
# dagster_authkit/core/registry.py (ACTUAL CODE)
from importlib.metadata import entry_points


class BackendRegistry:
    @classmethod
    def discover_backends(cls):
        """Auto-discover backends via entry points."""
        discovered = entry_points(group="dagster_auth.backends")
        for entry_point in discovered:
            backend_class = entry_point.load()
            cls._backends[entry_point.name] = backend_class
            logger.info(f"✅ Registered backend: {entry_point.name}")
```
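The middleware above calls `user.can(required_role)`, which the post doesn't show. Assuming roles form a simple hierarchy, one clean way to implement it is an `IntEnum` comparison. This is a hypothetical sketch, not dagster-authkit's actual `Role`/`User` classes:

```python
from enum import IntEnum


class Role(IntEnum):
    VIEWER = 0
    LAUNCHER = 1
    EDITOR = 2
    ADMIN = 3


class User:
    def __init__(self, username: str, role: Role):
        self.username = username
        self.role = role

    def can(self, required: Role) -> bool:
        # A user may perform any action at or below their own level.
        return self.role >= required


editor = User("alice", Role.EDITOR)
print(editor.can(Role.LAUNCHER))  # True  (EDITOR outranks LAUNCHER)
print(editor.can(Role.ADMIN))     # False (e.g. reloadWorkspace would be denied)
```

An `IntEnum` keeps the comparison explicit and cheap, and adding a new role is a one-line change as long as the ordering stays monotonic.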
**Benefit:** users can create custom backends without modifying dagster-authkit.

**Proxy Auth Backend (Authelia Integration)**

The newest backend (v0.3.0) reads user info from HTTP headers set by upstream proxies:

```python
# dagster_authkit/auth/backends/proxy.py (REAL CODE)
class ProxyAuthBackend(AuthBackend):
    """Reads auth from Authelia / OAuth2 Proxy headers."""

    def get_user_from_headers(self, headers: Dict[str, str]) -> Optional[AuthUser]:
        """Extract a user from HTTP headers."""
        username = headers.get(self.user_header)  # Remote-User
        if not username:
            return None

        # Parse groups (handles multiple formats)
        groups_raw = headers.get(self.groups_header, "")  # Remote-Groups
        groups = self._parse_groups_header(groups_raw)

        # Map groups to a role
        role = self._determine_role_from_groups(groups)

        return AuthUser(
            username=username,
            role=role,
            email=headers.get(self.email_header, ""),
            full_name=headers.get(self.name_header, "") or username
        )

    def _parse_groups_header(self, groups_raw: str) -> List[str]:
        """Robust group parsing with multiple fallback strategies."""
        if not groups_raw:
            return []

        # Strategy 1: JSON array
        if groups_raw.startswith("["):
            try:
                return json.loads(groups_raw)
            except json.JSONDecodeError:
                pass

        # Strategy 2: LDAP DNs (preserve internal commas)
        if "ou=" in groups_raw.lower() or "dc=" in groups_raw.lower():
            # Split on semicolon or pipe, NOT comma
            for delimiter in (";", "|"):
                if delimiter in groups_raw:
                    return [g.strip() for g in groups_raw.split(delimiter)]
            return [groups_raw]  # Single LDAP DN

        # Strategy 3: CSV
        return [g.strip() for g in groups_raw.split(",")]
```

**Why proxy mode?** Authelia already handles SSO, LDAP, OAuth, and 2FA. I just do RBAC.

**UI Injection - Resilient Strategy**

Injecting into Dagster's React app is fragile (CSS selectors change). Solution: multiple fallback selectors plus a safe mode.
```javascript
// dagster_authkit/utils/templates.py -> JavaScript (REAL CODE)
// `user`, `retryCount`, and `MAX_RETRIES` are defined earlier in the template.
function injectUserMenu() {
  // Try multiple selectors (version compatibility)
  const selectors = [
    'div[class*="MainNavigation_group"]', // Dagster 1.12+
    'div[class*="NavigationGroup"]',      // Dagster 1.10-1.11
    'nav[class*="Navigation"]',           // Generic fallback
    'div[role="navigation"]',             // Accessibility fallback
  ];

  let sidebarGroup = null;
  for (const selector of selectors) {
    const elements = document.querySelectorAll(selector);
    if (elements.length > 0) {
      sidebarGroup = elements[elements.length - 1];
      break;
    }
  }

  if (!sidebarGroup) {
    if (retryCount < MAX_RETRIES) {
      retryCount++;
      setTimeout(injectUserMenu, 500); // Retry
      return;
    }
    // SAFE MODE: top-right corner fallback
    activateSafeMode();
    return;
  }

  // Clone native Dagster button classes for styling
  const itemButton = sidebarGroup.querySelector('button[class*="itemButton"]');
  createUserMenu(sidebarGroup, itemButton.className);
}

function activateSafeMode() {
  // Fallback: top-right corner menu
  const fallbackMenu = document.createElement('div');
  fallbackMenu.id = 'authkit-safe-mode-menu';
  fallbackMenu.innerHTML = `
    <div class="authkit-safe-avatar">${user.initial}</div>
    <div class="authkit-safe-info">${user.username} (${user.role})</div>
    <a href="/auth/logout">Sign Out</a>
  `;
  document.body.appendChild(fallbackMenu);
}
```

Safe mode saved production deployments when the Dagster 1.11 → 1.12 upgrade changed CSS classes.
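Returning to the proxy backend for a moment: the three group-parsing fallback strategies are easy to exercise in isolation. Below is a standalone re-implementation (the hypothetical free function `parse_groups` mirrors the `_parse_groups_header` method above) with one sample Remote-Groups header per strategy:

```python
import json
from typing import List


def parse_groups(raw: str) -> List[str]:
    """Standalone sketch of the three fallback strategies."""
    if not raw:
        return []
    if raw.startswith("["):  # Strategy 1: JSON array
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            pass
    if "ou=" in raw.lower() or "dc=" in raw.lower():  # Strategy 2: LDAP DNs
        for delimiter in (";", "|"):
            if delimiter in raw:
                return [g.strip() for g in raw.split(delimiter)]
        return [raw]  # single LDAP DN, commas preserved
    return [g.strip() for g in raw.split(",")]  # Strategy 3: CSV


print(parse_groups('["admins", "devs"]'))             # ['admins', 'devs']
print(parse_groups("cn=a,ou=x,dc=y;cn=b,ou=x,dc=y"))  # two DNs, split on ';'
print(parse_groups("admins, devs"))                   # ['admins', 'devs']
```

Note how the LDAP branch is checked before CSV: a DN like `cn=a,ou=x,dc=y` contains commas that a naive CSV split would shred.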
**Session Management (Redis vs. Stateless)**

```python
# dagster_authkit/auth/session.py (ACTUAL CODE)
class SessionManager:
    def __init__(self):
        redis_url = os.getenv("DAGSTER_AUTH_REDIS_URL")
        if redis_url:
            self.backend = RedisBackend(redis_url, max_age=86400)
        else:
            # secret_key is loaded from configuration elsewhere
            self.backend = CookieBackend(secret_key, max_age=86400)

    def revoke_all(self, username: str) -> int:
        """Revoke all sessions for a user (password change, etc.)."""
        return self.backend.revoke_all(username)


# Redis backend (distributed)
class RedisBackend:
    def create(self, user_data: Dict) -> str:
        token = secrets.token_urlsafe(32)
        self.client.setex(f"sess:{token}", self.max_age, json.dumps(user_data))
        self.client.sadd(f"user_sess:{user_data['username']}", token)
        return token

    def revoke_all(self, username: str) -> int:
        """Atomic multi-session revocation."""
        tokens = self.client.smembers(f"user_sess:{username}")
        for t in tokens:
            self.client.delete(f"sess:{t}")
        return self.client.delete(f"user_sess:{username}")
```

**Critical for password changes:** you must invalidate ALL sessions across all pods.

**Compatibility Detection**

```python
# dagster_authkit/core/detection_layer.py (ACTUAL CODE)
def verify_dagster_api_compatibility():
    """Check whether Dagster's internal API changed."""
    try:
        import dagster_webserver.webserver as webserver_module
    except ImportError:
        return False, "dagster_webserver not installed"

    # Check critical exports
    if not hasattr(webserver_module, "DagsterWebserver"):
        return False, "DagsterWebserver class not found - API changed!"

    # Check critical methods
    webserver_class = webserver_module.DagsterWebserver
    required_methods = ["build_middleware", "build_routes"]
    missing = [m for m in required_methods if not hasattr(webserver_class, m)]
    if missing:
        return False, f"Missing methods: {missing} - Dagster API changed!"

    return True, None
```

Runs on startup. Prevents silent failures when Dagster updates.

**Lessons Learned**

1. **Monkey-patching requires extensive fallbacks.** CSS selectors break silently; safe mode was a production saver; version detection is critical.
2. **GraphQL AST > regex.** Comments don't break parsing; you get mutation names for granular permissions; better error messages.
3. **Entry points > hardcoded backends.** Users can add custom auth; no fork needed; clean plugin architecture.
4. **Redis is essential for multi-pod.** In-memory sessions don't work in K8s; atomic operations prevent race conditions; pub/sub enables global session revocation.
5. **Production users find edge cases.** LDAP DN parsing needed 3 fallback strategies; the safe-mode UI saved multiple deployments; proxy mode exists because users already had Authelia.

**Project Info**

- Python: 3.10+ (tested on 3.10, 3.11, 3.12)
- License: Apache 2.0
- Status: Beta (production-tested)

Core dependencies:

- dagster >= 1.10.0
- starlette >= 0.52.1
- peewee >= 3.19.0 (ORM)
- graphql-core >= 3.2.7 (AST parsing)
- itsdangerous >= 2.2.0 (session signing)

Optional:

- bcrypt >= 5.0.0 (password hashing)
- redis >= 7.1.0 (distributed sessions)
- ldap3 >= 2.9.0 (LDAP backend)
- psycopg2-binary / mysql-connector-python (database drivers)

**Questions for r/Python**

- Architecture: Is monkey-patching Starlette middleware a reasonable approach? Are there better alternatives to entry_points() for plugin discovery?
- GraphQL: Are there edge cases in mutation detection I'm missing? Better permission patterns than mutation-name mapping?
- Testing: How do you test monkey-patched code across versions? Strategies for CSS selector compatibility testing?
- Multi-tenancy: Peewee vs. SQLAlchemy for multi-backend abstraction? Better session backend architecture?

**Links**

- GitHub: https://github.com/maltzsama/dagster-authkit
- PyPI: https://pypi.org/project/dagster-authkit/
- Examples: full Docker Compose + Kubernetes setups

This started as "I need auth for Dagster this weekend" and turned into all of this. Open to feedback, criticism, and PRs.

Philosophy: Funciona > Perfeito (Works > Perfect)

submitted by /u/maltzsama [link] [comments]
- What would you want in a modern Python testing framework? by /u/SideQuest2026 (Python) on February 14, 2026 at 7:29 pm
Tools like uv and ruff have shown us what is possible when we take the time to rethink Python tooling, including implementing parts in Rust for speed. What would you, the community, want to see in a modern Python testing framework that could be a successor to the tried-and-true pytest? Some off-the-cuff ideas:

- Fast test discovery via Rust
- Explicit fixture imports (no auto-discovered conftest.py magic)
- Monorepo / workspace support
- Built-in parallel test execution
- Built-in asyncio support

submitted by /u/SideQuest2026 [link] [comments]
- Created this 10 min Video for people setting up their first Azure Function for Python using Model V2 by /u/ConsiderationBig4682 (Python) on February 14, 2026 at 7:27 pm
https://youtu.be/EmCjAEXjtm4?si=RvqnWR1BAAd4z3jG I recently had to set up Azure Functions with Python and realized many resources still point to the older programming model (including my own tutorial from 3 years back). Recorded a 10-minute video showing the end-to-end setup for the v2 model in case it saves someone else some time. Open to any feedback/criticism. Still learning and trying to make better technical walkthroughs as this is only my 4th or 5th video. submitted by /u/ConsiderationBig4682 [link] [comments]
- PLPM - Pacman-Like Package Manager. Alternative to WinGet on Windows by /u/wCupped (Python) on February 14, 2026 at 6:13 pm
What my project does and why I created it: My friend suggested I build this utility because the WinGet repositories don't carry that many apps. This is more than a hobby project for me, and any contribution to the utility's repository or to the apps repository would be really appreciated. The utility covers the main aspects of a package manager except removing packages; anyone who helps with that will also be appreciated <3

Target audience: For now it's more of a hobby project than something serious, but if you want to change that, you're welcome; all PRs are appreciated.

Why does it have the potential to be better than WinGet? It's written in Python, so it's easier to extend; its repositories are going to carry far more apps; and the utility is brand new, so it can make a good first issue. It also collects no telemetry, so you don't need to worry if you're paranoid 😉

Utility: https://github.com/wcupped/plpm-py Apps repository: https://github.com/wcupped/plpm-repo

submitted by /u/wCupped [link] [comments]
- Update: copier-astral now uses prek (faster pre-commit) + bug fixes from your feedback by /u/_ritwiktiwari (Python) on February 14, 2026 at 5:56 pm
Two weeks ago I shared copier-astral here and the response was incredible — thank you! The feedback helped me find and fix real bugs.

What's new since the last post:

- Fixed github_username not being set during installation
- Fixed a uv tool inject bug
- Fixed a missing ty dependency in generated projects
- Replaced pre-commit with prek — a faster Rust-based alternative
- Added pysentry-rs and semgrep to scan for potential vulnerabilities
- Now at 100+ stars

Quick reminder — what it does: scaffolds a complete Python project with modern tooling pre-configured:

- ruff for linting + formatting (replaces black, isort, flake8)
- ty for type checking (Astral's new Rust-based type checker)
- pytest + hatch for testing (including a multi-version matrix)
- MkDocs with the Material theme + mkdocstrings
- pre-commit hooks with prek
- GitHub Actions CI/CD
- Docker support
- Typer CLI scaffold (optional)
- git-cliff for auto-generated changelogs

Looking for contributors: 3 open issues if anyone wants to help out: https://github.com/ritwiktiwari/copier-astral/issues

Thanks again — happy to answer any questions!

Links: GitHub: https://github.com/ritwiktiwari/copier-astral Docs: https://ritwiktiwari.github.io/copier-astral/ Reddit: Previous Post

submitted by /u/_ritwiktiwari [link] [comments]
- MCGrad – Fix Machine Learning model calibration in subgroups (Open Source from Meta) by /u/TaXxER (Python) on February 14, 2026 at 5:21 pm
Hi r/python,

We're open-sourcing MCGrad, a Python machine learning package for multicalibration, developed and deployed in production at Meta. This work will also be presented at KDD 2026.

What My Project Does

The problem: a model can be globally calibrated yet significantly miscalibrated within identifiable subgroups or feature intersections (e.g., "users in region X on mobile devices"). Multicalibration aims to ensure reliability across such subpopulations. Our tutorial notebook illustrates this in detail.

The solution: MCGrad reformulates multicalibration using gradient boosted decision trees. At each step, a lightweight booster learns to predict the residual miscalibration of the base model given the features, automatically identifying and correcting miscalibrated regions. The method scales to large datasets and uses early stopping to preserve predictive performance.

Target Audience

MCGrad is meant for ML engineers and researchers in industry and academia.

Comparison

MCGrad offers key advantages over alternatives that make it ideal for production environments:

- Implicit subgroups: it enables multicalibration across a vast number of subgroups without needing them to be manually specified or maintained.
- Safety first: it features built-in safety mechanisms to prevent overfitting or degrading the base model's performance.
- Scalability: it relies on optimized ML libraries under the hood, making it fast and scalable for large datasets.

Links: Repo: https://github.com/facebookincubator/MCGrad/ Docs: https://mcgrad.dev/ Install via pip install mcgrad.

Happy to answer questions or discuss details.

submitted by /u/TaXxER [link] [comments]
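To make the residual-correction idea in the MCGrad post above concrete, here is a deliberately over-simplified sketch. This is not the MCGrad algorithm or API (which learns residuals with gradient boosted trees over feature intersections); it just shows the core notion of correcting each subgroup's predictions by that subgroup's mean residual:

```python
# Conceptual sketch: per-subgroup residual correction (NOT the MCGrad API).
from collections import defaultdict


def fit_residuals(preds, labels, groups):
    """Mean residual (label - prediction) within each subgroup."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        sums[g] += y - p
        counts[g] += 1
    return {g: sums[g] / counts[g] for g in sums}


def correct(preds, groups, residuals):
    """Shift each prediction by its subgroup's residual, clipped to [0, 1]."""
    return [min(1.0, max(0.0, p + residuals.get(g, 0.0)))
            for p, g in zip(preds, groups)]


# Globally calibrated (mean pred 0.5 == base rate 0.5), yet each
# subgroup is badly miscalibrated:
preds  = [0.5, 0.5, 0.5, 0.5]
labels = [1, 1, 0, 0]
groups = ["mobile", "mobile", "web", "web"]
res = fit_residuals(preds, labels, groups)  # mobile: +0.5, web: -0.5
print(correct(preds, groups, res))          # [1.0, 1.0, 0.0, 0.0]
```

The real method replaces the explicit `groups` list with regions a booster discovers on its own, which is what makes the subgroups "implicit".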
- I used LangGraph and Beautifulsoup to build a 3D-visualizing research agent by /u/FickleSwordfish8689 (Python) on February 14, 2026 at 2:14 pm
Hello everyone,

What My Project Does: I've been working on Prism AI, an open-source research agent. While there are many "wrappers" out there, I wanted to build a deep-research tool that uses Python to manage complex state transitions and recursive scraping, then outputs interactive 3D visualizations so you can "map" the research instead of just reading text.

The core of the project is a Python-based AI worker that handles the heavy lifting:

- LangGraph: used to manage the agent's state machine. I found that standard linear chains were failing for deep research, so I implemented a cyclical graph that allows the agent to self-correct, refine search queries, and verify findings.
- BeautifulSoup: for high-fidelity web scraping. It's optimized to bypass simple bot detection and extract clean markdown from dense research papers.
- Pydantic: all data extraction is strictly typed to ensure the downstream 3D visualizer (built with Go and Next.js) receives structured JSON without "hallucinated" keys.

The Python worker communicates via a task queue with a Go-based real-time server to stream these visualizations to the client.

Target Audience:

- Python devs interested in AI agent orchestration (LangGraph/LangChain).
- Researchers who want a self-hosted alternative to proprietary AI tools.
- Anyone looking for a practical example of a Python + Go microservice architecture.

Comparison: Compared to standard RAG (Retrieval-Augmented Generation) setups, Prism AI focuses on autonomous discovery. Most Python agents stop at a list of links; this agent uses recursive loops to build a relationship map. I've focused heavily on the Python side to solve the "looping" problem where agents get stuck in a research rut.

Other info: The project is fully open-source and easy to spin up with Docker. I'm especially curious what you think of my state management logic in the Python worker; handling persistent research states across multiple cycles was the hardest part of this build.
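The project uses Pydantic for the strict typing described above. As a stdlib-only illustration of the same "no hallucinated keys" idea, here is a hypothetical `ResearchNode` model (not Prism AI's actual schema) that rejects payloads with missing or unexpected keys before they reach a downstream consumer:

```python
from dataclasses import dataclass, fields


@dataclass
class ResearchNode:
    """Hypothetical node schema for illustration only."""
    title: str
    url: str
    depth: int


def strict_parse(data: dict) -> ResearchNode:
    """Reject payloads whose key set doesn't match the schema exactly."""
    expected = {f.name for f in fields(ResearchNode)}
    if set(data) != expected:
        raise ValueError(f"unexpected/missing keys: {set(data) ^ expected}")
    return ResearchNode(**data)


node = strict_parse({"title": "Example paper", "url": "https://example.org", "depth": 1})
print(node.depth)  # 1

# strict_parse({"title": "x", "url": "y", "depth": 1, "hallucinated": True})
# -> raises ValueError
```

Pydantic adds type coercion and validation on top of this, but the key-set check is the part that stops an LLM from smuggling invented fields into the visualizer.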
Project link: https://github.com/precious112/prism-ai-deep-research submitted by /u/FickleSwordfish8689 [link] [comments]
- Design feedback on an open-source finance library (API structure + scope) by /u/polarkyle19 (Python) on February 14, 2026 at 12:35 pm
Hey folks, I'm building an open-source Python library called InvestorMate focused on stock analysis (fundamentals, indicators, screening, portfolio analytics, optional AI layer). I'm at a point where I'd really value architectural feedback rather than feature ideas. Specifically:

• For a library like this, would you keep it opinionated and batteries-included, or split it into smaller modular subpackages?
• How do you decide when scope becomes too broad for a single PyPI package?
• What signals make a data/finance library feel production-ready to you (tests, API stability, versioning discipline, type hints, performance benchmarks, etc.)?
• For projects that sit "above" data providers (like yfinance), what builds trust in abstraction layers?

Roadmap here for context: https://github.com/siddartha19/investormate/blob/main/ROADMAP.md

Not looking for promotion. Genuinely trying to design this in a way that fits Python ecosystem norms and doesn't become an unmaintainable monolith. Would appreciate perspective from folks who've maintained or contributed to medium/large OSS libraries.

submitted by /u/polarkyle19 [link] [comments]