DjamgaMind: Audio Intelligence for the C-Suite (Energy, Healthcare, Finance)
Are you drowning in dense legal text? DjamgaMind is the new audio intelligence platform that turns 100-page healthcare or energy mandates into 5-minute executive briefings. Whether you are navigating Bill C-27 (Canada) or the CMS-0057-F Interoperability Rule (USA), our AI agents decode the liability so you don’t have to. 👉 Start your specialized audio briefing today at Djamgamind.com
AI Jobs and Careers
I wanted to share an exciting opportunity for those of you looking to advance your careers in the AI space. You know how rapidly the landscape is evolving, and finding the right fit can be a challenge. That's why I'm excited about Mercor – they're a platform specifically designed to connect top-tier AI talent with leading companies. Whether you're a data scientist, machine learning engineer, or something else entirely, Mercor can help you find your next big role. If you're ready to take the next step in your AI career, check them out through my referral link: https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1. It's a fantastic resource, and I encourage you to explore the opportunities they have available.
- Full Stack Engineer [$150K-$220K]
- Software Engineer, Tooling & AI Workflow, Contract [$90/hour]
- DevOps Engineer, India, Contract [$90/hour]
- More AI Job Opportunities here
| Job Title | Status | Pay |
|---|---|---|
| Full-Stack Engineer | Strong match, Full-time | $150K - $220K / year |
| Developer Experience and Productivity Engineer | Pre-qualified, Full-time | $160K - $300K / year |
| Software Engineer - Tooling & AI Workflows (Contract) | Contract | $90 / hour |
| DevOps Engineer (India) | Full-time | $20K - $50K / year |
| Senior Full-Stack Engineer | Full-time | $2.8K - $4K / week |
| Enterprise IT & Cloud Domain Expert - India | Contract | $20 - $30 / hour |
| Senior Software Engineer | Contract | $100 - $200 / hour |
| Senior Software Engineer | Pre-qualified, Full-time | $150K - $300K / year |
| Senior Full-Stack Engineer: Latin America | Full-time | $1.6K - $2.1K / week |
| Software Engineering Expert | Contract | $50 - $150 / hour |
| Generalist Video Annotators | Contract | $45 / hour |
| Generalist Writing Expert | Contract | $45 / hour |
| Editors, Fact Checkers, & Data Quality Reviewers | Contract | $50 - $60 / hour |
| Multilingual Expert | Contract | $54 / hour |
| Mathematics Expert (PhD) | Contract | $60 - $80 / hour |
| Software Engineer - India | Contract | $20 - $45 / hour |
| Physics Expert (PhD) | Contract | $60 - $80 / hour |
| Finance Expert | Contract | $150 / hour |
| Designers | Contract | $50 - $70 / hour |
| Chemistry Expert (PhD) | Contract | $60 - $80 / hour |
How do you make a Python loop faster?
Programmers are always looking for ways to make their code more efficient. Python is a high-level language widely used by developers and software engineers, known for its readability and ease of use. One downside, however, is that its interpreted loops can be slow, which becomes a problem when you need to process large amounts of data. There are several ways to make Python loops faster. One is to move the loop into compiled code, for example via a C extension or Cython. Another is to use an optimized library such as NumPy. Finally, you can vectorize your code: express the work as whole-array operations that run in optimized native code (and, with the right libraries, on a GPU or other parallel hardware) instead of one interpreted iteration at a time. By using these techniques, you can significantly speed up your Python code.
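As a sketch of the NumPy approach (the function names here are illustrative, not from the original article), the same computation can be written as a pure-Python loop or as a single vectorized call:

```python
import numpy as np

def sum_of_squares_loop(values):
    # Pure-Python loop: every iteration executes interpreter bytecode.
    total = 0.0
    for v in values:
        total += v * v
    return total

def sum_of_squares_numpy(arr):
    # Vectorized: the loop happens inside NumPy's compiled C code.
    return float(np.dot(arr, arr))

data = list(range(10_000))
arr = np.array(data, dtype=np.float64)

# Both give the same answer; the vectorized form is typically far faster
# on large inputs, though the exact speedup depends on the workload.
assert sum_of_squares_loop(data) == sum_of_squares_numpy(arr)
```

The vectorized version also avoids creating intermediate Python objects for each element, which is usually where most of the loop overhead comes from.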
According to Vladislav Zorov: if NumPy or something similar is not an option, try to use list comprehension expressions where possible. Those are handled by the C code of the Python interpreter instead of looping in Python bytecode. It is basically the same idea as the NumPy solution: you just don't want your hot loop running in interpreted Python.
Example (Python 3):
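The original example appears to have been lost in formatting; here is a minimal sketch of the list-comprehension idea:

```python
# Loop version: the append call runs as interpreted bytecode on every pass.
squares_loop = []
for n in range(1000):
    squares_loop.append(n * n)

# List comprehension: the looping machinery runs in the interpreter's C code,
# so the same work is done with less per-iteration overhead.
squares_comp = [n * n for n in range(1000)]

assert squares_loop == squares_comp
```

You can verify the speed difference yourself with the `timeit` module; the comprehension is usually noticeably faster, though the margin varies by workload.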

Python list traversal tip: to keep track of indices and values inside a loop, instead of this:
for i in range(len(l)): x = l[i]
use this:
for i, x in enumerate(l): …
It is roughly twice as fast, and the code reads better.
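A runnable version of that tip (timings are machine-dependent, so the exact speedup will vary):

```python
from timeit import timeit

l = list(range(1_000))

def index_loop():
    # Manual indexing: an extra l[i] lookup on every iteration.
    out = []
    for i in range(len(l)):
        x = l[i]
        out.append((i, x))
    return out

def enumerate_loop():
    # enumerate yields (index, value) pairs directly.
    out = []
    for i, x in enumerate(l):
        out.append((i, x))
    return out

# Both produce identical results; enumerate just skips the repeated indexing.
assert index_loop() == enumerate_loop()

print(timeit(index_loop, number=100))
print(timeit(enumerate_loop, number=100))
```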
Finally, developers can improve performance with caching: by storing values that are computed inside a loop, programmers avoid recalculating them on each pass. Taken together, these steps can make Python code significantly more efficient.
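A sketch of the caching idea using the standard library's functools.lru_cache (the `expensive` function is a hypothetical stand-in for real work):

```python
from functools import lru_cache

CALLS = 0

@lru_cache(maxsize=None)
def expensive(n):
    # Stand-in for a costly computation; the counter tracks real invocations.
    global CALLS
    CALLS += 1
    return n * n

# Inside a loop, repeated arguments hit the cache instead of recomputing.
results = [expensive(n % 10) for n in range(1000)]

assert len(results) == 1000
assert CALLS == 10  # only the ten distinct inputs were actually computed
```

The same principle applies manually: hoist any value that does not change between iterations out of the loop, or stash computed results in a dict keyed by input.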
Very Important: Don’t worry about code efficiency until you find yourself needing to worry about code efficiency.
The place where you think about efficiency is within the logic of your implementations.
This is where “big O” discussions come into play. If you aren’t familiar, here is a link on the topic
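As a quick illustration of why big-O matters, using nothing beyond built-in types: membership tests on a list are O(n), while on a set they are O(1) on average, even though the two lines of code look identical.

```python
haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

# Same "x in container" syntax, very different costs:
assert 99_999 in haystack_list  # O(n): scans the list element by element
assert 99_999 in haystack_set   # O(1) average: a single hash lookup
assert 100_000 not in haystack_set
```

When a loop performs a membership test on every iteration, switching the container from a list to a set often matters far more than any micro-optimization inside the loop body.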
What are the top 10 Wonders of computing and software engineering?

Do you want to learn Python? We found 5 online coding courses for beginners.
Python Coding Bestsellers on Amazon
AI-Powered Professional Certification Quiz Platform
Web | iOS | Android | Windows
Are you passionate about AI and looking for your next career challenge? In the fast-evolving world of artificial intelligence, connecting with the right opportunities can make all the difference. We're excited to recommend Mercor, a premier platform dedicated to bridging the gap between exceptional AI professionals and innovative companies.
Whether you're seeking roles in machine learning, data science, or other cutting-edge AI fields, Mercor offers a streamlined path to your ideal position. Explore the possibilities and accelerate your AI career by visiting Mercor through our exclusive referral link:
Find Your AI Dream Job on Mercor
Your next big opportunity in AI could be just a click away!
https://amzn.to/3s3KXc3
AI-Powered Jobs Interview Warmup For Job Seekers

⚽️ Comparative Analysis: Top Calgary Amateur Soccer Clubs – Outdoor 2025 Season (Kids' Programs by Age Group)
The Best Python Coding and Programming Bootcamps
We’ve also included a scholarship resource with more than 40 unique scholarships to provide additional financial support.
Python Coding Breaking News
- Pandas 3.0.0 is there by /u/Deux87 (Python) on January 21, 2026 at 4:16 pm
So finally the big jump to 3 has been done. Has anyone already tested it in beta/alpha? Any major breaking changes? Just wanted to collect as much info as possible 😀 submitted by /u/Deux87
- A refactor-safety tool for Python projects – Arbor v1.4 adds a GUI by /u/AccomplishedWay3558 (Python) on January 21, 2026 at 3:30 pm
Arbor is a static impact-analysis tool for Python. It builds a call/import graph so you can see what breaks *before* a refactor — especially in large, dynamic codebases where types/tests don’t always catch structural changes.
What it does:
- Indexes Python files and builds a dependency graph
- Shows direct + transitive callers of any function/class
- Highlights risky changes with confidence levels
- Optional GUI for quick inspection
Target audience: Teams working in medium-to-large Python codebases (Django/FastAPI/data pipelines) who want fast, structural dependency insight before refactoring.
Comparison: Unlike test suites (behavior) or JetBrains inspections (local), Arbor gives a whole-project graph view and explains ripple effects across files.
Repo: https://github.com/Anandb71/arbor
Would appreciate feedback from Python users on how well it handles your project structure. submitted by /u/AccomplishedWay3558
- dltype v0.9.0 now with jax support by /u/onyx-zero-software (Python) on January 21, 2026 at 2:14 pm
Hey all, just wanted to give a shout out to my project dltype. I posted on here about it a while back and have made a number of improvements.
What my project does: dltype is a lightweight runtime shape and datatype checking library that supports numpy arrays, torch tensors, and now Jax arrays. It supports function arguments, returns, dataclasses, named tuples, and pydantic models out of the box. Just annotate your type and you're good to go!
Example:
```python
@dltype.dltyped()
def func(
    arr: Annotated[jax.Array, dltype.FloatTensor["batch c=2 3"]],
) -> Annotated[jax.Array, dltype.FloatTensor["3 c batch"]]:
    return arr.transpose(2, 1, 0)

func(jax.numpy.zeros((1, 2, 3), dtype=np.float32))
# raises dltype.DLTypeShapeError
func(jax.numpy.zeros((1, 2, 4), dtype=np.float32))
```
Source code link: https://github.com/stackav-oss/dltype
Let me know what you think! I'm mostly just maintaining this in my free time but if you find a feature you want feel free to file a ticket. submitted by /u/onyx-zero-software
- Deb Nicholson of PSF on Funding Python's Future by /u/WalrusOk4591 (Python) on January 21, 2026 at 2:03 pm
In this talk, Deb Nicholson, Executive Director of the r/python Software Foundation, explores what it takes to fund Python’s future amid explosive growth, economic uncertainty, and rising demands on r/opensource infrastructure. She explains why traditional nonprofit funding models no longer fit tech foundations, how corporate relationships and services are evolving, and why community, security, and sustainability must move together. The discussion highlights new funding approaches, the impact of layoffs and inflation, and why sustained investment is essential to keeping Python—and its global community—healthy and thriving. https://youtu.be/leykbs1uz48 submitted by /u/WalrusOk4591 [link] [comments]
- chithi-dev, an encrypted file sharing platform with a zero-trust server mindset by /u/BasePlate_Admin (Python) on January 21, 2026 at 1:57 pm
I kept on running into an issue where i needed to host some files on my server and let others download at their own time, but the files should not exist on the server for an indefinite amount of time. So i built an encrypted file/folder sharing platform with automatic file eviction logic. What My Project Does: Allows users to upload files without sign up. Automatic File eviction from the s3 (rustfs) storage. Client side encryption, the server is just a dumb interface between frontend and the s3 storage. Comparison: Customizable limits from the frontend ui (which is not present in firefox send) Future support for CLI and TUI Anything the community desires Target Audience People interested in hosting their own instance of a private file/folder sharing platform People that wants to self-host a more customizable version of firefox send or its Tim Visée fork Check it out at: https://chithi.dev Github Link: https://github.com/chithi-dev/chithi Admin UI Pictures: Image 1 Image 2 Image 3 Please do note that the public server is running from a core 2 duo with 4gb RAM with a 250Mbps uplink with a 50GB sata2 ssd(quoted by rustfs), shared with my home connection that is running a lot of services. Thanks for reading! Happy to have any kind of feedbacks 🙂 For anyone wondering about some fancy fastapi things i implemented in the project - Global Ratelimiter via Depends: Guards and decorator - Chunked S3 Uploads submitted by /u/BasePlate_Admin [link] [comments]
- Python venv problems by /u/gargamel9a (Python) on January 21, 2026 at 1:36 pm
In the folder ComfyUI_windows_portable\Wan2GP> I type:
python --version
and it returns: Python 3.12.10
Then:
python -m venv -h
returns: No module named venv
Any idea what is happening? submitted by /u/gargamel9a
- I built a runtime to sandbox untrusted Python code using WebAssembly by /u/Tall_Insect7119 (Python) on January 21, 2026 at 1:08 pm
Hi everyone, I've been working on a runtime to isolate untrusted Python code using WebAssembly sandboxes.
What My Project Does
Basically, it protects your host system from problems that untrusted code can cause. You can set CPU limits (with compute), memory, filesystem access, and retries for each part of your code. It works with simple decorators:
```python
from capsule import task

@task(
    name="analyze_data",
    compute="MEDIUM",
    ram="512mb",
    allowed_files=["./authorized-folder/"],
    timeout="30s",
    max_retries=1
)
def analyze_data(dataset: list) -> dict:
    """Process data in an isolated, resource-controlled environment."""
    # Your code runs safely in a WASM sandbox
    return {"processed": len(dataset), "status": "complete"}
```
Then run it with: capsule run main.py
Target Audience
This is for developers working with untrusted code. My main focus is AI agents since that's where it's most useful, but it might work for other scenarios too.
Comparison
A few weeks ago, I made a note on sandboxing untrusted python that explains this in detail. Except for containerization tools, not many simple local solutions exist. Most projects are focused on cloud-based solutions for many reasons. Since wasm is light and works on any OS, making it work locally feels natural. It's still quite early, so the main limitation is that libraries like numpy and pandas (which rely on C extensions) aren't supported yet.
Links
GitHub: https://github.com/mavdol/capsule
PyPI: pip install capsule-run
I’m curious to hear your thoughts on this approach! submitted by /u/Tall_Insect7119
- Convert your bear images into bear images: Bear Right Back by /u/JizosKasa (Python) on January 21, 2026 at 10:39 am
What My Project Does bearrb is a Python CLI tool that takes two images of bears (a source and a target) and transforms the source into a close approximation of the target by only rearranging pixel coordinates. No pixel values are modified, generated, blended, or recolored, every original pixel is preserved exactly as it was. The algorithm computes a permutation of pixel positions that minimizes the visual difference from the target image. repo: https://github.com/JoshuaKasa/bearrb Target Audience This is obviously a toy / experimental project, not meant for production image editing. It's mainly for: people interested in algorithmic image processing optimization under hard constraints weird/fun CLI tools math-y or computational art experiments Comparison Most image tools try to be useful and correct... bearrb does not. Instead of editing, filtering, generating, or enhancing images, bearrb just takes the pixels it already has and throws them around until the image vaguely resembles the other bear submitted by /u/JizosKasa [link] [comments]
- I really enjoy Python compared to other coding I've done by /u/JeffTheMasterr (Python) on January 21, 2026 at 7:48 am
I've been using Python for a while now and it's my main language. It is such a wonderful language. Guido had wonderful design choices in forcing whitespace to disallow curly braces and discouraging semicolons so much I almost didn't know they existed. There's even a synonym for beautiful; it's called pythonic. I will probably not use the absolute elephant dung that is NodeJS ever again. Everything that JavaScript has is in Python, but better. And whatever exists in JS but not Python is because it didn't need to exist in Python because it's unnecessary. For example, Flask is like Express but better. I'm not stuck in callback hell or dependency hell. The only cross-device difference I've faced is sys.exit working on Linux but not working on Windows. But in web development, you gotta face vendor prefixes, CSS resets, graceful degradation, some browsers not implementing standards right, etc. Somehow, Python is more cross platform than the web is. Hell, Python even runs on the web. I still love web development though, but writing Python code is just the pinnacle of wonderful computer experiences. This is the same language where you can make a website, a programming language, a video game (3d or 2d), a web scraper, a GUI, etc. Whenever I find myself limited, it is never implementation-wise. It's never because there aren't enough functions. I'm only limited by my (temporary) lack of ideas. Python makes me love programming more than I already did. But C, oh, C is cool but a bit limiting IMO because all the higher level stuff you take for granted like lists and whatever aren't there, and that wastes your time and kind of limits what you can do. C++ kinda solves this with the <vector> module but it is still a hassle implementing stuff compared to Python, where it's very simple to just define a list like [1,2,3] where you can easily add more elements without needing a fixed size. 
The C and C++ languages' limitations make me heavily appreciate what Python does, especially as it is coded in C. submitted by /u/JeffTheMasterr
- I’ve been working on an “information-aware compiler” for neural networks (with a Python CLI) by /u/Lower_Painting1036 (Python) on January 21, 2026 at 5:38 am
I’ve been working on a research project called Information Transform Compression (ITC), a compiler that treats neural networks as information systems, not parameter graphs, and optimises them by preserving information value rather than numerical fidelity. Github Repo: https://github.com/makangachristopher/Information-Transform-Compression What this project does. ITC is a compiler-style optimization system for neural networks that analyzes models through an information-theoretic lens and systematically rewrites them into smaller, faster, and more efficient forms while preserving their behavior. It parses networks into an intermediate representation, measures per-layer information content using entropy, sensitivity, and redundancy, and computes an Information Density Metric (IDM) to guide optimizations such as adaptive mixed-precision quantization, structural pruning, and architecture-aware compression. By focusing on compressing the least informative components rather than applying uniform rules, ITC achieves high compression ratios with predictable accuracy, producing deployable models without retraining or teacher models, and integrates seamlessly into standard PyTorch workflows for inference. The motivation: Most optimization tools in ML (quantization, pruning, distillation) treat all parameters as roughly equal. In practice, they aren’t. Some parts of a model carry a lot of meaning, others are largely redundant, but we don’t measure that explicitly. The idea: ITC treats a neural network as an information system, not just a parameter graph. Comparison with existing alternatives Other ML optimisation tools answer: “How many parameters can we remove?” ITC answers: “How much information does this part of the model need to preserve?” That distinction turns compression into a compiler problem, not a post-training hack. 
To do this, the system computes per-layer (and eventually per-substructure) measures of: Entropy (how diverse the information is), Sensitivity (how much output changes if it’s perturbed), Redundancy (overlap with other parts), and combines them into a single score called Information Density (IDM). That score then drives decisions like: Mixed-precision quantization (not uniform INT8), Structural pruning (not rule-based), Architecture-aware compression. Conceptually, it’s closer to a compiler pass than a post-training trick. Target Audience ITC is production-ready, even though it is not yet a drop-in production replacement for established toolchains. It is best suited for: Researchers exploring model compression, efficiency, or information theory Engineers working on edge deployment, constrained inference, or model optimization Developers interested in compiler-style approaches to ML systems The current implementation is: Stable and usable via CLI and Python API Suitable for experimentation, benchmarking, and integration into research pipelines Intended as a foundation for future production-grade tooling rather than a finished product submitted by /u/Lower_Painting1036 [link] [comments]
- Tracking 13,000 satellites in under 3 seconds from Python by /u/Frozen_Poseidon (Python) on January 20, 2026 at 10:06 pm
I've been working on https://github.com/ATTron/astroz, an orbital mechanics toolkit with Python bindings. The core is written in Zig with SIMD vectorization.
What My Project Does
astroz is an astrodynamics toolkit, including propagating satellite orbits using the SGP4 algorithm. It writes directly to numpy arrays, so there's very little overhead going between Python and Zig. You can propagate 13,000+ satellites in under 3 seconds. pip install astroz is all you need to get started!
Target Audience
Anyone doing orbital mechanics, satellite tracking, or space situational awareness work in Python. It's production-ready. I'm using it myself and the API is stable, though I'm still adding more functionality to the Python bindings.
Comparison
It's about 2-3x faster than python-sgp4, far and away the most popular sgp4 implementation being used:

| Library | Throughput |
|---|---|
| astroz | ~8M props/sec |
| python-sgp4 | ~3M props/sec |

Demo & Links
If you want to see it in action, I put together a live demo that visualizes all 13,000+ active satellites generated from Python in under 3 seconds: https://attron.github.io/astroz-demo/
Also wrote a blog post about how the SIMD stuff works under the hood if you're into that, but it's more Zig heavy than Python: https://atempleton.bearblog.dev/i-made-zig-compute-33-million-satellite-positions-in-3-seconds-no-gpu-required/
Repo: https://github.com/ATTron/astroz submitted by /u/Frozen_Poseidon
- hololinked: pythonic beginner friendly IoT and data acquisition runtime written fully in Python by /u/Positive-Thing6850 (Python) on January 20, 2026 at 7:21 pm
Hi guys, I would like to introduce the Python community to my pythonic IoT and data acquisition runtime fully written in python - https://github.com/hololinked-dev/hololinked What My Project Does You can expose your hardware on the network, in a systematic manner over multiple protocols for multiple use cases, with lesser code reusing familiar concepts found in web development. Characteristics Protocol and codec/serialization agnostic Extensible & Interoperable fast, uses all CPP or rust components by default pythonic & meant for pythonistas and beginners Rich JSON based standardized metadata reasonable learning curve FOSS Currently supported: Protocols - HTTP, MQTT & ZMQ Serialization/codecs - JSON, Message Pack Security - username-password (bcrypt, argon2), API key, OAuth OIDC flow is being added. Only HTTP supports security definitions. MQTT accepts broker username and password. W3C Web of Things metadata - https://www.w3.org/WoT/, https://www.w3.org/TR/wot-thing-description11/ Production grade logging with structlog Interactions with your devices properties (read-write values) actions (invokable/commandable) events (asynchronous i.e. pub-sub for alarms, data streaming etc.) finite state machine Target Audience One can use it in science or electronics labs, hobbies, home automation, remote data logging, web applications, data science, etc. I based the implementation on the work going on in physics labs over the last 10 years and my own web development work. If you are a beginner, if you go through examples, README and docs, you exactly do not need prior experience in IoT, at least to get started - Docs - https://docs.hololinked.dev/ Examples Recent - https://gitlab.com/hololinked/examples/servers/simulations Examples real world (Slightly outdated) - https://github.com/hololinked-dev/examples LLMs are yet to pick up my repo for training, so you will not have good luck there. Actively looking for feedback and contributors. 
Comparison The project transcends limitations of protocols or serializations (a general point of disagreement in different communities) and abstracts interactions with hardware above it. NOTE - Its not my idea, its being researched in academia for over a decade now. For those that understand, I have to tried to implement a hexagonal architecture to let the codebase evolve with newer technologies, although its somewhat inaccurate in the current state and needs improvement. But in a general sense, it remains extensible. I am not an expert in architecture, but I have tried my best. Developer info: broad testing suite already available, from unit to E2E Some notes are available here - https://docs.hololinked.dev/design/descriptors/ Issues - https://github.com/hololinked-dev/hololinked/issues There is also a scarcely populated Discord group if you are using the runtime and would like to discuss (info in readme) I have decided to try out supporting MCP, but I dont know yet how it will go, looking for backend developer familiar with both general web and agentic systems to contribute - https://github.com/hololinked-dev/hololinked/issues/159 Thanks for reading. submitted by /u/Positive-Thing6850 [link] [comments]
- Ty setup for pyright mimic by /u/ResponsibleIssue8983 (Python) on January 20, 2026 at 5:41 pm
Hi all, 🙌 Due to company restriction rules I cannot install pyright for type checking, but I can install ty (from Astral). Opening it in the terminal with the watch option is a great alternative, but I prefer strict type checking, which seems not to be the default for ty. 🍻 Do you have a similar config that achieves closely similar messages to pyright in strict mode? ❓❓ Many thanks for the help! 🫶 submitted by /u/ResponsibleIssue8983
- CondaNest: A native GTK4 GUI to manage and clean Conda environments by /u/eli_arad (Python) on January 20, 2026 at 4:57 pm
Source Code: https://github.com/aradar46/condanest Demo: CondaNest Demo What My Project Does CondaNest is a lightweight, native Linux GUI application designed to manage Conda and Mamba environments. I built this using Python and PyGObject (GTK4/Libadwaita) to provide a modern "Settings" style interface for the Conda ecosystem. Unlike the standard CLI, CondaNest visualizes the "physical" footprint of your environments. It interfaces with the conda or mamba binary via subprocess (using JSON output) to: Visualize Disk Usage: Instantly see how much space each environment occupies (calculated via du). Clean Up: Identify "stale" environments and a one-click cleaner for the package cache/tarballs. Manage Packages: Inspect installed packages with filters to distinguish between user-installed and dependencies. Handle Channels: Visual interface to toggle "Strict" vs "Flexible" channel priority to prevent solver freezes. Target Audience This tool is for Python Developers and Data Scientists on Linux who use Conda, Miniconda, or Miniforge. It is meant for production/daily use, specifically for: Users who have lost track of old environments and need to reclaim disk space. Developers who want a native GTK interface rather than a web-based one. Beginners who struggle with CLI commands for cloning or renaming environments. Comparison The main alternatives are Anaconda Navigator and the Conda CLI. Vs Anaconda Navigator: Navigator is a large, heavy application (often using Qt/WebEngine) that consumes significant memory and startup time. CondaNest is a native GTK4 app that launches instantly and respects the system theme (Dark Mode), making it much lighter (~50MB RAM vs Navigator's hundreds). Vs CLI: While the CLI is powerful, it is "blind" regarding disk usage. You cannot easily see which environment is bloating your drive without running separate shell commands. CondaNest wraps the CLI tools in a visual layer focused on maintenance and hygiene. 
submitted by /u/eli_arad
- plissken - Documentation generator for Rust/Python hybrid projects by /u/Fit-Presentation-591 (Python) on January 20, 2026 at 3:51 pm
What My Project Does I've got a few PyO3/Maturin projects and got frustrated that my Rust internals and Python API docs lived in completely separate worlds; making documentation manual and a general maintenance burden. So I built plissken. Point it at a project with Rust and Python code, and it parses both, extracts the docstrings, and renders unified documentation with cross-references between the two languages. Including taking pyo3 bindings and presenting it as the python api for documentation. It outputs to either MkDocs Material or mdBook, so it fits into existing workflows. (Should be trivial to add other static site generators if there’s a wish for them) cargo install plissken plissken render . -o docs -t mkdocs-material Target Audience : developers writing rust backed python libraries. Comparison : Think of sphinx autodoc, just not RST and not for raw python doc strings. GitHub: https://github.com/colliery-io/plissken I hope it's useful to someone else working on hybrid projects. submitted by /u/Fit-Presentation-591 [link] [comments]
- Network monitoring dashboard built with Flask, scapy, and nmap by /u/christiantorchia (Python) on January 20, 2026 at 3:49 pm
built a home network monitor as a learning project useful to anyone. - what it does: monitors local network in real time, tracks devices, bandwidth usage per device, and detects anomalies like new unknown devices or suspicious traffic patterns. - target audience: educational/homelab project, not production ready. built for learning networking fundamentals and packet analysis. runs on any linux machine, good for raspberry pi setups. - comparison: most alternatives are either commercial closed source like fing or heavyweight enterprise tools like ntopng. this is intentionally simple and focused on learning. everything runs locally, no cloud, full control. anomaly detection is basic rule based so you can actually understand what triggers alerts, not black box ml. tech stack used: flask for web backend + api scapy for packet sniffing / bandwidth monitoring python-nmap for device discovery sqlite for data persistence chart.js for visualization it was a good way to learn about networking protocols, concurrent packet processing, and building a full stack monitoring application from scratch. code + screenshots: https://github.com/torchiachristian/HomeNetMonitor feedback welcome, especially on the packet sniffing implementation and anomaly detection logic submitted by /u/christiantorchia [link] [comments]
- I built a local-first file metadata extraction library with a CLI (Python + Pydantic + Typer) by /u/AverageMechUser (Python) on January 20, 2026 at 3:25 pm
Hi all, I've been working on a project called Dorsal for the last 18 months. It's a way to make unstructured data more queryable and organized, without having to upload files to a cloud bucket or pay for remote compute (my CPU/GPU can almost always handle my workloads).
What my Project Does
Dorsal is a Python library and CLI for generating, validating and managing structured file metadata. It scans files locally to generate validated JSON-serializable records. I personally use it for deduplicating files, adding annotations (structured metadata records) and organizing files by tags.
- Core Extraction: Out of the box, it extracts "universal" metadata (Name, Hashes, Media Type; things any file has), as well as format-specific values (e.g., document page counts, video resolution, ebook titles/authors).
- The Toolkit: It provides the scaffolding to build and plug in your own complex extraction models (like OCR, classification, or entity extraction, where the input is a file). It handles the pipeline execution, dependency management, and file I/O for you.
- Strict Validation: It enforces Pydantic/JSON Schema on all outputs. If your custom extractor returns a float where a string is expected, Dorsal catches it before it pollutes your index.
Example: a simple custom model for checking PDF files for sensitive words:
```python
from dorsal import AnnotationModel
from dorsal.file.helpers import build_classification_record
from dorsal.file.preprocessing import extract_pdf_text

SENSITIVE_LABELS = {
    "Confidential": ["confidential", "do not distribute", "private"],
    "Internal": ["internal use only", "proprietary"],
}

class SensitiveDocumentScanner(AnnotationModel):
    id: str = "github:dorsalhub/annotation-model-examples"
    version: str = "1.0.0"

    def main(self) -> dict | None:
        try:
            pages = extract_pdf_text(self.file_path)
        except Exception as err:
            self.set_error(f"Failed to parse PDF: {err}")
            return None
        matches = set()
        for text in pages:
            text = text.lower()
            for label, keywords in SENSITIVE_LABELS.items():
                if any(k in text for k in keywords):
                    matches.add(label)
        return build_classification_record(
            labels=list(matches),
            vocabulary=list(SENSITIVE_LABELS.keys()),
        )
```
^ This can be easily integrated into a locally-run linear pipeline, and executed via either the command line (by pointing at a file or directory) or in a python script.
Target Audience
- ML Engineers / Data Scientists: Dorsal lets you make sure all of your output steps are validated, using a set of robust schemas for many common data engineering tasks (regression, entity extraction, classification etc.).
- Data Hoarders / Archivists: People with massive local datasets (TB+) who like customizable tools for deduplication, tagging and even cloud querying
- RAG Pipeline Builders: Turn folders of PDFs and docs into structured JSON chunks for vector embeddings
Links
- Github: https://github.com/dorsalhub/dorsal
- PyPI: pip install dorsalhub
- Docs: https://docs.dorsalhub.com
Comparison

| Feature | Dorsal | Cloud ETL (AWS/GCP) |
|---|---|---|
| Integrity | Hash-based | Upload required |
| Validation | JSON Schema / Pydantic | API dependent |
| Cost | Free (local compute) | $$$ (per page) |
| Workflow | Standardized pipeline | Vendor lock-in |

Any and all feedback is extremely welcome! submitted by /u/AverageMechUser
- Who should I interview about the state of Python in 2026? by /u/Cool_Technician_6380 (Python) on January 20, 2026 at 2:43 pm
Hey everyone! Quick question: who would you recommend as a great guest for a Python interview?

Context: I'm working on a YouTube video exploring where Python stands in 2025/2026. Looking for someone who can speak to:

* where Python is actually being used today across different industries
* real-world adoption and career opportunities
* how it stacks up against other modern languages (Rust, Go, etc.)
* both technical depth and practical insights

Ideally someone active in the community (YouTube, conferences, or open source) and engaging to listen to. Huge thanks for any suggestions!

submitted by /u/Cool_Technician_6380 [link] [comments]
- I built a Python DSL for creating C4 models and diagrams by /u/scribe-kiddie (Python) on January 20, 2026 at 10:39 am
Hello! Last year, I started writing a Python C4 model authoring tool, and it has come to a point where I feel good enough to share it with you guys, so you can start playing around with it locally and render the C4 model views with PlantUML.

- GitHub repo: https://github.com/amirulmenjeni/buildzr
- Documentation: https://buildzr.dev

**What My Project Does**

buildzr is a Structurizr authoring tool for Python programmers. It allows you to declaratively or procedurally author Structurizr models and diagrams. If you're not familiar with Structurizr, it is both an open standard (see the Structurizr JSON schema) and a set of tools for building software architecture diagrams as code. Structurizr derives its architecture modeling paradigm from the C4 model, the modeling language for describing software architectures and their relationships. In Structurizr, you define architecture models (System Context, Container, Component, and Code) and their relationships first. Then you can re-use the models to present multiple perspectives, views, and stories about your architecture. buildzr supercharges this workflow with Pythonic syntax sugar and intuitive APIs that make modeling as code more fun and productive.

**Target Audience**

Use buildzr if you want an intuitive and powerful tool for writing C4 architecture models:

- **Intuitive Pythonic syntax:** Use Python's context managers (`with` statements) to create nested structures that naturally mirror your architecture's hierarchy. See the example.
- **Programmatic creation:** Use buildzr's DSL APIs to programmatically create C4 model architecture diagrams. Great for automation!
- **Advanced styling:** Style elements beyond just tags: target by direct reference, type, group membership, or custom predicates for fine-grained visual control. Just take a look at Styles!
- **Cloud provider themes:** Add AWS, Azure, Google Cloud, Kubernetes, and Oracle Cloud icons to your diagrams with IDE-discoverable constants. No more memorizing tag strings! See Themes.
- **Standards compliant:** Stays true to the Structurizr JSON schema standards. buildzr uses datamodel-code-generator to automatically generate the low-level representation of the Workspace model.
- **Rich toolchain:** Uses the familiar Python programming language and its rich toolchains to write software architecture models and diagrams!

Quick example, so you can get the idea (more examples and explanations at https://buildzr.dev):

```python
from buildzr.dsl import (
    Workspace,
    SoftwareSystem,
    Person,
    Container,
    SystemContextView,
    ContainerView,
    desc,
    Group,
    StyleElements,
)
from buildzr.themes import AWS

with Workspace('w') as w:
    # Define your models (architecture elements and their relationships).
    with Group("My Company") as my_company:
        u = Person('Web Application User')
        webapp = SoftwareSystem('Corporate Web App')
        with webapp:
            database = Container('database')
            api = Container('api')
            api >> ("Reads and writes data from/to", "http/api") >> database
    with Group("Microsoft") as microsoft:
        email_system = SoftwareSystem('Microsoft 365')
    u >> [
        desc("Reads and writes email using") >> email_system,
        desc("Create work order using") >> webapp,
    ]
    webapp >> "sends notification using" >> email_system

    # Define the views.
    SystemContextView(
        software_system_selector=webapp,
        key='web_app_system_context_00',
        description="Web App System Context",
        auto_layout='lr',
    )
    ContainerView(
        software_system_selector=webapp,
        key='web_app_container_view_00',
        auto_layout='lr',
        description="Web App Container View",
    )

    # Stylize the views, and apply AWS theme icons.
    StyleElements(on=[u], **AWS.USER)
    StyleElements(on=[api], **AWS.LAMBDA)
    StyleElements(on=[database], **AWS.RDS)

# Export to JSON, PlantUML, or SVG.
w.save()                                   # JSON to {workspace_name}.json
# Requires `pip install buildzr[export-plantuml]`
w.save(format='plantuml', path='output/')  # PlantUML files
w.save(format='svg', path='output/')       # SVG files
```

**Comparison**

Surprisingly, there aren't a lot of Python authoring tools for Structurizr from the community, which is what prompted me to start this project in the first place. I can find only two others, and they're also listed on the Community tooling page of Structurizr's documentation. One of them is marked as archived:

- structurizr-python (archived)
- pystructurizr (since it outputs Structurizr DSL, not the JSON schema, it may be outdated or incompatible with rendering tools that accept Structurizr JSON schema)

submitted by /u/scribe-kiddie [link] [comments]
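The `>>` relationship syntax in buildzr's example is ordinary Python operator overloading. The sketch below is not buildzr's implementation; it is a minimal, self-contained illustration of how chaining like `a >> "uses" >> b` can be made to record relationships, under my own simplified assumptions (a single `Element` class, relationships stored as `(description, target_name)` tuples).

```python
class Element:
    """Minimal sketch: a named node that records relationships via `>>`."""

    def __init__(self, name: str):
        self.name = name
        self.relationships: list[tuple[str, str]] = []  # (description, target name)
        self._pending: str | None = None  # description waiting for its target

    def __rshift__(self, other):
        if isinstance(other, str):
            # `element >> "description"` stashes the description and keeps chaining.
            self._pending = other
            return self
        # `element >> other_element` records the relationship on the source.
        description = self._pending or ""
        self._pending = None
        self.relationships.append((description, other.name))
        return other  # return the target so chains like a >> "x" >> b >> "y" >> c work

# Usage: user >> "uses" >> app records ("uses", "App") on `user` and returns `app`.
user, app, db = Element("User"), Element("App"), Element("DB")
user >> "uses" >> app >> "reads from" >> db
```

Returning the right-hand operand from `__rshift__` is the key design choice: it makes the left-associative chain thread each target into the next link, which is presumably also why buildzr's DSL reads so naturally.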
- fastjsondiff - High-performance JSON comparison with a Zig-powered core by /u/adilkhash (Python) on January 20, 2026 at 10:30 am
Hey reddit! I built a JSON diff library that uses Zig under the hood for speed. Zero runtime dependencies.

**What My Project Does**

fastjsondiff is a Python library for comparing JSON payloads. It detects added, removed, and changed values with full path reporting. The core comparison engine is written in Zig for maximum performance, while providing a clean Pythonic API.

**Target Audience**

Developers who need to compare JSON data in performance-sensitive applications: API response validation, configuration drift detection, test assertions, data pipeline monitoring. Production-ready.

**Comparison**

fastjsondiff trades some flexibility for raw speed. If you need advanced features like custom comparators or fuzzy matching, deepdiff is better suited. If you need fast, straightforward diffs with zero dependencies, this is for you. Compared to the existing jsondiff package, fastjsondiff is blazingly faster.

**Code Example**

```python
import fastjsondiff

result = fastjsondiff.compare(
    '{"name": "Alice", "age": 30}',
    '{"name": "Bob", "age": 30, "city": "NYC"}'
)

for diff in result:
    print(f"{diff.type.value}: {diff.path}")
# changed: root.name
# added: root.city

# Filter by type, serialize to JSON, get summary stats
added_only = result.filter(fastjsondiff.DiffType.ADDED)
print(result.to_json(indent=2))
```

**Link to Source Code**

Open source, MIT license.

- GitHub repo: https://github.com/adilkhash/fastjsondiff
- PyPI: `pip install fastjsondiff-zig`

Feedback is welcome! Hope this package will be a good fit for your problem.

submitted by /u/adilkhash [link] [comments]
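For readers curious what "added/removed/changed values with full path reporting" means concretely, here is a pure-Python reference sketch. It is not fastjsondiff's implementation (and would be far slower than its Zig core); it just produces the same kind of `(type, path)` pairs for nested dict inputs, with lists compared wholesale for brevity.

```python
from typing import Any, Iterator

def json_diff(old: Any, new: Any, path: str = "root") -> Iterator[tuple[str, str]]:
    """Yield (change_type, path) pairs: 'added', 'removed', or 'changed'."""
    if isinstance(old, dict) and isinstance(new, dict):
        for key in old.keys() | new.keys():
            child = f"{path}.{key}"
            if key not in new:
                yield ("removed", child)
            elif key not in old:
                yield ("added", child)
            else:
                # Recurse into values present on both sides.
                yield from json_diff(old[key], new[key], child)
    elif old != new:
        # Leaves (and non-dict containers) are compared by equality.
        yield ("changed", path)
```

Running it on the post's example inputs reports `root.name` as changed and `root.city` as added, matching the output fastjsondiff shows above.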