How do you make a Python loop faster?

DjamgaMind: Audio Intelligence for the C-Suite (Energy, Healthcare, Finance)

Are you drowning in dense legal text? DjamgaMind is the new audio intelligence platform that turns 100-page healthcare or Energy mandates into 5-minute executive briefings. Whether you are navigating Bill C-27 (Canada) or the CMS-0057-F Interoperability Rule (USA), our AI agents decode the liability so you don’t have to. 👉 Start your specialized audio briefing today at Djamgamind.com


AI Jobs and Career

I wanted to share an exciting opportunity for those of you looking to advance your careers in the AI space. You know how rapidly the landscape is evolving, and finding the right fit can be a challenge. That's why I'm excited about Mercor – they're a platform specifically designed to connect top-tier AI talent with leading companies. Whether you're a data scientist, machine learning engineer, or something else entirely, Mercor can help you find your next big role. If you're ready to take the next step in your AI career, check them out through my referral link: https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1. It's a fantastic resource, and I encourage you to explore the opportunities they have available.

Job Title | Status | Pay
Full-Stack Engineer | Strong match, Full-time | $150K - $220K / year
Developer Experience and Productivity Engineer | Pre-qualified, Full-time | $160K - $300K / year
Software Engineer - Tooling & AI Workflows (Contract) | Contract | $90 / hour
DevOps Engineer (India) | Full-time | $20K - $50K / year
Senior Full-Stack Engineer | Full-time | $2.8K - $4K / week
Enterprise IT & Cloud Domain Expert - India | Contract | $20 - $30 / hour
Senior Software Engineer | Contract | $100 - $200 / hour
Senior Software Engineer | Pre-qualified, Full-time | $150K - $300K / year
Senior Full-Stack Engineer: Latin America | Full-time | $1.6K - $2.1K / week
Software Engineering Expert | Contract | $50 - $150 / hour
Generalist Video Annotators | Contract | $45 / hour
Generalist Writing Expert | Contract | $45 / hour
Editors, Fact Checkers, & Data Quality Reviewers | Contract | $50 - $60 / hour
Multilingual Expert | Contract | $54 / hour
Mathematics Expert (PhD) | Contract | $60 - $80 / hour
Software Engineer - India | Contract | $20 - $45 / hour
Physics Expert (PhD) | Contract | $60 - $80 / hour
Finance Expert | Contract | $150 / hour
Designers | Contract | $50 - $70 / hour
Chemistry Expert (PhD) | Contract | $60 - $80 / hour

How do you make a Python loop faster?

Programmers are always looking for ways to make their code more efficient. Python is a high-level programming language widely used by developers and software engineers, known for its readability and ease of use. One downside, however, is that its interpreted loops can be slow, which becomes a problem when you need to process large amounts of data. There are several ways to speed up Python loops. One is to move the hot loop into a compiled language such as C. Another is to use an optimized library such as NumPy. Finally, you can vectorize your code, replacing element-by-element iteration with whole-array operations, which with the right libraries can even run on a GPU or other parallel hardware. These techniques can significantly speed up your Python code.
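
As a small sketch of the NumPy approach mentioned above (assuming NumPy is installed; the function names here are illustrative, not from the original):

```python
import numpy as np

# Pure-Python loop: the interpreter executes one iteration per element.
def double_loop(values):
    out = []
    for v in values:
        out.append(v * 2)
    return out

# Vectorized version: the multiplication runs once, in optimized C,
# over the whole array instead of element by element.
def double_vectorized(values):
    return np.asarray(values) * 2

data = list(range(1_000_000))
assert double_loop(data) == double_vectorized(data).tolist()
```

Both produce the same result; the vectorized version avoids running the loop body in the Python interpreter at all.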

According to Vladislav Zorov: if we're not talking about NumPy or something similar, try to use list comprehension expressions where possible. Those are handled by the C code of the Python interpreter, instead of looping in Python. It's basically the same idea as the NumPy solution: you just don't want the loop code running in Python.

Example (Python 3):

from timeit import timeit

lst = list(range(1000000))

def loops():
    newlst = []
    for n in lst:
        newlst.append(n * 2)
    return newlst

def lstcomp():
    return [n * 2 for n in lst]

print(timeit(loops, number=100))    # 18.953254899999592 seconds
print(timeit(lstcomp, number=100))  # 11.669047399991541 seconds

Python list traversal tip:

Instead of this:

for i in range(len(l)):
    x = l[i]

use this:

for i, x in enumerate(l):
    ...

to keep track of both indices and values inside a loop. It is roughly twice as fast, and the code looks better.
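
A minimal runnable sketch of the two traversal styles, showing they produce the same index/value pairs (the exact speedup will vary by workload):

```python
letters = ["a", "b", "c"]

# Index-based traversal: extra range/len/indexing work on every iteration.
pairs_indexed = []
for i in range(len(letters)):
    pairs_indexed.append((i, letters[i]))

# enumerate yields (index, value) tuples directly, with no manual indexing.
pairs_enumerated = []
for i, x in enumerate(letters):
    pairs_enumerated.append((i, x))

assert pairs_indexed == pairs_enumerated == [(0, "a"), (1, "b"), (2, "c")]
```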

Another option is to write loops in C instead of Python. This can be done with Cython, whose pyximport module lets compiled .pyx extension modules be imported directly. By doing this, programmers can take advantage of the speed of C while still using the convenient syntax of Python.

Finally, developers can also improve the performance of their code by making use of caching. By caching values that are computed inside a loop, programmers avoid having to recalculate them each time through the loop. Taken together, these steps can make Python code noticeably more efficient.
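
One standard-library way to do this kind of caching is functools.lru_cache; this is a sketch of the general idea, not necessarily the specific approach the author had in mind:

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)
def expensive(n):
    """Stand-in for a costly computation; results are memoized by argument."""
    global call_count
    call_count += 1
    return n * n

# Inside a loop, repeated arguments hit the cache instead of recomputing.
results = [expensive(n % 10) for n in range(1000)]
assert call_count == 10  # only the 10 distinct inputs were actually computed
```

The loop calls `expensive` 1000 times, but the body only runs once per distinct argument.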

Very Important: Don’t worry about code efficiency until you find yourself needing to worry about code efficiency.

The place where you think about efficiency is within the logic of your implementations.

This is where "big O" discussions come into play. If you aren't familiar, here is a link on the topic
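
As a hypothetical illustration of why algorithmic complexity usually matters more than micro-optimizations (this example is not from the original): a membership test against a list is O(n), while the same test against a set is O(1) on average.

```python
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

# O(n): scans the list until it finds the element (here, the last one).
def in_list():
    return 99_999 in haystack_list

# O(1) average case: a single hash lookup.
def in_set():
    return 99_999 in haystack_set

assert in_list() and in_set()

# The set lookup is dramatically faster; exact numbers vary by machine.
set_time = timeit.timeit(in_set, number=100)
list_time = timeit.timeit(in_list, number=100)
assert set_time < list_time
```

Choosing the right data structure here dwarfs any gain from tweaking the loop itself.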

What are the top 10 Wonders of computing and software engineering?

What are the top 10 most insane myths about computer programmers?

Programming, Coding and Algorithms Questions and Answers

Do you want to learn Python? We found 5 online coding courses for beginners.

Python Coding Bestsellers on Amazon

AI-Powered Professional Certification Quiz Platform
Crack Your Next Exam with Djamgatech AI Cert Master

Web | iOS | Android | Windows


https://amzn.to/3s3KXc3

https://coma2.ca

The Best Python Coding and Programming Bootcamps

We’ve also included a scholarship resource with more than 40 unique scholarships to provide additional financial support.

Python Coding Bootcamp Scholarships

Python Coding Breaking News

  • Please recommend a front-end framework/package
    by /u/inspectorG4dget (Python) on January 15, 2026 at 11:10 pm

    I'm building an app with streamlit. Why streamlit? Because I have no frontend experience and streamlit helped me get off the ground pretty quickly. Also, I'm simultaneously deploying to web and desktop, and streamlit lets me do this with just the one codebase (I intend to use something like PyInstaller for distribution) I have different "expanders" in my streamlit application. Each expander has some data/input elements in it (in the case of my most recent problem, it's a data_editor). Sometimes, I need one element to update in response to the user clicking on "Save Changes" in a different part of the application. If they were both in the same fragment, I could just do st.rerun(scope='fragment'). But since they're not, I have no other choice but to do st.rerun(). But if there's incorrect input, I write an error message, which gets subsequently erased due to the rerun. Now I know that I can store this stuff in st.session_state and add additional logic to "recreate" the (prior) error-message state of the app, but that adds a lot of complexity. Since there is no way to st.rerun() a different fragment than the one I'm in, it looks like I have to give up streamlit - about time, I've been writing workarounds/hacks for a lot of streamlit stumbling blocks. So, would anyone be able to recommend an alternative to streamlit? These are the criteria to determine viability of an alternative: ability to control the layout of my elements and programmatically refresh specific elements on demand web and desktop deployments from the same codebase bonus points for being able to handle mobile deployments as well Python API - I can learn another language if the learning curve is fast. That takes Node/React out of the realm of possibility somewhat mature - I started using streamlit back in v0.35 or so. But now I'm using v1.52. While streamlit hasn't been around for as long as React, v1.52 is sufficiently mature. 
I doubt a flashy new frontend framework (eg: with current version 0.43) would have had enough time to iron out the bugs if it's only been around for a very short period of time (eg: 6 months). ideally something you have experience with and can therefore speak confidently to its stability/reliability I'm currently considering: 1. flet: hasn't been around for very long - anyone know if it's any good? 1. NiceGUI 1. Reflex If anyone has any thoughts or suggestions, I'd love them Thank you submitted by /u/inspectorG4dget [link] [comments]

  • [For Hire] Python Bug Fixes – Crashes, Infinite Loops, Errors (Available Today)
    by /u/Dazzling_Leather3897 (Python) on January 15, 2026 at 9:26 pm

    I fix Python bugs and small scripts. Common issues I handle: - Scripts crashing with errors - Infinite loops - Input validation problems - File handling errors Available today. Payment via PayPal, Cash App, or Venmo. Message me with: 1) Python version 2) Error message or screenshot 3) What the script should do I’ll confirm price once I see it. submitted by /u/Dazzling_Leather3897 [link] [comments]

  • ChatGPT vs. Python for a Web-Scraping (and Beyond) Task
    by /u/Leo11235 (Python) on January 15, 2026 at 7:05 pm

    I work for a small city planning firm, who uses a ChatGPT Plus subscription to assist us in tracking new requests for proposals (RFPs) from a multitude of sources. Since we are a city planning firm, these sources are various federal, state, and local government sources, along with pertinent nonprofits and bid aggregator sites. We use the tool to scan set websites, that we have given it daily for updates if new RFPs pertinent to us (i.e., that include or fit into a set of keywords we have given the chats, and have saved to the chat memory) have surfaced for the sources in each chat. ChatGPT, despite frequent updates and tweaking of prompts on our end, is less than ideal for this task. Our "daily checks" done through ChatGPT consistently miss released RFPs, including those that should be within the parameters we have set for each of the chats we use for this task. To work around these issues, we have split the sources we ask it to check, so that each chat has 25 sources assigned to it in order for ChatGPT to avoid cutting corners (when we've given it larger datasets, despite asking it not to, it often does not run the full source check and print a table showing the results of each source check), and indicate in our instructions that the tracker should also attempt to search for related webpages and documents matching our description in addition to the source. Additionally, every month or so we delete the chats, and re-paste the same original instructions to new chats and remake the related automations to avoid the chats' long memories obstructing ChatGPT from completing the task well/taking too long. The problems we've encountered are as follows: We have automated the task (or attempted to do so) for ten of our chats, and results are very mixed. Often, the tracker returns the results, unprompted, at 11:30 am for the chats that are automated. 
Frequently, however, the tracker states that it's impossible to run the task without manually prompting a response (despite it, at other times and/or in other chats, returning what we ask for as an automated task). Additionally, in these automated commands, they often miss released RFPs even when run successfully. From what I can gather, this is because the automation, despite one of its instructions being to search the web more broadly, limits itself to checking one particular link, and sometimes the agencies in question do not have a dedicated RFP release page on their website so we have used the site homepage as the link. As automation is only permitted for up to 10 chats/tasks with our Plus subscription, we do a manual prompt (e.g., "run the rfp tracker for [DATE]") daily for the other chats. Still, we are seeing similar issues where the tracker does not follow the "if no links, try to search for the RFPs released by these agencies" prompt included in its saved memory. Additionally (and again, this applies to all the chats automated and manually-prompted alike) many sources block ChatGPT from accessing content--would this be an issue Python could overcome? See my question at the end. From the issues above, ChatGPT is often acting directly against what we have (repeatedly) saved to its memory (such as regarding searching elsewhere if a particular link doesn't have RFP listings). This is of particular importance for smaller cities, who sometimes post their RFPs on different pieces of their municipal websites, or whose "source page" we have given ChatGPT is a static document or a web page that is no longer updated. The point of using ChatGPT rather than manual checks for this is we were hoping that ChatGPT would be able to "go the extra mile" and search the web more generally for RFP updates from the particular agencies, but whether in the automated trackers or when manually prompted it's pretty bad at this. 
How would you go about correcting these issues in ChatGPT's prompt? We are wondering if Python would be a better tool, given that much of what we'd like to do is essentially web scraping. My one qualm is that one of the big shortcomings of ChatGPT thus far has been if we give it a link that either no longer works, is no longer updated, or is a link to a website's homepage, ChatGPT isn't following our prompts to search for RFPs from that source on the web more generally and (per my limited coding knowledge) Python won't be of much help there either. I would appreciate some insightful guidance on this, thank you! submitted by /u/Leo11235 [link] [comments]

  • Stale Code and what to do about it
    by /u/Natural-Sentence-601 (Python) on January 15, 2026 at 4:56 pm

    I sometimes wonder if the Python coding community is effectively a guild that one needs to earn your way into by hard knocks. Why do I need an AI to tell me about "stale code" and what to do about it?| Delete PyCache This is critical to solving the "Ghost" attribute error permanently. Please run this command in your backend directory terminal: Bash # Windows del /S /Q __pycache__ rmdir /S /Q __pycache__ # OR simply manually delete the __pycache__ folder in your backend directory. submitted by /u/Natural-Sentence-601 [link] [comments]

  • [Showcase] ReFlow - Open-Source Local AI Pipeline for Video Dubbing (Python/CustomTkinter)
    by /u/MeanManagement834 (Python) on January 15, 2026 at 4:30 pm

    Hi r/Python, I recently released v0.3 of my open-source project, ReFlow. It is a desktop GUI that orchestrates local AI models to handle video translation and content filtering. Repo: https://github.com/ananta-sj/ReFlow-Studio 📽️ What My Project Does ReFlow processes video files (MP4) locally using a pipeline of PyTorch models: 1. ASR: Uses OpenAI Whisper to transcribe audio and generate timestamps. 2. TTS: Uses Coqui XTTS v2 to translate text and generate dubbed audio in a target language while preserving the original speaker's tone. 3. CV: Uses NudeNet for object detection to identify and blur specific visual classes frame-by-frame. 4. GUI: Wraps these backend scripts in a multi-threaded CustomTkinter interface with real-time logging. 🎯 Target Audience This project is for developers and privacy enthusiasts who want to run these workflows offline without relying on cloud APIs. It serves as a practical example of integrating heavy machine-learning models into a user-friendly Python application. ⚖️ Comparison vs. Cloud APIs: Unlike cloud-based solutions which require data upload and API keys, ReFlow runs entirely on the user's hardware (GPU recommended). This ensures zero data latency and complete privacy, though performance depends on local hardware specs. vs. CLI Scripts: Many local implementations of XTTS or Whisper are command-line only. This project provides a full GUI (CustomTkinter) to make the pipeline accessible for testing and daily use. 🛠️ Tech Stack Language: Python 3.10 GUI: CustomTkinter Libraries: torch, ffmpeg-python, better_profanity Models: Whisper (Base/Small), XTTS v2, NudeNet I welcome any feedback on the code structure or the UI implementation! submitted by /u/MeanManagement834 [link] [comments]

  • Follow up: Clientele - an API integration framework for Python
    by /u/phalt_ (Python) on January 15, 2026 at 4:25 pm

    Hello pythonistas, two weeks ago I shared a blog post about an alternative way of building API integrations, heavily inspired by the developer experience of python API frameworks. What My Project Does Clientele lets you focus on the behaviour you want from an API, and let it handle the rest - networking, hydration, caching, and data validation. It uses strong types and decorators to build a reliable and loveable API integration experience. I have been working on the project day and night - testing, honing, extending, and even getting contributions from other helpful developers. I now have the project in a stable state where I need more feedback on real-life usage and testing. Here are some examples of it in action: Simple API ```python from clientele import api client = api.APIClient(base_url="https://pokeapi.co/api/v2") @client.get("/pokemon/{pokemon_name}") def get_pokemon_info(pokemon_name: str, result: dict) -> dict: return result ``` Simple POST request ```python from clientele import api client = api.APIClient(base_url="https://httpbin.org") @client.post("/post") def post_input_data(data: dict, result: dict) -> dict: return result ``` Streaming responses ```python from typing import AsyncIterator from pydantic import BaseModel from clientele import api client = api.APIClient(base_url="http://localhost:8000") class Event(BaseModel): text: str @client.get("/events", streaming_response=True) async def stream_events(*, result: AsyncIterator[Event]) -> AsyncIterator[Event]: return result ``` New features include: Handle streaming responses for Server Sent Events Handle custom response parsing with callbacks Sensible HTTP caching decorator with extendable backends A Mypy plugin to handle the way the library injects parameters Many many tweaks and updates to handle edge-case OpenAPI schemas Please star ⭐ the project, give it a download and let me know what you think: https://github.com/phalt/clientele submitted by /u/phalt_ [link] [comments]

  • CVE-2024-12718 Python Tarfile module how to mitigate on 3.14.2
    by /u/Trif55 (Python) on January 15, 2026 at 1:23 pm

    Hi this CVE shows as a CVSS score of 10 on MS defender which has reached the top of management level, I can't find any details if 3.14.2 is patched against this or needs a manual patch and if so how I install a manual patch, Most detections on defender are on windows PCs where Python is probably installed for light dev work or arduino things, I don't think anyone's has ever grabbed a tarfile and extracted it, though I expect some update or similar scripts perhaps do automatically? Anyway I installed python with the following per a guide: winget install 9NQ7512CXL7T py install py -3.14-64 cd c:\python\ py -3.14 -m venv .venv etc submitted by /u/Trif55 [link] [comments]

  • Tired of catching N+1 queries in production?
    by /u/Ok-Emphasis-3825 (Python) on January 15, 2026 at 1:12 pm

    Hi everyone, Ever pushed a feature, only to watch your database scream because a missed select_related or prefetch_related caused N+1 queries? Runtime tools like nplusone and Django Debug Toolbar are great, but they catch issues after the fact. I wanted something that flags problems before they hit staging or production. I’m exploring a CLI tool that performs static analysis on Django projects to detect N+1 patterns, even across templates. Early features include: Detect N+1 queries in Python code before you run it Analyze templates to find database queries triggered by loops or object access Works in CI/CD: block PRs that introduce performance issues Runs without affecting your app at runtime Quick CLI output highlights exactly which queries and lines may cause N+1s I am opening a private beta to get feedback from Django developers and understand which cases are most common in the wild. If you are interested, check out a short landing page with examples: http://django-n-1-query-detector.pages.dev/ I would love to hear from fellow Django devs: Any recent N+1 headaches you had to debug? What happened? How do you currently catch these issues in your workflow? Would a tool that warns you before deployment be useful for your team? Stories welcome. The more painful, the better! Thanks for reading! submitted by /u/Ok-Emphasis-3825 [link] [comments]

  • Modularity in bigger applications
    by /u/omry8880 (Python) on January 15, 2026 at 12:23 pm

    I would love to know how you guys like to structure your models/services files: Do you usually create a single models.py/service.py file and implement all the router's (in case of a FastAPI project) models/services there, or is it better to have a file-per-model approach, meaning have a models folder and inside it many separate model files? For a big FastAPI project for example, it makes sense to have a models.py file inside each router folder, but I wonder if having a 400+ lines models.py file is a good practice or not. submitted by /u/omry8880 [link] [comments]

  • Any suggestions for Python development classes in Thane?
    by /u/PristinePlace3079 (Python) on January 15, 2026 at 9:08 am

    I’m planning to get serious about Python development, but while searching for python development classes in Thane, I’ve realized there are tons of options with very different approaches. It’s confusing to decide what’s actually worth investing time in, especially as a beginner. From my experience so far, Python itself makes sense quickly, but applying it to real projects and understanding how things work end-to-end is where most people struggle. I bounced between random videos and tutorials and often ended up more confused than confident. What helped others here was structured learning with clear explanations and real examples instead of jumping between topics. Some learners I spoke with mentioned that studying at Quastech IT Training & Placement Institute, Thane helped them connect fundamentals with actual development practice because basics were taught properly before moving ahead. I’m still figuring out the right pace and focus, but the path looks clearer now. For those who’ve learned Python development—did you benefit more from classes, project practice, or self-study in the beginning? submitted by /u/PristinePlace3079 [link] [comments]

  • What's your default Python project setup in 2026?
    by /u/crowpng (Python) on January 15, 2026 at 8:55 am

    When starting something new, do you default to: venv or poetry? requests vs httpx? pandas vs lighter tools? type checking or not? Not looking for best, just interested in real-world defaults people actually use. submitted by /u/crowpng [link] [comments]

  • We are organizing an event focused on hands-on discussions about using LangChain with PostHog.
    by /u/Upset-Pop1136 (Python) on January 15, 2026 at 8:45 am

    Topic: LangChain in Production, PostHog Max AI Code Walkthrough ​About Event This meeting will be a hands-on discussion where we will go through the actual code implementation of PostHog Max AI and understand how PostHog built it using LangChain. ​We will explore how LangChain works in real production, what components they used, how the workflow is designed, and what best practices we can learn from it. ​After the walkthrough, we will have an open Q&A, and then everyone can share their feedback and experience using LangChain in their own projects. ​This session is for Developers working with LangChain Engineers building AI agents for production. Anyone who wants to learn from a real LangChain production implementation. Registration Link: https://luma.com/5g9nzmxa A small effort in giving back to the community 🙂 submitted by /u/Upset-Pop1136 [link] [comments]

  • Handling 30M rows pandas/colab - Chunking vs Sampling vs Lossing Context?
    by /u/insidePassenger0 (Python) on January 15, 2026 at 7:45 am

    I’m working with a fairly large dataset (CSV) (~3 crore / 30 million rows). Due to memory and compute limits (I’m currently using Google Colab), I can’t load the entire dataset into memory at once. What I’ve done so far: Randomly sampled ~1 lakh (100k) rows Performed EDA on the sample to understand distributions, correlations, and basic patterns However, I’m concerned that sampling may lose important data context, especially: Outliers or rare events Long-tail behavior Rare categories that may not appear in the sample So I’m considering an alternative approach using pandas chunking: Read the data with chunksize=1_000_000 Define separate functions for: preprocessing EDA/statistics feature engineering Apply these functions to each chunk Store the processed chunks in a list Concatenate everything at the end into a final DataFrame My questions: Is this chunk-based approach actually safe and scalable for ~30M rows in pandas? Which types of preprocessing / feature engineering are not safe to do chunk-wise due to missing global context? If sampling can lose data context, what’s the recommended way to analyze and process such large datasets while still capturing outliers and rare patterns? Specifically for Google Colab, what are best practices here? -Multiple passes over data? -Storing intermediate results to disk (Parquet/CSV)? -Using Dask/Polars instead of pandas? I’m trying to balance: -Limited RAM -Correct statistical behavior -Practical workflows (not enterprise Spark clusters) Would love to hear how others handle large datasets like this in Colab or similar constrained environments submitted by /u/insidePassenger0 [link] [comments]

  • Thursday Daily Thread: Python Careers, Courses, and Furthering Education!
    by /u/AutoModerator (Python) on January 15, 2026 at 12:00 am

    Weekly Thread: Professional Use, Jobs, and Education 🏢 Welcome to this week's discussion on Python in the professional world! This is your spot to talk about job hunting, career growth, and educational resources in Python. Please note, this thread is not for recruitment. How it Works: Career Talk: Discuss using Python in your job, or the job market for Python roles. Education Q&A: Ask or answer questions about Python courses, certifications, and educational resources. Workplace Chat: Share your experiences, challenges, or success stories about using Python professionally. Guidelines: This thread is not for recruitment. For job postings, please see r/PythonJobs or the recruitment thread in the sidebar. Keep discussions relevant to Python in the professional and educational context. Example Topics: Career Paths: What kinds of roles are out there for Python developers? Certifications: Are Python certifications worth it? Course Recommendations: Any good advanced Python courses to recommend? Workplace Tools: What Python libraries are indispensable in your professional work? Interview Tips: What types of Python questions are commonly asked in interviews? Let's help each other grow in our careers and education. Happy discussing! 🌟 submitted by /u/AutoModerator [link] [comments]

  • I built wxpath: a declarative web crawler where crawling/scraping is one XPath expression
    by /u/fourhoarsemen (Python) on January 14, 2026 at 5:45 pm

    This is wxpath's first public release, and I'd love feedback on the expression syntax, any use cases this might unlock, or anything else. What My Project Does wxpath is a declarative web crawler where traversal is expressed directly in XPath. Instead of writing imperative crawl loops, wxpath lets you describe what to follow and what to extract in a single expression (it's async under the hood; results are streamed as they’re discovered). By introducing the url(...) operator and the /// syntax, wxpath's engine can perform deep/recursive web crawling and extraction. For example, to build a simple Wikipedia knowledge graph: import wxpath path_expr = """ url('https://en.wikipedia.org/wiki/Expression_language') ///url(//main//a/@href[starts-with(., '/wiki/') and not(contains(., ':'))]) /map{ 'title': (//span[contains(@class, "mw-page-title-main")]/text())[1] ! string(.), 'url': string(base-uri(.)), 'short_description': //div[contains(@class, 'shortdescription')]/text() ! string(.), 'forward_links': //div[@id="mw-content-text"]//a/@href ! string(.) } """ for item in wxpath.wxpath_async_blocking_iter(path_expr, max_depth=1): print(item) Output: map{'title': 'Computer language', 'url': 'https://en.wikipedia.org/wiki/Computer_language', 'short_description': 'Formal language for communicating with a computer', 'forward_links': ['/wiki/Formal_language', '/wiki/Communication', ...]} map{'title': 'Advanced Boolean Expression Language', 'url': 'https://en.wikipedia.org/wiki/Advanced_Boolean_Expression_Language', 'short_description': 'Hardware description language and software', 'forward_links': ['/wiki/File:ABEL_HDL_example_SN74162.png', '/wiki/Hardware_description_language', ...]} map{'title': 'Machine-readable medium and data', 'url': 'https://en.wikipedia.org/wiki/Machine_readable', 'short_description': 'Medium capable of storing data in a format readable by a machine', 'forward_links': ['/wiki/File:EAN-13-ISBN-13.svg', '/wiki/ISBN', ...]} ... 
Target Audience The target audience is anyone who: wants to quickly prototype and build web scrapers familiar with XPath or data selectors builds datasets (think RAG, data hoarding, etc.) wants to study link structure of the web (quickly) i.e. web network scientists Comparison From Scrapy's official documentation, here is an example of a simple spider that scrapes quotes from a website and writes to a file. Scrapy: import scrapy class QuotesSpider(scrapy.Spider): name = "quotes" start_urls = [ "https://quotes.toscrape.com/tag/humor/", ] def parse(self, response): for quote in response.css("div.quote"): yield { "author": quote.xpath("span/small/text()").get(), "text": quote.css("span.text::text").get(), } next_page = response.css('li.next a::attr("href")').get() if next_page is not None: yield response.follow(next_page, self.parse) Then from the command line, you would run: scrapy runspider quotes_spider.py -o quotes.jsonl wxpath: wxpath gives you two options: write directly from a Python script or from the command line. from wxpath import wxpath_async_blocking_iter from wxpath.hooks import registry, builtin path_expr = """ url('https://quotes.toscrape.com/tag/humor/', follow=//li[@class='next']/a/@href) //div[@class='quote'] /map{ 'author': (./span/small/text())[1], 'text': (./span[@class='text']/text())[1] } registry.register(builtin.JSONLWriter(path='quotes.jsonl')) items = list(wxpath_async_blocking_iter(path_expr, max_depth=3)) or from the command line: wxpath --depth 1 "\ url('https://quotes.toscrape.com/tag/humor/', follow=//li[@class='next']/a/@href) \ //div[@class='quote'] \ /map{ \ 'author': (./span/small/text())[1], \ 'text': (./span[@class='text']/text())[1] \ }" > quotes.jsonl Links GitHub: https://github.com/rodricios/wxpath PyPI: pip install wxpath submitted by /u/fourhoarsemen [link] [comments]

  • Teaching services online for kids/teenagers?
    by /u/CodeVirus (Python) on January 14, 2026 at 2:33 pm

    My son (13) is interested in programming. I would like to sign him up for some introductory (and fun for teenagers) online program. Are there any that you've seen that you'd be able to recommend? Paid or unpaid are fine. submitted by /u/CodeVirus [link] [comments]

  • I’ve published a new audio DSP/Synthesis package to PyPI
    by /u/D0m1n1qu36ry5 (Python) on January 14, 2026 at 12:53 pm

    What My Project Does

    It's called audio-dsp. It is a comprehensive collection of DSP tools, including synthesizers, effects, sequencers, MIDI tools, and utilities.

    Target Audience

    I am a music producer (25 years) and a programmer (15 years), so I built this with a focus on high-quality rendering and creative design. If you are a creative coder or audio dev looking to generate sound rather than just analyze it, this is for you.

    Comparison

    Most Python audio libraries focus on analysis (like librosa) or pure math (scipy). My library is different because it focuses on musicality and synthesis. It provides the building blocks for creating music and complex sound textures programmatically.

    Try it out: pip install audio-dsp
    GitHub: https://github.com/Metallicode/python_audio_dsp

    I'd love to hear your feedback! submitted by /u/D0m1n1qu36ry5 [link] [comments]
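    To make the synthesis-versus-analysis distinction concrete, here is a minimal, standard-library-only sketch of the kind of building block such a package is about: render a sine tone with a linear fade-out envelope and write it as a WAV file. Note this is not audio-dsp's API (its names and interfaces are not shown in the post); it is only a generic illustration of generating sound programmatically.

```python
# Generic synthesis sketch (NOT audio-dsp's API): a sine oscillator with a
# linear fade-out envelope, written to a mono 16-bit PCM WAV file.
import math
import struct
import wave

def render_sine(freq_hz=440.0, duration_s=0.5, sample_rate=44100, amplitude=0.8):
    """Return a list of float samples in [-1.0, 1.0]."""
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        env = 1.0 - i / n  # linear fade-out envelope
        samples.append(amplitude * env * math.sin(2 * math.pi * freq_hz * t))
    return samples

def write_wav(path, samples, sample_rate=44100):
    """Write mono 16-bit PCM, clipping samples to the valid range."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

write_wav("tone.wav", render_sine())
```

    A real synthesis library layers oscillators, envelopes, and effects as composable objects instead of loops like this, but the sample-by-sample rendering idea is the same.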

  • I built a modern, type-safe rate limiter for Django with Async support (v1.0.1)
    by /u/TheCodingTutor (Python) on January 14, 2026 at 11:43 am

    Hey r/Python! 👋 I just released django-smart-ratelimit v1.0.1. I built this because I needed a rate limiter that could handle modern Django (async views) and wouldn't crash my production apps when the cache backend flickered.

    What makes it different?

    - 🐍 Full async support: works natively with async views using AsyncRedis.
    - 🛡️ Circuit breakers: if your Redis backend has high latency or goes down, the library detects it and temporarily bypasses rate limiting so your user traffic isn't dropped.
    - 🧠 Flexible algorithms: you aren't stuck with just one method. Choose between Token Bucket (for burst traffic), Sliding Window, or Fixed Window.
    - 🔌 Easy migration: API-compatible with the legacy django-ratelimit library.

    Quick example:

    from django.http import HttpResponse
    from django_smart_ratelimit import ratelimit

    @ratelimit(key='ip', rate='5/m', block=True)
    async def my_async_view(request):
        return HttpResponse("Fast & Safe! 🚀")

    I'd love to hear your feedback on the architecture or feature set! GitHub: https://github.com/YasserShkeir/django-smart-ratelimit submitted by /u/TheCodingTutor [link] [comments]
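    For readers unfamiliar with the algorithms named above, here is a minimal sketch of the textbook token-bucket algorithm, which is what makes burst traffic possible: tokens refill at a fixed rate up to a capacity, and each request spends one. This is not django-smart-ratelimit's implementation, just the underlying idea.

```python
# Textbook token bucket (NOT django-smart-ratelimit's internals).
import time

class TokenBucket:
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # max tokens = allowed burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Consume one token if available; return True if the request passes."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

    With rate='5/m' semantics, this would be roughly rate=5/60 tokens per second; a sliding or fixed window trades the burst allowance for stricter per-interval counting.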

  • dc-input: I got tired of rewriting interactive input logic, so I built this
    by /u/Emotional-Pipe-335 (Python) on January 14, 2026 at 10:19 am

    Hi all! I wanted to share a small library I've been working on. Feedback is very welcome, especially on UX, edge cases, or missing features. https://github.com/jdvanwijk/dc-input

    What My Project Does

    I often end up writing small scripts or internal tools that need structured user input, and I kept re-implementing variations of this:

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        age: int | None

    while True:
        name = input("Name: ").strip()
        if name:
            break
        print("Name is required")

    while True:
        age_raw = input("Age (optional): ").strip()
        if not age_raw:
            age = None
            break
        try:
            age = int(age_raw)
            break
        except ValueError:
            print("Age must be an integer")

    user = User(name=name, age=age)

    This gets tedious (and brittle) once you add nesting, optional sections, repetition, undo functionality, etc. So I built dc-input, which lets you do this instead:

    from dataclasses import dataclass
    from dc_input import get_input

    @dataclass
    class User:
        name: str
        age: int | None

    user = get_input(User)

    The library walks the dataclass schema and derives an interactive input session from it (nested dataclasses, optional fields, repeatable containers, defaults, undo support, etc.). For an interactive session example, see: https://asciinema.org/a/767996

    Target Audience

    This has mostly been useful for me in internal scripts and small tools where I want structured input without turning the whole thing into a CLI framework.

    Comparison

    Compared to prompt libraries like prompt_toolkit and questionary, dc-input is higher-level: you don't design prompts or control flow by hand; the structure of your data is the control flow. This makes dc-input more opinionated and less flexible than those examples, so it won't fit every workflow; but in return you get very fast setup, strong guarantees about correctness, and excellent support for traversing nested data structures. submitted by /u/Emotional-Pipe-335 [link] [comments]

  • Jetbase - A Modern Python Database Migration Tool (Alembic alternative)
    by /u/Parking_Cicada_819 (Python) on January 14, 2026 at 12:04 am

    Hey everyone! I built a database migration tool in Python called Jetbase. I was looking for something more Liquibase/Flyway-style than Alembic when working with more complex apps and data pipelines, but didn't want to leave the Python ecosystem. So I built Jetbase as a Python-native alternative.

    Since Alembic is the main database migration tool in Python, here's a quick comparison. Jetbase has all the main stuff like upgrades, rollbacks, migration history, and dry runs, but also has a few features that set it apart.

    Migration validation

    Jetbase validates that previously applied migration files haven't been modified or removed before running new ones, to prevent different environments from ending up with different schemas. If a migrated file is changed or deleted, Jetbase fails fast. If you want Alembic-style flexibility, you can disable validation via the config.

    SQL-first, not ORM-first

    Jetbase migrations are written in plain SQL. Alembic supports SQL too, but in practice it's usually paired with SQLAlchemy. That didn't match how we were actually working anymore, since we had switched to always using plain SQL:

    - Complex queries were more efficient and clearer in raw SQL
    - ORMs weren't helpful for data pipelines (e.g. S3 → Snowflake → Postgres)
    - We explored and validated SQL queries directly in tools like DBeaver and Snowflake and didn't want to rewrite them in SQLAlchemy for our apps
    - Sometimes we queried other teams' databases without wanting to add additional ORM models

    Linear, easy-to-follow migrations

    Jetbase enforces strictly ascending version numbers: 1 → 2 → 3 → 4. Each migration file includes the version in the filename, e.g. V1.5__create_users_table.sql. This makes it easy to see the order at a glance rather than having random version strings. Jetbase also has commands such as jetbase history and jetbase status to see applied versus pending migrations.

    Linear migrations also lead to handling merge conflicts differently than Alembic. In Alembic's graph-based approach, if two developers create a new migration linked to the same down revision, it creates two heads, and Alembic has to resolve this merge conflict (flexible, but it makes things more complicated). Jetbase keeps migrations fully linear and chronological: there's always a single latest migration. If two migrations try to use the same version number, Jetbase fails immediately and forces you to resolve it before anything runs. The end result is a migration history that stays predictable, simple, and easy to reason about, especially when working on a team or running migrations in CI or automation.

    Migration locking

    Jetbase uses a lock to allow only one migration process to run at a time. This is useful when multiple developers, agents, or CI/CD processes are running, to prevent migration errors or corruption.

    Repo: https://github.com/jetbase-hq/jetbase
    Docs: https://jetbase-hq.github.io/jetbase/

    Would love to hear your thoughts / get some feedback! It's simple to get started:

    pip install jetbase

    # Initialize jetbase
    jetbase init
    cd jetbase

    (Add your sqlalchemy_url to jetbase/env.py, e.g. sqlite:///test.db)

    # Generate a new migration file, V1__create_users_table.sql:
    jetbase new "create users table" -v 1

    # Add migration SQL statements to the file, then run the migration:
    jetbase upgrade

    submitted by /u/Parking_Cicada_819 [link] [comments]
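    The migration-validation idea described above is typically done with checksums: record a hash of each file when it is applied, then refuse to run if an applied file has since been edited or deleted. The sketch below is not Jetbase's code, just the underlying concept.

```python
# Checksum-based migration validation sketch (NOT Jetbase's implementation).
import hashlib
from pathlib import Path

def checksum(path):
    """SHA-256 of a migration file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def validate(applied, migrations_dir):
    """Compare recorded checksums against files on disk.

    applied: {filename: checksum recorded when the migration was applied}
    Returns a list of error strings; empty means the history is intact.
    """
    errors = []
    for name, recorded in applied.items():
        path = Path(migrations_dir) / name
        if not path.exists():
            errors.append(f"{name}: applied migration file was deleted")
        elif checksum(path) != recorded:
            errors.append(f"{name}: applied migration file was modified")
    return errors
```

    A tool would store the recorded checksums in its migrations table and run this check (and the strictly-ascending version check) before applying anything new.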
