What are some ways to increase precision or recall in machine learning?

In machine learning, recall is the ability of the model to find all relevant instances in the data while precision is the ability of the model to correctly identify only the relevant instances. A high recall means that most relevant results are returned while a high precision means that most of the returned results are relevant. Ideally, you want a model with both high recall and high precision but often there is a trade-off between the two. In this blog post, we will explore some ways to increase recall or precision in machine learning.
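To make these definitions concrete, here is a minimal sketch in plain Python (the labels are made up for illustration) that computes both metrics from true and predicted labels:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical spam-detector output: 1 = spam, 0 = not spam
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

Here the model catches three of the four actual spam emails (recall 0.75) and one of its four spam flags is a legitimate email (precision 0.75).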

The main way to increase recall is to reduce the number of false negatives:

you can lower your threshold for what constitutes a positive prediction. For example, if you are trying to predict whether or not an email is spam, you might lower the threshold for what constitutes spam so that more emails are classified as spam. This will catch more of the actual spam (fewer false negatives, and therefore higher recall), but it will also produce more false positives (emails that are not actually spam being classified as spam), which hurts precision.


Raising the threshold moves you in the opposite direction:

going back to the spam email prediction example, you might raise the threshold for what constitutes spam so that fewer emails are classified as spam. This will reduce false positives (legitimate emails wrongly flagged as spam), but it will also increase false negatives (actual spam emails not being classified as spam), so recall goes down even as precision goes up.

The main way to increase precision is to reduce the number of false positives:

you can raise your threshold for what constitutes a positive prediction. Using the spam email prediction example again, you might raise the threshold for what constitutes spam so that only the emails the model is most confident about are classified as spam. This will reduce false positives (non-spam emails being classified as spam), which increases precision, but it will also decrease recall (more actual spam emails going unflagged).

Lowering the threshold has the opposite effect on precision:

going back to the spam email prediction example once more, if you lower the threshold for what constitutes spam, more emails are classified as spam, including more legitimate ones. The extra false positives (non-spam emails classified as spam) drive precision down, even though recall improves.
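The threshold trade-off running through these examples can be sketched with a handful of hypothetical model scores: a strict cutoff gives perfect precision but misses spam, while a loose cutoff catches everything at the cost of false positives.

```python
# Hypothetical spam scores from a model (higher = more spam-like); labels are made up
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0]  # 1 = actually spam

def precision_recall_at(threshold):
    """Precision and recall if every score >= threshold is classified as spam."""
    tp = sum(1 for s, t in zip(scores, labels) if s >= threshold and t == 1)
    fp = sum(1 for s, t in zip(scores, labels) if s >= threshold and t == 0)
    fn = sum(1 for s, t in zip(scores, labels) if s < threshold and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall_at(0.7))  # strict cutoff: precision 1.0, recall 2/3
print(precision_recall_at(0.3))  # loose cutoff: precision 0.6, recall 1.0
```

At 0.7, only the two highest-scoring emails are flagged (both spam), so precision is perfect but one spam email slips through; at 0.3, all three spam emails are caught, but two legitimate emails come along for the ride.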

To summarize,

there are a few ways to increase precision or recall in machine learning. One is to optimize for an evaluation metric that matches your goal: if you need to balance precision and recall, the F1 score, the harmonic mean of the two, is a common choice (note that F1 balances the two metrics rather than maximizing either one alone). The other main lever is the classification threshold: raise it to favor precision, lower it to favor recall. Beyond that, you can change the decision boundary in other ways or use a different algorithm altogether.
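As a quick sketch (with made-up precision/recall values), the F1 score is just the harmonic mean of precision and recall, which punishes whichever metric is weaker:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1(0.9, 0.5))  # ~0.643: well below the arithmetic mean of 0.7
print(f1(0.7, 0.7))  # ~0.7: equal when the two metrics agree
```

Because the harmonic mean drags toward the smaller input, a model cannot game F1 by driving one metric to 1.0 while neglecting the other.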

Sensitivity vs Specificity

In machine learning, sensitivity and specificity are two measures of a model's performance. Sensitivity (another name for recall) is the proportion of actual positives that the model correctly identifies, while specificity is the proportion of actual negatives that the model correctly identifies.
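A minimal sketch in plain Python (labels made up for illustration) showing both measures computed from the confusion-matrix counts:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(sens, spec)  # 2/3 of positives caught, 2/3 of negatives correctly cleared
```

Note that sensitivity looks only at the actual-positive rows and specificity only at the actual-negative rows, which is why the two can be tuned somewhat independently of each other.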

Google Colab For Machine Learning

State of the Google Colab for ML (October 2022)

Google introduced computing units, which you can purchase just as you would compute from AWS, Azure, or any other cloud provider. With Pro you get 100 computing units, and with Pro+ you get 500. The GPU or TPU you choose and the High-RAM option affect how many computing units you consume per hour. If you don’t have any computing units, you can’t use “Premium” tier GPUs (A100, V100), and even the P100 is non-viable.


Google Colab Pro+ comes with the Premium tier GPU option, while on Pro, if you have computing units, you are randomly connected to a P100 or T4. After you use up all of your computing units, you can buy more, or fall back to a T4 for part of the day (there can be long stretches of the day when you can’t get a T4, or any GPU at all). On the free tier, the offered GPUs are usually a K80 or P4, which perform similarly to a 750 Ti (an entry-level GPU from 2014) but with more VRAM.

For reference, a T4 consumes around 2 computing units per hour, and an A100 around 15.
As far as anyone can tell, the computing-unit costs of GPUs fluctuate based on some undocumented factor.
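Using those approximate hourly rates, a quick back-of-the-envelope calculation shows how far each plan's monthly units stretch:

```python
units = {"Pro": 100, "Pro+": 500}    # computing units included per month
unit_cost = {"T4": 2, "A100": 15}    # approximate units consumed per hour (see above)

for plan, budget in units.items():
    for gpu, rate in unit_cost.items():
        print(f"{plan}: ~{budget / rate:.0f} h/month on a {gpu}")
# Pro:  ~50 h on a T4,  ~7 h on an A100
# Pro+: ~250 h on a T4, ~33 h on an A100
```

So even Pro+ buys only a day or two of continuous A100 time, which frames the recommendations below.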

Considering those:

  1. For hobbyists and (under)graduate school duties, it is better to use your own GPU if you have one with more than 4 GB of VRAM and better performance than a 750 Ti, or at least to purchase Colab Pro so you can reach a T4 even when you have no computing units remaining.
  2. For small research companies, non-trivial research at universities, and probably most people, Colab is now probably not a good option.
  3. Colab Pro+ can be considered if you want Pro but don’t sit in front of your computer, since Pro disconnects after 90 minutes of inactivity. But this can be worked around with scripts to some extent, so most of the time Colab Pro+ is not a good option either.

If you have anything more to add, please let me know so I can edit this post. Thanks!

Conclusion:


In machine learning, precision and recall trade off against each other: increasing one often decreases the other. There is no single silver-bullet solution for increasing either metric; which one matters more, and which methods work best for boosting it, depend on your specific use case. In this blog post, we explored some methods for increasing precision or recall; hopefully this gives you a starting point for improving your own models!

 
