What are some ways to increase precision or recall in machine learning?

In machine learning, recall is the ability of the model to find all relevant instances in the data while precision is the ability of the model to correctly identify only the relevant instances. A high recall means that most relevant results are returned while a high precision means that most of the returned results are relevant. Ideally, you want a model with both high recall and high precision but often there is a trade-off between the two. In this blog post, we will explore some ways to increase recall or precision in machine learning.
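To make the definitions concrete, here is a minimal sketch (using made-up labels for a spam classifier, where 1 = spam) that computes both metrics from raw counts:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of flagged, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual spam, how many we found
    return precision, recall

# Toy spam-classifier labels: 1 = spam, 0 = not spam.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75): 3 TP, 1 FP, 1 FN
```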

The most direct way to increase recall is to decrease the number of false negatives:

you can do this by lowering your threshold for what constitutes a positive prediction. For example, if you are trying to predict whether or not an email is spam, you might lower the threshold for what constitutes spam so that more emails are classified as spam. This will result in more false positives (emails that are not actually spam being classified as spam), but it will also increase recall (more actual spam emails being correctly classified as spam).

Conversely, raising the threshold decreases recall:

going back to the spam email example, if you raise the threshold for what constitutes spam so that fewer emails are classified as spam, you will get fewer false positives (fewer legitimate emails flagged as spam), but more false negatives (actual spam emails not being classified as spam), and recall will drop.

The most direct way to increase precision is to decrease the number of false positives:

you can do this by raising your threshold for what constitutes a positive prediction. Using the spam email example again, raising the threshold means fewer emails are classified as spam, but the emails that are flagged are more likely to actually be spam. This results in fewer false positives (non-spam emails classified as spam) and therefore higher precision, at the cost of lower recall, since some actual spam emails will now be missed.

Conversely, lowering the threshold decreases precision:

going back to the spam email example once more, if you lower the threshold for what constitutes spam so that more emails are classified as spam, more legitimate emails will be incorrectly flagged (more false positives), and precision will drop, even though recall improves.
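The threshold trade-off described above can be sketched with a few hypothetical model scores; the probabilities below are made up for illustration. Note how lowering the threshold raises recall and lowers precision, and vice versa:

```python
def classify(scores, threshold):
    """Label an example spam (1) when its predicted probability meets the threshold."""
    return [1 if s >= threshold else 0 for s in scores]

# Hypothetical spam probabilities from a model, with the true labels.
scores = [0.95, 0.80, 0.65, 0.45, 0.55, 0.30, 0.20, 0.10]
y_true = [1,    1,    1,    1,    0,    0,    0,    0]

for threshold in (0.3, 0.5, 0.7):
    y_pred = classify(scores, threshold)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    print(f"threshold={threshold}: precision={tp / (tp + fp):.2f}, "
          f"recall={tp / (tp + fn):.2f}")
# threshold=0.3: precision=0.67, recall=1.00   (low bar: catch everything, flag extras)
# threshold=0.7: precision=1.00, recall=0.50   (high bar: flag only sure things, miss half)
```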



To summarize,

there are a few ways to manage the trade-off between precision and recall in machine learning. One way is to use an evaluation metric that reflects what you care about; for example, if you need a balance of precision and recall, you can optimize the F1 score, which is the harmonic mean of the two. Another way is to adjust the classification threshold, either by moving the decision boundary directly or by using a different algorithm altogether.
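As a quick sketch, the F1 score can be computed as the harmonic mean of precision and recall; the numbers below are illustrative only:

```python
def f1(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean rewards balance: a lopsided model scores
# much lower than its arithmetic average would suggest.
print(f1(0.75, 0.75))  # 0.75
print(f1(0.95, 0.40))  # ~0.56, despite an arithmetic mean of 0.675
```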

Sensitivity vs Specificity

In machine learning, sensitivity and specificity are two measures of the performance of a model. Sensitivity (also known as recall, or the true positive rate) is the proportion of actual positives that are correctly identified by the model, while specificity (the true negative rate) is the proportion of actual negatives that are correctly identified by the model.
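A minimal sketch computing both measures from a set of toy binary labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: 1 = positive, 0 = negative.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.8333...)
```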

Google Colab For Machine Learning

State of the Google Colab for ML (October 2022)

Google introduced computing units, which you can purchase just like any other cloud compute from AWS, Azure, etc. With Pro you get 100 computing units, and with Pro+ you get 500. Your choice of GPU, TPU, and the High-RAM option affects how many computing units you use per hour. If you don't have any computing units, you can't use "Premium" tier GPUs (A100, V100), and even the P100 is not viable.

Google Colab Pro+ comes with the Premium tier GPU option, while on Pro, if you have computing units, you can be randomly connected to a P100 or T4. After you use up all of your computing units, you can buy more, or you can use a T4 GPU for part of the time (there can be long stretches of the day when you can't get a T4, or any GPU at all). In the free tier, the GPUs offered are usually the K80 and P4, which perform similarly to a GTX 750 Ti (an entry-level GPU from 2014) but with more VRAM.

For reference, a T4 uses around 2 computing units per hour, and an A100 uses around 15.
Based on current observations, the computing-unit cost of each GPU tends to fluctuate for reasons that aren't publicly documented.
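Taking those rough figures at face value (Pro = 100 units, Pro+ = 500; hourly rates as quoted above, which fluctuate), a quick back-of-the-envelope estimate of how many GPU-hours each plan buys:

```python
# Approximate figures only; actual rates vary over time.
UNITS = {"Pro": 100, "Pro+": 500}   # computing units included per plan
RATE = {"T4": 2, "A100": 15}        # computing units consumed per hour

for plan, units in UNITS.items():
    for gpu, rate in RATE.items():
        print(f"{plan} on {gpu}: ~{units / rate:.1f} hours")
# Pro buys roughly 50 T4-hours but under 7 A100-hours.
```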

Considering those:

  1. For hobbyists and (under)graduate school work, it is better to use your own GPU if you have something with more than 4 GB of VRAM and better than a 750 Ti, or at least purchase Colab Pro so you can still reach a T4 even with no computing units remaining.
  2. For small research companies, non-trivial research at universities, and probably most people, Colab is now probably not a good option.
  3. Colab Pro+ can be considered if you want Pro but don't sit in front of your computer, since Pro disconnects after 90 minutes of inactivity. But that can be worked around with some scripts to some extent, so most of the time Colab Pro+ is not a good option either.

If you have anything more to add, please let me know so I can update this post. Thanks!

Conclusion:


In machine learning, precision and recall trade off against each other; increasing one often decreases the other. There is no single silver bullet solution for increasing either precision or recall; it depends on your specific use case which one is more important and which methods will work best for boosting whichever metric you choose. In this blog post, we explored some methods for increasing either precision or recall; hopefully this gives you a starting point for improving your own models!

 
