What are some ways to increase precision or recall in machine learning?



In machine learning, recall is the ability of the model to find all of the relevant instances in the data, while precision is the ability of the model to return only relevant instances. High recall means that most of the relevant results are retrieved; high precision means that most of the returned results are relevant. Ideally, you want a model with both high recall and high precision, but in practice there is usually a trade-off between the two. In this blog post, we will explore some ways to increase recall or precision in machine learning.



The main lever for increasing recall is

to reduce the number of false negatives, which in practice means lowering your threshold for what constitutes a positive prediction. For example, if you are trying to predict whether or not an email is spam, you might lower the score an email needs before it is classified as spam, so that more emails get flagged. This catches more of the actual spam (fewer false negatives, hence higher recall), but it also produces more false positives (emails that are not actually spam being classified as spam).
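Below is a minimal sketch of this threshold adjustment using scikit-learn on a synthetic stand-in for a spam dataset; the dataset, the logistic regression model, and the 0.3 cut-off are illustrative assumptions, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a spam dataset: class 1 = "spam", roughly 20% of examples.
X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]  # predicted probability of "spam"

# Default 0.5 cut-off vs. a lower, recall-friendly cut-off.
for threshold in (0.5, 0.3):
    pred = (proba >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_test, pred):.2f}, "
          f"recall={recall_score(y_test, pred):.2f}")
```

With the lower cut-off you should typically see recall go up and precision go down, which is exactly the trade-off described above.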


The cost of that lower threshold

is precision: the extra emails you flag include more legitimate mail, so a larger share of your positive predictions are wrong. Whether that trade is acceptable depends on how costly a missed spam email is compared to a wrongly flagged one; if missing positives is the more expensive mistake, a lower threshold is usually the right call.


The main lever for increasing precision is

to reduce the number of false positives, which usually means raising your threshold for what constitutes a positive prediction. Using the spam email example again, you might require a higher spam score before an email is flagged, so that fewer emails are classified as spam. The emails that do get flagged are then far more likely to be genuinely spam (fewer false positives), so precision goes up.

The cost of that higher threshold

is recall: with a stricter cut-off, some actual spam no longer clears the bar and slips through to the inbox (more false negatives), so recall falls even as precision rises. The same threshold knob therefore moves precision and recall in opposite directions, which is exactly the trade-off described at the start of this post.
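One practical way to pick the threshold is to sweep all candidate cut-offs with scikit-learn's precision_recall_curve and choose the lowest one that clears a precision target. The sketch below does this on synthetic data; the data, model, and the 0.90 precision target are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

# Precision and recall at every candidate threshold.
precision, recall, thresholds = precision_recall_curve(y_test, proba)

# Pick the lowest threshold that reaches at least 90% precision, if one exists.
target = 0.90
candidates = np.where(precision[:-1] >= target)[0]
if candidates.size:
    i = candidates[0]
    print(f"threshold {thresholds[i]:.2f} -> "
          f"precision {precision[i]:.2f}, recall {recall[i]:.2f}")
else:
    print("no threshold reaches the precision target on this data")
```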



To summarize,

there are a few practical levers for trading off precision and recall. One is the evaluation metric you optimize for: the F1 score balances precision and recall equally, while the more general F-beta score lets you weight one over the other. Another is the classification threshold: moving the decision boundary lowers or raises the bar for a positive prediction and shifts the balance between the two. If neither gets you where you need to be, you can also switch to a different algorithm altogether.
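As a rough illustration of the metric choices mentioned above, the snippet below computes precision, recall, F1, and two F-beta variants; the label vectors are made up purely for demonstration.

```python
from sklearn.metrics import f1_score, fbeta_score, precision_score, recall_score

# Arbitrary ground-truth labels and predictions, for illustration only.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("F0.5 (weights precision more):", fbeta_score(y_true, y_pred, beta=0.5))
print("F2   (weights recall more):   ", fbeta_score(y_true, y_pred, beta=2.0))
```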


Sensitivity vs Specificity

In machine learning, sensitivity and specificity are two measures of the performance of a model. Sensitivity (also called recall, or the true positive rate) is the proportion of actual positives that the model correctly identifies, while specificity (the true negative rate) is the proportion of actual negatives that the model correctly identifies.
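For concreteness, here is a small sketch of how sensitivity and specificity fall out of a confusion matrix; the labels below are made up for illustration.

```python
from sklearn.metrics import confusion_matrix

# Arbitrary example labels: 1 = positive, 0 = negative.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate, same as recall
specificity = tn / (tn + fp)  # true negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```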

Google Colab For Machine Learning

State of the Google Colab for ML (October 2022)

Google introduced computing units, which you can purchase much like compute from AWS or Azure. Pro includes 100 computing units and Pro+ includes 500. Your choice of GPU or TPU and the High-RAM option affect how many computing units you burn per hour. If you have no computing units left, you can't use "Premium" tier GPUs (A100, V100), and even the P100 is effectively out of reach.


Google Colab Pro+ includes the Premium-tier GPU option, while on Pro you are randomly assigned a P100 or T4 as long as you still have computing units. Once your units run out, you can buy more, or fall back to a T4 for part of the time (and there are stretches of the day when you can't get a T4, or any GPU at all). On the free tier, the GPUs on offer are usually the K80 and P4, which perform roughly like a 750 Ti (an entry-level GPU from 2014) with more VRAM.

For reference, a T4 consumes around 2 computing units per hour and an A100 around 15.
As far as anyone can tell, the per-hour computing-unit cost of each GPU fluctuates based on factors Google has not documented.
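Taking those figures at face value (2 units/hour for a T4, 15 for an A100, 100 units on Pro and 500 on Pro+), a quick back-of-the-envelope calculation of how many GPU-hours each plan buys; treat it purely as illustration, since the rates fluctuate.

```python
# Assumed rates and allowances, quoted from the figures above; they may change.
rates = {"T4": 2, "A100": 15}       # computing units consumed per hour
plans = {"Pro": 100, "Pro+": 500}   # computing units included per plan

for plan, units in plans.items():
    for gpu, rate in rates.items():
        print(f"{plan}: ~{units / rate:.0f} hours on a {gpu}")
```

That works out to roughly 50 T4-hours or about 7 A100-hours on Pro, and roughly 250 T4-hours or about 33 A100-hours on Pro+.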

Considering those:

  1. For hobbyists and (under)graduate coursework, you are better off using your own GPU if you have anything with more than 4 GB of VRAM and better than a 750 Ti, or at least buying Colab Pro so you can still reach a T4 even with no computing units remaining.
  2. For small research companies, non-trivial research at universities, and probably for most people, Colab is no longer a good option.
  3. Colab Pro+ is worth considering if you want Pro but won't be sitting at your computer, since Pro disconnects after 90 minutes of inactivity. That limitation can be worked around to some extent with scripts, so most of the time Pro+ is not worth the extra cost.

If you have anything more to add, please let me know so I can update this post. Thanks!

Conclusion:


In machine learning, precision and recall trade off against each other: pushing one up usually pushes the other down. There is no silver bullet for increasing either metric; which one matters more, and which method works best for boosting it, depends on your specific use case. In this blog post, we explored some methods for increasing either precision or recall; hopefully this gives you a starting point for improving your own models!

 

