What are some ways to increase precision or recall in machine learning?

In machine learning, recall is the ability of a model to find all of the relevant instances in the data, while precision is its ability to return only relevant instances. High recall means that most of the relevant results are retrieved; high precision means that most of the returned results are relevant. Ideally, you want a model with both high recall and high precision, but there is usually a trade-off between the two. In this blog post, we will explore some ways to increase recall or precision in machine learning.



The most direct way to increase recall is to reduce the number of false negatives.

Recall is TP / (TP + FN), the fraction of actual positives the model catches, so anything that turns missed positives into caught positives raises it. In practice, the simplest lever is to lower your threshold for what constitutes a positive prediction. For example, if you are trying to predict whether or not an email is spam, you might lower the score required to classify an email as spam so that more emails are flagged. More of the actual spam gets caught (fewer false negatives, so recall goes up), but more legitimate emails get flagged as well (more false positives), which hurts precision.


Raising the threshold has the opposite effect.

Going back to the spam email example, you might raise the bar for what constitutes spam so that fewer emails are flagged. That reduces false positives, but it also increases false negatives (more actual spam slips through), so recall goes down. Threshold tuning therefore trades recall against precision rather than improving both at once, as the sketch below illustrates.
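To make the trade-off concrete, here is a minimal sketch of threshold tuning with scikit-learn. The synthetic dataset, the logistic regression model, and the 0.3/0.5/0.7 thresholds are illustrative assumptions rather than anything prescribed in this post; the point is only that the same fitted model gives different precision and recall depending on where you cut its predicted probabilities.

```python
# Minimal sketch: how the decision threshold moves precision and recall.
# Dataset, model, and thresholds are illustrative choices, not a recipe.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # P(positive) for each example

for threshold in (0.3, 0.5, 0.7):
    pred = (proba >= threshold).astype(int)
    print(f"threshold={threshold:.1f}  "
          f"precision={precision_score(y_test, pred):.2f}  "
          f"recall={recall_score(y_test, pred):.2f}")
# Lower thresholds flag more examples: recall rises, precision falls.
# Higher thresholds flag fewer examples: precision rises, recall falls.
```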



The most direct way to increase precision is to reduce the number of false positives.

Precision is TP / (TP + FP), the fraction of predicted positives that are actually positive. Using the spam email example again, you can raise your threshold for what constitutes a positive prediction so that only high-confidence emails are classified as spam. The set of flagged emails then contains a larger share of genuine spam (fewer false positives, so precision goes up), but more actual spam goes unflagged (more false negatives), which lowers recall.

Lowering the threshold undoes that gain.

Going back to the spam example once more, lowering the bar for what constitutes spam flags more emails, and the extra flags are disproportionately legitimate messages. False positives go up and precision goes down, even though recall improves. As with recall, the threshold is a dial between the two metrics; to improve precision and recall at the same time you generally have to improve the model itself, for example with better features, more training data, or class reweighting (see the sketch below).
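Threshold tuning only moves errors between false positives and false negatives; reweighting the positive class during training is one common way to chase higher recall on an imbalanced problem without touching the threshold. Below is a minimal sketch assuming a scikit-learn workflow; the synthetic data and the "balanced" weighting are illustrative choices, not a recommendation from this post.

```python
# Minimal sketch: class weighting as a way to favor the rare positive class.
# The dataset and the 'balanced' weighting are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_train, y_train)

print("unweighted:\n", classification_report(y_test, plain.predict(X_test), digits=2))
print("class_weight='balanced':\n",
      classification_report(y_test, weighted.predict(X_test), digits=2))
# The weighted model typically trades some precision for noticeably higher
# recall on the minority (positive) class.
```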


To summarize,

there are a few levers for increasing precision or recall. The most common is adjusting the classification threshold: lowering it favors recall, raising it favors precision. Another option is to optimize a different evaluation metric; if you care about precision and recall together, you can target the F1 score, the harmonic mean of the two, rather than either metric alone. Finally, you can change the decision boundary more fundamentally by reweighting classes, improving features, or using a different algorithm altogether.
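As one illustration of combining these ideas, the threshold itself can be chosen to maximize F1 over the precision-recall curve instead of being left at the default 0.5. The sketch below assumes `y_test` and `proba` are the held-out labels and predicted positive-class probabilities from an already-fitted model, as in the earlier threshold example.

```python
# Sketch: pick the threshold that maximizes F1 instead of using 0.5.
# Assumes y_test (true labels) and proba (predicted P(positive)) already exist.
import numpy as np
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(y_test, proba)
f1 = 2 * precision * recall / (precision + recall + 1e-12)  # avoid divide-by-zero
best = np.argmax(f1[:-1])  # last curve point has no associated threshold
print(f"best threshold={thresholds[best]:.2f}  "
      f"precision={precision[best]:.2f}  recall={recall[best]:.2f}  f1={f1[best]:.2f}")
```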


Sensitivity vs Specificity

In machine learning, sensitivity and specificity are two measures of a model's performance. Sensitivity is the proportion of actual positives that the model correctly identifies (it is the same quantity as recall), while specificity is the proportion of actual negatives that the model correctly identifies.
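Here is a minimal sketch of computing both quantities from a 2x2 confusion matrix with scikit-learn; the label vectors are invented purely for illustration.

```python
# Sketch: sensitivity (recall) and specificity from a 2x2 confusion matrix.
# The example labels are invented purely for illustration.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate, same as recall
specificity = tn / (tn + fp)  # true negative rate
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```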

Google Colab For Machine Learning

State of the Google Colab for ML (October 2022)

Google introduced computing units, which you can purchase much as you would compute from AWS or Azure. Pro comes with 100 units per month and Pro+ with 500. The GPU or TPU you select and the high-RAM option determine how many units you consume per hour. If you have no computing units left, you cannot use the premium-tier GPUs (A100, V100), and even the P100 becomes impractical.

Google Colab Pro+ includes the premium-tier GPU option, while on Pro, as long as you have computing units, you are randomly assigned a P100 or a T4. Once your units run out, you can buy more, or fall back to a T4 for part of the time (there are often stretches of the day when you cannot get a T4, or any GPU at all). On the free tier, the GPUs on offer are usually the K80 and P4, which perform similarly to a GTX 750 Ti (an entry-level GPU from 2014) but with more VRAM.


For reference, a T4 consumes roughly 2 computing units per hour and an A100 roughly 15. The per-hour unit cost of a given GPU also seems to fluctuate over time for reasons that are not documented.
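Taking those rough rates at face value (they are the estimates quoted above, not official Google pricing), a quick back-of-the-envelope calculation shows roughly how far each tier's monthly allowance stretches:

```python
# Back-of-the-envelope GPU-hours per Colab tier, using the rough per-hour
# unit costs quoted above (2 units/h for T4, 15 units/h for A100).
# These rates are estimates from the post, not official Google pricing.
UNIT_COST_PER_HOUR = {"T4": 2, "A100": 15}
MONTHLY_UNITS = {"Pro": 100, "Pro+": 500}

for tier, units in MONTHLY_UNITS.items():
    for gpu, cost in UNIT_COST_PER_HOUR.items():
        print(f"{tier}: ~{units / cost:.0f} hours on a {gpu}")
# Pro:  ~50 h on a T4,  ~7 h on an A100
# Pro+: ~250 h on a T4, ~33 h on an A100
```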

Considering those:

"Become a Canada Expert: Ace the Citizenship Test and Impress Everyone with Your Knowledge of Canadian History, Geography, Government, Culture, People, Languages, Travel, Wildlife, Hockey, Tourism, Sceneries, Arts, and Data Visualization. Get the Top 1000 Canada Quiz Now!"


  1. For hobbyists and (under)graduate coursework, you are better off using your own GPU if you have something with more than 4 GB of VRAM and faster than a 750 Ti, or at least buying Pro so you can reach a T4 even with no computing units remaining.
  2. For small research companies, non-trivial research at universities, and probably for most people, Colab is now probably not a good option.
  3. Colab Pro+ can be considered if you want Pro but cannot sit in front of your computer, since Pro disconnects after 90 minutes of inactivity. That limitation can be worked around to some extent with scripts, so most of the time Pro+ is not a good option either.

If you have anything to add, please let me know and I will edit this post to include it. Thanks!

Conclusion:


In machine learning, precision and recall trade off against each other: increasing one often decreases the other. There is no silver bullet for boosting either metric; which one matters more, and which methods work best, depends on your specific use case. In this blog post we explored some methods for increasing precision or recall; hopefully this gives you a starting point for improving your own models!

 

