What Are the Best Machine Learning Algorithms for Imbalanced Datasets?
In machine learning, imbalanced datasets are those where one class heavily outnumbers the others. This can be due to the nature of the problem or simply because more data is available for one class than the others. Either way, imbalanced datasets can pose a challenge for machine learning algorithms. In this blog post, we’ll take a look at which machine learning algorithms are best suited for imbalanced datasets and why they tend to perform better than others.
For example, in a binary classification problem with 100 observations, if only 10 of them are positive (the rest are negative), then we say that the dataset is imbalanced. The ratio of positive to negative cases is 1:9.
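The imbalance ratio can be read straight off the labels. A minimal sketch in Python, using hypothetical labels matching the example above:

```python
from collections import Counter

# Hypothetical binary labels: 10 positives, 90 negatives
labels = [1] * 10 + [0] * 90

counts = Counter(labels)
ratio = counts[1] / counts[0]  # minority-to-majority ratio
print(counts)                  # Counter({0: 90, 1: 10})
print(f"ratio: {ratio:.2f}")   # 0.11, i.e. roughly 1:9
```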
There are a few reasons why some machine learning algorithms tend to perform better on imbalanced datasets than others. First, certain algorithms are designed to handle imbalanced datasets. Second, some algorithms are more robust to outliers, which can be more common in imbalanced datasets. And third, some algorithms are better able to learn from a limited amount of data, which can be an issue when one class is heavily outnumbered by the others.
Some of the best machine learning algorithms for imbalanced datasets include:
– Support Vector Machines (SVMs),
– Decision Trees,
– Random Forests,
– Naive Bayes Classifiers,
– k-Nearest Neighbors (kNN).
Of these, SVMs tend to be a popular choice because most implementations support class weighting, which penalizes mistakes on the minority class more heavily. SVMs work by finding a hyperplane that maximizes the margin between the two classes, which helps to reduce overfitting and improve generalization. Decision trees and random forests are also popular choices, as they are less sensitive to outliers than algorithms such as linear regression. Naive Bayes classifiers are another good option because they can learn from a limited amount of data. kNN is likewise fairly robust to outliers and works with limited data, though it can be computationally intensive for large datasets.
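A minimal sketch of the class-weighting idea with scikit-learn's SVM (assuming scikit-learn is available; the dataset here is synthetic, generated only to illustrate a roughly 9:1 imbalance):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic dataset with roughly a 9:1 class imbalance
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" scales the misclassification penalty inversely
# to class frequency, so errors on the rare positive class cost more
clf = SVC(class_weight="balanced").fit(X_tr, y_tr)
minority_recall = recall_score(y_te, clf.predict(X_te))
print(f"minority-class recall: {minority_recall:.2f}")
```

Without `class_weight="balanced"`, the same model tends to favor the majority class and miss many of the rare positives.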
There are two main types of machine learning algorithms: supervised and unsupervised. Supervised algorithms tend to perform better on imbalanced datasets than unsupervised algorithms. In this blog post, we will discuss why this is so and look at some examples.
Supervised Algorithms
Supervised algorithms are those where the target variable is known. In other words, we have training data where the correct answers are already given. The algorithm then learns from this data and is able to generalize to new data. Some examples of supervised algorithms are regression and classification.
Unsupervised Algorithms
Unsupervised algorithms are those where the target variable is not known. With unsupervised algorithms, we only have input data, without any corresponding output labels. The algorithm has to learn from the data itself without any guidance. Some examples of unsupervised algorithms are clustering and dimensionality reduction.
Why Supervised Algorithms Perform Better on Imbalanced Datasets
Supervised algorithms perform better on imbalanced datasets because the labels tell them which cases matter most. Unsupervised algorithms treat all data points equally, regardless of whether they belong to the minority or majority class.
For example, in a binary classification problem with an imbalanced dataset, let’s say that we want to predict whether a customer will default on their loan payment or not. We have a training dataset of 1000 customers, out of which only 100 (10%) have defaulted on their loan in the past.
If we use a supervised algorithm like logistic regression, the algorithm learns from the training data that defaulting on a loan is rare (only 10% of the training cases are positive). It will therefore tend to predict the majority class, correctly classifying most non-defaulters, and because the labels are available, the imbalance can also be corrected for, for example by weighting the minority class more heavily.
However, if we use an unsupervised algorithm like k-means clustering, all data points will be treated equally since there is no target variable to guide the algorithm. This means that it might incorrectly cluster together customers who have defaulted on their loans with those who haven’t since there is no guidance provided by a target variable.
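The supervised side of this contrast can be sketched with logistic regression on synthetic loan-like data (scikit-learn assumed; the features and the 10% default rate are illustrative, not a real loan dataset):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the loan data: ~10% of customers default (class 1)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# An unweighted model leans toward the majority "no default" class;
# class weighting uses the labels to correct for the imbalance
plain = LogisticRegression().fit(X_tr, y_tr)
weighted = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

for name, model in [("plain", plain), ("weighted", weighted)]:
    r = recall_score(y_te, model.predict(X_te))
    print(f"{name} default recall: {r:.2f}")
```

An unsupervised method like k-means has no access to the labels at all, so no analogous correction is possible.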
Conclusion:
In conclusion, supervised machine learning algorithms tend to perform better on imbalanced datasets than unsupervised machine learning algorithms because they can learn from the training data which cases are more important.
Some machine learning algorithms tend to perform better on highly imbalanced datasets because they are designed to deal with imbalance or because they can learn from both classes simultaneously. If you are working with a highly imbalanced dataset, then you should consider using one of these algorithms.
Thanks for reading!
How are machine learning techniques being used to address unstructured data challenges?
Machine learning techniques are being used to address unstructured data challenges in a number of ways:
- Natural language processing (NLP): NLP algorithms can be used to extract meaningful information from unstructured text data, such as emails, documents, and social media posts. NLP algorithms can be trained to classify text data, identify key terms and concepts, and extract structured data from unstructured text.
- Image recognition: Machine learning algorithms can be used to analyze and classify images, enabling the automatic identification and classification of objects, people, and other elements in images. This can be useful for tasks such as image tagging and search, as well as for applications such as security and surveillance.
- Audio and speech recognition: Machine learning algorithms can be used to analyze and classify audio data, enabling the automatic transcription and translation of spoken language. This can be useful for tasks such as speech-to-text transcription, as well as for applications such as call center automation and language translation.
- Video analysis: Machine learning algorithms can be used to analyze and classify video data, enabling the automatic detection and classification of objects, people, and other elements in video. This can be useful for tasks such as video tagging and search, as well as for applications such as security and surveillance.
Overall, machine learning techniques are being used in a wide range of applications to extract meaningful information from unstructured data, and to enable the automatic classification and analysis of data in a variety of formats.
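A minimal sketch of the text-classification case with scikit-learn's bag-of-words pipeline; the snippets and labels below are hypothetical, made up only to show the shape of the task:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets: 1 = complaint, 0 = praise
texts = [
    "the product broke after one day", "terrible support, very slow reply",
    "refund never arrived, awful experience", "broken on arrival, bad quality",
    "great service and fast shipping", "love this, works perfectly",
    "excellent quality, highly recommend", "fantastic support team, thanks",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Turn unstructured text into TF-IDF features, then classify
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(clf.predict(["support was slow and the product broke"]))
```

Real systems would train on far more data, but the pipeline shape, vectorize then classify, is the same.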
How is AI and machine learning impacting application development today?
Artificial intelligence (AI) and machine learning are having a significant impact on application development today in a number of ways:
- Enabling new capabilities: AI and machine learning algorithms can be used to enable applications to perform tasks that would be difficult or impossible for humans to do. For example, AI-powered applications can be used to analyze and classify large amounts of data, or to automate complex decision-making processes.
- Improving performance: AI and machine learning algorithms can be used to optimize the performance of applications, making them faster, more efficient, and more accurate. For example, machine learning algorithms can be used to improve the accuracy of predictive models, or to optimize the performance of search algorithms.
- Streamlining development: AI and machine learning algorithms can be used to automate various aspects of application development, such as testing, debugging, and deployment. This can help to streamline the development process and reduce the time and resources needed to build and maintain applications.
- Enhancing user experiences: AI and machine learning algorithms can be used to enhance the user experience of applications by providing personalized recommendations, or by enabling applications to anticipate and respond to the needs and preferences of users.
Overall, AI and machine learning are having a significant impact on application development today, and they are likely to continue to shape the way applications are built and used in the future.
How will advancements in artificial intelligence and machine learning shape the future of work and society?
Advancements in artificial intelligence (AI) and machine learning are likely to shape the future of work and society in a number of ways. Some potential impacts include:
- Automation: AI and machine learning algorithms can be used to automate tasks that are currently performed by humans, such as data entry, customer service, and manufacturing. This could lead to changes in the types of jobs that are available and the skills that are in demand, as well as to increased productivity and efficiency.
- Job displacement: While automation may create new job opportunities, it could also lead to job displacement, particularly for workers in industries that are more susceptible to automation. This could lead to social and economic challenges, including unemployment and income inequality.
- Increased efficiency: AI and machine learning algorithms can be used to optimize and streamline business processes, leading to increased efficiency and productivity. This could lead to economic growth and innovation, and could also help to reduce costs for businesses and consumers.
- Enhanced decision-making: AI and machine learning algorithms can be used to analyze large amounts of data and make more informed and accurate decisions. This could lead to improved outcomes in fields such as healthcare, finance, and education, and could also help to reduce bias and improve fairness.
Overall, the impact of AI and machine learning on the future of work and society is likely to be significant and complex, with both potential benefits and challenges. It will be important to consider and address these impacts as these technologies continue to advance and become more widely adopted.
- [D] Exploring Complex Number Representations for Word Vectors: A New Approachby /u/_mayuk (Machine Learning) on April 25, 2024 at 3:21 am
Word embeddings like Word2Vec and GloVe have revolutionized natural language processing, offering compact and dense representations of word meanings. However, these embeddings typically represent words as real-valued vectors, potentially limiting their ability to capture complex semantic relationships. In this proposal, we explore an alternative approach: representing word vectors as complex numbers. We propose converting Word2Vec or GloVe vectors into complex numbers, where the real part captures magnitude and the imaginary part encodes additional semantic information. For instance, consider the word vector Vecword=[0.2,−0.3,0.5,0.1,−0.2]. We can convert this vector into a complex number zz as follows: z=Vecword[0]+i×Vecword[1] Here, ii is the imaginary unit. The real part of the complex number represents the magnitude of the word's meaning (0.2), while the imaginary part (-0.3i) captures additional semantic nuances. This approach offers several potential advantages: Enhanced Semantic Representation: Complex numbers can capture both magnitude and phase, allowing for richer semantic representations compared to real-valued vectors. Contextual Information: By encoding semantic information in the imaginary part, we can capture contextual nuances that may be missed by traditional embeddings. Compatibility: Complex number representations can be seamlessly integrated into existing models and frameworks, offering a straightforward extension to current NLP pipelines. Exploring complex number representations for word vectors presents an exciting avenue for enhancing semantic understanding in natural language processing tasks. By leveraging the unique properties of complex numbers, we can potentially unlock deeper insights into the structure and meaning of language. This proposal aims to spark further research and experimentation in this promising direction. Join us as we delve into the fascinating world of complex semantics! submitted by /u/_mayuk [link] [comments]
- [R] French GEC datasetby /u/R-e-v-e-r-i-e- (Machine Learning) on April 25, 2024 at 12:14 am
Hi, does anyone know of a French L2 GEC dataset (that was published at a conference)? submitted by /u/R-e-v-e-r-i-e- [link] [comments]
- [D] tutorial on how to build streaming ML applicationsby /u/clementruhm (Machine Learning) on April 24, 2024 at 10:16 pm
My primary expertise is audio processing, but i believe this task happens in other domains too: running a model on chunks of infinitely long input. while for some architectures it is straightforward, it can get tedious for convolutional nets. I put together a comprehensive tutorial how to build a streaming ML applications: https://balacoon.com/blog/streaming_inference/. would be curious to learn wether its a common problem and how do people usually deal with it. because generally resources on the topic are surprisingly scarce. submitted by /u/clementruhm [link] [comments]
- [D] Why is R^2 so crazy?by /u/Cloverdover1 (Machine Learning) on April 24, 2024 at 9:40 pm
https://preview.redd.it/jpiyt4b9yhwc1.png?width=1165&format=png&auto=webp&s=95d80f8f9c9241d722717ad25215be4077d541ca Based on the MSE looks good right? But why is my R^2 starting off so negative and approaching 0? Could it be a bug in how i am calculating it? This happened after i min maxed the labels before training. This is an LSTM that is predicting runs scored for baseball games. submitted by /u/Cloverdover1 [link] [comments]
- Recall Score Increase [D]by /u/Legal_Hearing555 (Machine Learning) on April 24, 2024 at 5:38 pm
Hello Everyone, I am trying to do a small fraud detection project and i have so imbalanced dataset. I used randomundersampling because minority class is pretty small and i also tried smote or combining with smote best recall score i got, was with only randomundersampling(0.95). I thought GridsearchCV to increase it but instead of increasing, it is decreasing although i tried to make it to focus on recall score. Why this is happening? submitted by /u/Legal_Hearing555 [link] [comments]
- [D] Preserving spatial distribution of data during data splittingby /u/dr_greg_mouse (Machine Learning) on April 24, 2024 at 5:14 pm
Hello, I am trying to model nitrate concentrations in the streams in Bavaria in Germany using Random Forest model. I am using Python and primarily sklearn for the same. I have data from 490 water quality stations. I am following the methodology in the paper from LongzhuQ.Shen et al which can be found here: https://www.nature.com/articles/s41597-020-0478-7 I want to split my dataset into training and testing set such that the spatial distribution of data in both sets is identical. The idea is that if data splitting ignores the spatial distribution, there is a risk that the training set might end up with a concentration of points from densely populated areas, leaving out sparser areas. This can skew the model's learning process, making it less accurate or generalizable across the entire area of interest. sklearn train_test_split just randomly divides the data into training and testing sets and it does not consider the spatial patterns in the data. The paper I mentioned above follows this methodology: "We split the full dataset into two sub-datasets, training and testing respectively. To consider the heterogeneity of the spatial distribution of the gauge stations, we employed the spatial density estimation technique in the data splitting step by building a density surface using Gaussian kernels with a bandwidth of 50 km (using v.kernel available in GRASS GIS33) for each species and season. The pixel values of the resultant density surface were used as weighting factors to split the data into training and testing subsets that possess identical spatial distributions." I want to follow the same methodology but instead of using grass GIS, I am just building the density surface myself in Python. I have also extracted the probability density values and the weights for the stations. (attached figure) Now the only problem I am facing is how do I use these weights to split the data into training and testing sets? 
I checked there is no keyword in the sklearn train_test_split function that can consider the weights. I also went back and forth with chat GPT 4 but it is also not able to give me a clear answer. Neither did I find anything concrete on the internet about this. Maybe I am missing something. Is there any other function I can use to do this? Or will I have to write my own algorithm to do the splitting? In case of the latter, can you please suggest me the approach so I can code it myself? In the attached figure you can see the location of the stations and the probability density surface generated using the kernel density estimation method (using Gaussian kernels). Also attaching a screenshot of my dataframe to give you some idea of the data structure. (all columns after longitude ('lon') column are used as features. the NO3 column is used as the target variable.) I will be grateful for any answers. Probability density surface generated using the kernel density estimation method with gaussian kernels. the dataset I am using to model the nitrate concentrations submitted by /u/dr_greg_mouse [link] [comments]
- [N] Snowflake releases open (Apache 2.0) 128x3B MoE modelby /u/topcodemangler (Machine Learning) on April 24, 2024 at 4:45 pm
Links: https://www.snowflake.com/blog/arctic-open-efficient-foundation-language-models-snowflake/ https://replicate.com/snowflake/snowflake-arctic-instruct submitted by /u/topcodemangler [link] [comments]
- [D] Why would such a simple sentence break an LLM?by /u/michael-relleum (Machine Learning) on April 24, 2024 at 3:59 pm
This is a prompt I entered into MS Copilot (GPT4 Turbo). It's in german but it just means "Would there be any disadvantages if I took the full bath first?"), so this can't be another SolidGoldMagikarp or similar, because the words clearly were in both tokenizer and training vocab. Why would such a simple sentence cause this? Any guesses? (also tried with Claude Opus and LLama 3 70b, which worked fine) https://preview.redd.it/9x6mva7b6gwc1.png?width=1129&format=png&auto=webp&s=bb6ac52d1c52d981161e8a864c5d1dd3794ca392 submitted by /u/michael-relleum [link] [comments]
- [R] Speaker diarizationby /u/anuragrawall (Machine Learning) on April 24, 2024 at 3:01 pm
Hi All, I am working on a project where I want to create speaker-aware transcripts from audios/videos, preferably using open-source solutions. I have tried so many approaches but nothing seems to work good enough out of the box. I have tried: whisperX: https://github.com/m-bain/whisperX (uses pyannote) whisper-diarization: https://github.com/MahmoudAshraf97/whisper-diarization (uses Nemo) AWS Transcribe AssemblyAI API Picovoice API I'll need to dig deeper and understand what's causing the incorrect diarization but I am looking for suggestions to improve speaker diarization. Please reach out if you have worked in this area and have had any success. Thanks! submitted by /u/anuragrawall [link] [comments]
- [R] I made an app to predict ICML paper acceptance from reviewsby /u/Lavishness-Mission (Machine Learning) on April 24, 2024 at 12:23 pm
https://www.norange.io/projects/paper_scorer/ A couple of years ago, u/programmerChilli analyzed ICLR 2019 reviews data and trained a model that rather accurately predicted acceptance results for NeurIPS. I've decided to continue this analysis and trained a model (total ~6000 parameters) on newer NeurIPS reviews, which has twice as many reviews compared to ICLR 2019. Additionally, review scores system for NeurIPS has changed since 2019, and here is what I've learned: 1) Both conferences consistently reject nearly all submissions scoring <5 and accept those scoring >6. The most common score among accepted papers is 6. An average rating around 5.3 typically results in decisions that could go either way for both ICML and NeurIPS, suggesting that ~5.3 might be considered a soft threshold for acceptance. 2) Confidence scores are less impactful for borderline ratings such as 4 (borderline reject), 5 (borderline accept), and 6 (weak accept), but they can significantly affect the outcome for stronger reject or accept cases. For instance, with ratings of [3, 5, 6] and confidences of [*, 4, 4], changing the "Reject" confidence from 5 to 1 shifts the probabilities from 26.2% - 31.3% - 52.4% - 54.5% - 60.4%, indicating that lower confidence in this case increases your chances. Conversely, for ratings [3, 5, 7] with confidences [4, 4, 4], the acceptance probability is 31.3%, but it drops to 28.1% when the confidence changes to [4, 4, 5]. Although it might seem counterintuitive, a confidence score of 5 actually decreases your chances. One possible explanation is that many low-quality reviews rated 5 are often discounted by the Area Chairs (ACs). Hope this will be useful, and thanks to u/programmerChilli for the inspiration! I also discussed this topic in a series of tweets. submitted by /u/Lavishness-Mission [link] [comments]
- [R] SpaceByte: Towards Deleting Tokenization from Large Language Modeling - Rice University 2024 - Practically the same performance as subword tokenizers without their many downsides!by /u/Singularian2501 (Machine Learning) on April 24, 2024 at 11:42 am
Paper: https://arxiv.org/abs/2404.14408 Github: https://github.com/kjslag/spacebyte Abstract: Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures.Paper: https://arxiv.org/abs/2404.14408Github: https://github.com/kjslag/spacebyteAbstract:Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in the middle of the layers. 
We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures. https://preview.redd.it/v1xo6g1gzewc1.jpg?width=1507&format=pjpg&auto=webp&s=f9d415307b60639fa67e8a54c8769fa5a6c10f04 https://preview.redd.it/edvqos1gzewc1.jpg?width=1654&format=pjpg&auto=webp&s=f91c8727017e1a1bc7b80bb77a8627ff99182607 https://preview.redd.it/fe6z6i1gzewc1.jpg?width=1181&format=pjpg&auto=webp&s=24d955f30b8ca3eaa7c527f3f40545ed493f789c submitted by /u/Singularian2501 [link] [comments]
- [D] Keeping track of models and their associated metadata.by /u/ClearlyCylindrical (Machine Learning) on April 24, 2024 at 10:20 am
I am starting to accumulate a large number of models for a project I am working on, many of these models are old which I am keeping for archival sake, and many are fine tuned from other models. I am wondering if there is an industry standard way of dealing with this, in particular I am looking for the following: Information about parameters used to train the model Datasets used to train the model Other metadata about the model (i.e. what objects an object detection model trained for) Model performance Model lineage (What model was it fine tuned from) Model progression (Is this model a direct upgrade from some other model, such as being fine tuned from the same model but using better hyper parameters) Model source (Not sure about this, but I'm thinking some way of linking the model to the python script which was used to train it. Not crucial but something like this would be nice) Are there any tools of services which could help be achieve some of this functionality? Also, if this is not the sub for this question could I get some pointers in the correct direction. Thanks! submitted by /u/ClearlyCylindrical [link] [comments]
- [D] Deploy the fine-tuned Mistral 7B model using the Hugging Face libraryby /u/Future-Outcome3167 (Machine Learning) on April 24, 2024 at 9:31 am
I followed the tutorial provided at https://www.datacamp.com/tutorial/mistral-7b-tutorial and now seek methods to deploy the model for faster inference using Hugging Face and Gradio. Could anyone please share a guide notebook or article for reference? Any help would be appreciated. submitted by /u/Future-Outcome3167 [link] [comments]
- [D] Transkribus vs Tesseract for Handwritten Text Recognition (HTR)by /u/Pretty_Instance4483 (Machine Learning) on April 24, 2024 at 6:15 am
I am looking for a HTR tool with the best accuracy and preferably not pricy (obviously). From my research, it seems that Transkribus was the most mentioned platform with good reviews. As I would need to convert images to text regularly I would need to pay the subscription. So I am wondering if I could use the Tesseract and/or TensorFlow Python library to achieve the same result for free. Would using Tesseract/TensorFlow be less accurate rather than using Transkribus? I learned only the basics of Machine Learning (TensorFlow, scikit-learn, keras), so I might have not enough knowledge to see the difference between the two solutions. Or is training Tesseract/TensorFlow would be challenging? submitted by /u/Pretty_Instance4483 [link] [comments]
- [D] How researcher think of inductive bias when thinking of creating new/improving foundational models?by /u/binny_sarita (Machine Learning) on April 24, 2024 at 2:36 am
I am undergradute student learning machine learning. What I got to know while reading few papers that we try to reduce search space by imposing inductive bias in machine learning models. And the success in creating useful models comes when inductive bias matches with the underlying data. In heriarchical models like NVAE how they instilled inductive bias by specifing the way data gets computed? (I thinks it's called algorithmic bias, not sure though) But how people think such inductive bias will be helpful, what is step by step procedure they go through to insist such inductive bias. I took a lot of class in machine learning and statistics but didn't got any lectures explaing such stuff. Did I missed any course/lecture? Please provide my with papers/lectures/talks related to it if possible Thankyou submitted by /u/binny_sarita [link] [comments]
- [R] Generalized Contrastive Learning for Multi-Modal Retrieval and Rankingby /u/Jesse_marqo (Machine Learning) on April 23, 2024 at 11:07 pm
Generalization of the popular training method of CLIP to be better suited for search and recommendations. Paper: https://arxiv.org/pdf/2404.08535.pdf Github: https://github.com/marqo-ai/GCL Generalises CLIP: Use any number of text and/or images to represent documents. Better text understanding by having both inter- and intra-modal losses. Can encode rank/importance/relevance, a.k.a “rank-tune”. Works with pretrained, text, CLIP models. Can learn uni- or multi-vector representations for documents. Works with binary and Matryoshka methods. Open source 10M row multi-modal dataset with 100k queries and ~5M products. Why? The prevailing methods for training embedding models are largely disconnected from the end use-case (like search), the vector database, the requirements of users, and a lack of representative datasets for development and evaluation, particularly when multiple modalities and ranking is involved. Limitations of current embedding models for vector search Although vector search is very powerful and enables searching across just about any data, the current methods have some limitations. The prevailing methods for training embedding models are largely disconnected from the end use-case (like search), the vector database, and the requirements of users. This means that a lot of the potential of vector search is being unmet. Some of the current challenges are described below. Restricted to using a single piece of information to represent a document Current models encode and represent one piece of information with one vector. The reality is that often there are multiple pieces of pertinent information for a document that may span multiple modalities. For example, in product search there may be a title, description, reviews, and multiple images, each with its own caption. GCL generalises embedding model training to use as many pieces of information as is desired. 
No notion of rank when dealing with degenerate queries When there are degenerate queries - multiple results that satisfy some criteria of relevance - the ordering of the results is only ever learned indirectly from the many binary relationships. In reality, the ordering of results matters, even for first stage retrieval. GCL allows for the magnitude of query-document specific relevance to be encoded in the embeddings and improves ranking of candidate documents. Poor text understanding when using CLIP like methods For multi-modal models like CLIP, these are trained to only work from image to text (and vice versa). The text-text understanding is not as good as text only models due to the text-text relationships being learned indirectly through images. For many applications, having both inter- and intra-modality understanding is required. GCL allows for any combination of inter- and intra-modal understanding by directly optimizing for this. Lack of representative datasets to develop methods for vector search In developing GCL, it became apparent there was a disconnect with publicly available datasets for embedding model training and evaluation for real-world use cases. Existing benchmarks are typically text only or inter-modal only and focus on the 1-1 query-result paradigm. Additionally, existing datasets have limited notions of relevance, with the majority encoding it as a binary relationship while several use (up-to) a handful of discrete categorizations often on the test set only. This differs from a typical real-world use cases where relevance can be both hard binary relationships or come from continuous variables. To help with this we compiled a dataset of 10M (ranked) product-query pairs, across ~100k queries, nearly 5M products, and four evaluation splits (available here). submitted by /u/Jesse_marqo [link] [comments]
- [D] Practical uses of AI inside companies by /u/CJSF (Machine Learning) on April 23, 2024 at 10:25 pm
How are people using AI inside companies (startups to FAANG) to improve operations and processes? There is so much talk about leveraging LLMs and GenAI, but I'm struggling to find concrete examples of success beyond what comes up in a Google search. The following areas come to mind first, though this list isn't exhaustive: design (and handoff), engineering, customer support, sales, documentation, marketing. What's worked or shown promise? What hasn't worked? submitted by /u/CJSF [link] [comments]
- Meta does everything OpenAI should be [D] by /u/ReputationMindless32 (Machine Learning) on April 23, 2024 at 10:03 pm
I'm surprised (or maybe not) to say this, but Meta (or Facebook) democratises AI/ML much more than OpenAI, which was originally founded and primarily funded for that very purpose. OpenAI has largely become a commercial, for-profit project. The Llama models don't yet reach GPT-4's capabilities for me, but I believe it's only a matter of time. What do you guys think about this? submitted by /u/ReputationMindless32 [link] [comments]
- [D] Speech to Text Word Level Timestamps Accuracy Issue by /u/Mindless-Ordinary485 (Machine Learning) on April 23, 2024 at 7:18 pm
I've had a lot of success with Whisper when it comes to transcriptions, but word-level timestamps seem to be slightly inaccurate. From my understanding, "Whisper cannot provide reliable word timestamps, because END-TO-END models like Transformer using cross-entropy training criterion are not designed for reliably estimating word timestamps" (https://www.youtube.com/watch?v=H576iCWt1Co&t=192s). For my use case, I need precise word-level timestamps, because I'm doing audio insertion after specific words. This becomes problematic when I do an insertion and the back part of a word ends up on the other side. Example: given an original audio file with speech that has been transcribed, if I want to insert a clip at the end of the word "France", and according to the timestamp "France" starts at 19.26 and ends at 19.85, I will insert the clip at 19.85. However, if the actual end of "France" is at 19.92, then when I insert the clip at 19.85, I will hear the remaining tail of "France", likely "ce" (0.07 s), after it. I'm curious if anyone has been posed with a similar problem and what they did to get around it? I've experimented with a few open-source variations of Whisper, but I'm still running into the issue. submitted by /u/Mindless-Ordinary485 [link] [comments]
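One common workaround for the clipped-tail problem is to treat Whisper's word-end timestamp as approximate and scan forward in the raw audio for the first low-energy (silent) frame before inserting. Below is a minimal energy-based sketch in NumPy; the frame size, search window, and silence threshold are illustrative guesses, not tuned values, and real speech may need smarter voice-activity detection.

```python
import numpy as np

def refine_word_end(audio, sr, rough_end_s, search_s=0.25,
                    frame_ms=10, silence_ratio=0.1):
    """Scan forward from a rough word-end timestamp (seconds) and return
    the start of the first low-energy frame, so an insertion lands in the
    gap after the word rather than clipping its tail."""
    frame = int(sr * frame_ms / 1000)
    start = int(rough_end_s * sr)
    stop = min(len(audio), start + int(search_s * sr))
    window = audio[start:stop]
    if len(window) < frame:
        return rough_end_s
    # Frame-level RMS energy over the search window
    n = len(window) // frame
    frames = window[:n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    threshold = silence_ratio * max(rms.max(), 1e-12)
    quiet = np.nonzero(rms < threshold)[0]
    if len(quiet) == 0:
        return stop / sr  # no silence found: fall back to window end
    return (start + quiet[0] * frame) / sr
```

For the "France" example, this would nudge the insertion point from 19.85 toward the true end near 19.92, provided a silent gap actually follows the word.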
- [R] Wu's Method can Boost Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry by /u/SeawaterFlows (Machine Learning) on April 23, 2024 at 7:11 pm
Paper: https://arxiv.org/abs/2404.06405
Code: https://huggingface.co/datasets/bethgelab/simplegeometry

Abstract: Proving geometric theorems constitutes a hallmark of visual reasoning combining both intuitive and logical skills. Therefore, automated theorem proving of Olympiad-level geometry problems is considered a notable milestone in human-level automated reasoning. The introduction of AlphaGeometry, a neuro-symbolic model trained with 100 million synthetic samples, marked a major breakthrough. It solved 25 of 30 International Mathematical Olympiad (IMO) problems whereas the reported baseline based on Wu's method solved only ten. In this note, we revisit the IMO-AG-30 Challenge introduced with AlphaGeometry, and find that Wu's method is surprisingly strong. Wu's method alone can solve 15 problems, some of which are not solved by any of the other methods. This leads to two key findings: (i) Combining Wu's method with the classic synthetic methods of deductive databases and angle, ratio, and distance chasing solves 21 out of 30 problems using just a CPU-only laptop with a time limit of 5 minutes per problem. Essentially, this classic method solves just 4 problems fewer than AlphaGeometry and establishes the first fully symbolic baseline strong enough to rival the performance of an IMO silver medalist. (ii) Wu's method even solves 2 of the 5 problems that AlphaGeometry failed to solve. Thus, by combining AlphaGeometry with Wu's method we set a new state of the art for automated theorem proving on IMO-AG-30, solving 27 out of 30 problems, the first AI method which outperforms an IMO gold medalist. submitted by /u/SeawaterFlows [link] [comments]
List of Freely available programming books - What is the single most influential book every programmer should read?
- Bjarne Stroustrup - The C++ Programming Language
- Brian W. Kernighan, Rob Pike - The Practice of Programming
- Donald Knuth - The Art of Computer Programming
- Ellen Ullman - Close to the Machine
- Ellis Horowitz - Fundamentals of Computer Algorithms
- Eric Raymond - The Art of Unix Programming
- Gerald M. Weinberg - The Psychology of Computer Programming
- James Gosling - The Java Programming Language
- Joel Spolsky - The Best Software Writing I
- Keith Curtis - After the Software Wars
- Richard M. Stallman - Free Software, Free Society
- Richard P. Gabriel - Patterns of Software
- Richard P. Gabriel - Innovation Happens Elsewhere
- Code Complete (2nd edition) by Steve McConnell
- The Pragmatic Programmer
- Structure and Interpretation of Computer Programs
- The C Programming Language by Kernighan and Ritchie
- Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
- Design Patterns by the Gang of Four
- Refactoring: Improving the Design of Existing Code
- The Mythical Man Month
- The Art of Computer Programming by Donald Knuth
- Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
- Gödel, Escher, Bach by Douglas Hofstadter
- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
- Effective C++
- More Effective C++
- CODE by Charles Petzold
- Programming Pearls by Jon Bentley
- Working Effectively with Legacy Code by Michael C. Feathers
- Peopleware by DeMarco and Lister
- Coders at Work by Peter Seibel
- Surely You're Joking, Mr. Feynman!
- Effective Java 2nd edition
- Patterns of Enterprise Application Architecture by Martin Fowler
- The Little Schemer
- The Seasoned Schemer
- Why's (Poignant) Guide to Ruby
- The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
- Test-Driven Development: By Example by Kent Beck
- Practices of an Agile Developer
- Don't Make Me Think
- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
- Domain-Driven Design by Eric Evans
- The Design of Everyday Things by Donald Norman
- Modern C++ Design by Andrei Alexandrescu
- Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
- Software Estimation: Demystifying the Black Art by Steve McConnell
- The Passionate Programmer (My Job Went To India) by Chad Fowler
- Hackers: Heroes of the Computer Revolution
- Algorithms + Data Structures = Programs
- Writing Solid Code by Steve Maguire
- JavaScript - The Good Parts
- Getting Real by 37 Signals
- Foundations of Programming by Karl Seguin
- Computer Graphics: Principles and Practice in C (2nd Edition)
- Thinking in Java by Bruce Eckel
- The Elements of Computing Systems
- Refactoring to Patterns by Joshua Kerievsky
- Modern Operating Systems by Andrew S. Tanenbaum
- The Annotated Turing
- Things That Make Us Smart by Donald Norman
- The Timeless Way of Building by Christopher Alexander
- The Deadline: A Novel About Project Management by Tom DeMarco
- The C++ Programming Language (3rd edition) by Stroustrup
- Computer Systems - A Programmer's Perspective
- Agile Principles, Patterns, and Practices in C# by Robert C. Martin
- Growing Object-Oriented Software, Guided by Tests
- Framework Design Guidelines by Brad Abrams
- Object Thinking by Dr. David West
- Advanced Programming in the UNIX Environment by W. Richard Stevens
- Hackers and Painters: Big Ideas from the Computer Age
- The Soul of a New Machine by Tracy Kidder
- CLR via C# by Jeffrey Richter
- Design Patterns in C# by Steve Metsker
- Alice in Wonderland by Lewis Carroll
- Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
- About Face - The Essentials of Interaction Design
- Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
- The Tao of Programming
- Computational Beauty of Nature
- Philip and Alex's Guide to Web Publishing
- Object-Oriented Analysis and Design with Applications by Grady Booch
- Effective Java by Joshua Bloch
- Computability by N. J. Cutland
- Masterminds of Programming
- The Tao Te Ching
- The Productive Programmer
- The Art of Deception by Kevin Mitnick
- The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
- Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
- Masters of Doom
- Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
- How To Solve It by George Polya
- The Alchemist by Paulo Coelho
- Smalltalk-80: The Language and its Implementation
- Writing Secure Code (2nd Edition) by Michael Howard
- Introduction to Functional Programming by Philip Wadler and Richard Bird
- No Bugs! by David Thielen
- Rework by Jason Fried and DHH
- JUnit in Action