What Are the Best Machine Learning Algorithms for Imbalanced Datasets?
In machine learning, imbalanced datasets are those where one class heavily outnumbers the others. This can be due to the nature of the problem or simply because more data is available for one class than the others. Either way, imbalanced datasets can pose a challenge for machine learning algorithms. In this blog post, we’ll take a look at which machine learning algorithms are best suited for imbalanced datasets and why they tend to perform better than others.
For example, in a binary classification problem with 100 observations of which only 10 are positive (the remaining 90 are negative), we say the dataset is imbalanced: the ratio of positive to negative cases is 1:9.
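To make this concrete, here is a minimal sketch in plain Python (standard library only; the label array is synthetic, mirroring the 10-positive, 90-negative example above) of checking class balance:

```python
# Minimal sketch: counting classes to check for imbalance.
from collections import Counter

y = [1] * 10 + [0] * 90          # 10 positives, 90 negatives, as above
counts = Counter(y)
print(counts)                    # Counter({0: 90, 1: 10})
print(f"positive:negative ratio = 1:{counts[0] // counts[1]}")  # 1:9
```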
There are a few reasons why some machine learning algorithms tend to perform better on imbalanced datasets than others. First, certain algorithms are designed to handle imbalanced datasets. Second, some algorithms are more robust to outliers, which can be more common in imbalanced datasets. And third, some algorithms are better able to learn from a limited amount of data, which can be an issue when one class is heavily outnumbered by the others.
Some of the best machine learning algorithms for imbalanced datasets include:
– Support Vector Machines (SVMs),
– Decision Trees,
– Random Forests,
– Naive Bayes Classifiers,
– k-Nearest Neighbors (kNN).
Of these, SVMs tend to be the most popular choice, not because they are inherently built for imbalance, but because the decision boundary depends only on the support vectors rather than on overall class proportions, and class weights can be used to penalize mistakes on the minority class more heavily (see the sketch below). SVMs work by finding a hyperplane that maximizes the margin between the two classes, which helps reduce overfitting and improve generalization. Decision trees and random forests are also popular choices, as they are less sensitive to outliers than algorithms such as linear regression. Naive Bayes classifiers are another good option because they can learn from a limited amount of data, and kNN shares both properties: it is robust to outliers and works with few examples, though it can be computationally intensive for large datasets.
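To illustrate the class-weighting point, here is a hedged sketch assuming scikit-learn; the synthetic dataset, its roughly 1:9 class mix, and all parameters are illustrative choices, not prescribed by the original post:

```python
# Sketch: an SVM trained with and without balanced class weights
# on a synthetic dataset that is roughly 90% negative, 10% positive.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weight in (None, "balanced"):
    clf = SVC(class_weight=weight).fit(X_tr, y_tr)
    print(f"class_weight={weight}:")
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```

With class_weight="balanced", errors on the rare class are penalized in inverse proportion to its frequency, which typically trades a little majority-class precision for noticeably better minority-class recall.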
Machine learning algorithms fall into two main families: supervised and unsupervised. Supervised algorithms tend to perform better on imbalanced datasets than unsupervised ones; the sections below explain why, with examples.
Supervised Algorithms
Supervised algorithms are those where the target variable is known: we have training data in which the correct answers are already given. The algorithm learns from this data and generalizes to new data. Regression and classification are the classic supervised learning tasks.
Unsupervised Algorithms
Unsupervised algorithms are those where the target variable is not known: we only have input data, without any corresponding output labels, and the algorithm has to learn from the data itself without guidance. Clustering and dimensionality reduction are the classic unsupervised learning tasks.
Why Supervised Algorithms Perform Better on Imbalanced Datasets
Supervised algorithms perform better on imbalanced datasets because the labels in the training data tell them which cases matter. Unsupervised algorithms treat all data points equally, regardless of whether they belong to the minority or the majority class.
For example, in a binary classification problem with an imbalanced dataset, let’s say that we want to predict whether a customer will default on their loan payment or not. We have a training dataset of 1000 customers, out of which only 100 (10%) have defaulted on their loan in the past.
If we use a supervised algorithm like logistic regression, the model learns from the labels that defaulting is rare (only 10% of training cases are positive), so its predicted probabilities reflect that base rate, and by default it will favor the majority class. Because the labels guide training, however, we can also correct for the imbalance directly, for example by weighting the minority class more heavily or by lowering the decision threshold, as sketched below.
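A minimal sketch of this idea, assuming scikit-learn and using a synthetic stand-in for the loan data (the 10% default rate matches the example; the threshold values and everything else are illustrative):

```python
# Sketch: logistic regression on a ~10%-positive dataset, then lowering
# the decision threshold to recover more of the rare "default" class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]   # estimated probability of default

for threshold in (0.5, 0.2):              # 0.5 is the conventional cut-off
    preds = (proba >= threshold).astype(int)
    print(f"threshold={threshold}: default-class recall = "
          f"{recall_score(y_te, preds):.2f}")
```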
However, if we use an unsupervised algorithm like k-means clustering, all data points are treated equally, because there is no target variable to guide the algorithm. It might therefore group customers who have defaulted on their loans together with those who haven't, as the sketch below illustrates.
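By contrast, a clustering sketch (same assumptions, same synthetic data recipe) never sees the labels, so nothing steers it toward separating defaulters from non-defaulters:

```python
# Sketch: k-means on the same kind of imbalanced data, with no labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import adjusted_rand_score

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)

# An adjusted Rand index near 0 means the clusters are essentially
# unrelated to the true classes; k-means never saw y.
print("agreement with true labels (ARI):", adjusted_rand_score(y, km.labels_))
```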
Conclusion:
In conclusion, supervised machine learning algorithms tend to perform better on imbalanced datasets than unsupervised ones because the labels tell them which cases matter and allow the imbalance to be corrected during training.
Some machine learning algorithms handle highly imbalanced datasets better than others, whether through class weighting, robustness to outliers, or the ability to learn from little data. If you are working with a highly imbalanced dataset, consider one of the algorithms discussed above.
Thanks for reading!
How are machine learning techniques being used to address unstructured data challenges?
Machine learning techniques are being used to address unstructured data challenges in a number of ways:
- Natural language processing (NLP): NLP algorithms can extract meaningful information from unstructured text data such as emails, documents, and social media posts. They can be trained to classify text, identify key terms and concepts, and extract structured data from unstructured text (see the sketch after this list).
- Image recognition: Machine learning algorithms can be used to analyze and classify images, enabling the automatic identification and classification of objects, people, and other elements in images. This can be useful for tasks such as image tagging and search, as well as for applications such as security and surveillance.
- Audio and speech recognition: Machine learning algorithms can be used to analyze and classify audio data, enabling the automatic transcription and translation of spoken language. This can be useful for tasks such as speech-to-text transcription, as well as for applications such as call center automation and language translation.
- Video analysis: Machine learning algorithms can be used to analyze and classify video data, enabling the automatic detection of objects, people, and events over time. This supports tasks such as video tagging and search, as well as monitoring applications.
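As promised in the NLP item above, here is a small sketch (assuming scikit-learn; the tiny spam/ham corpus is invented purely for illustration) of turning unstructured text into a structured prediction with TF-IDF features and a linear classifier:

```python
# Sketch: classifying unstructured text with TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Your invoice is attached, payment due Friday",
    "WIN a FREE prize, click this link now",
    "Meeting moved to 3pm, see updated agenda",
    "Limited offer!!! claim your reward today",
]
labels = ["ham", "spam", "ham", "spam"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Claim your free reward now"]))  # likely ['spam']
```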
Overall, machine learning techniques are being used in a wide range of applications to extract meaningful information from unstructured data, and to enable the automatic classification and analysis of data in a variety of formats.
How is AI and machine learning impacting application development today?
Artificial intelligence (AI) and machine learning are having a significant impact on application development today in a number of ways:
- Enabling new capabilities: AI and machine learning algorithms can be used to enable applications to perform tasks that would be difficult or impossible for humans to do. For example, AI-powered applications can be used to analyze and classify large amounts of data, or to automate complex decision-making processes.
- Improving performance: AI and machine learning algorithms can be used to optimize the performance of applications, making them faster, more efficient, and more accurate. For example, machine learning algorithms can be used to improve the accuracy of predictive models, or to optimize the performance of search algorithms.
- Streamlining development: AI and machine learning algorithms can be used to automate various aspects of application development, such as testing, debugging, and deployment. This can help to streamline the development process and reduce the time and resources needed to build and maintain applications.
- Enhancing user experiences: AI and machine learning algorithms can be used to enhance the user experience of applications by providing personalized content and recommendations, or by enabling applications to anticipate and respond to users' needs and preferences.
Overall, AI and machine learning are having a significant impact on application development today, and they are likely to continue to shape the way applications are built and used in the future.
How will advancements in artificial intelligence and machine learning shape the future of work and society?
Advancements in artificial intelligence (AI) and machine learning are likely to shape the future of work and society in a number of ways. Some potential impacts include:
- Automation: AI and machine learning algorithms can be used to automate tasks that are currently performed by humans, such as data entry, customer service, and manufacturing. This could lead to changes in the types of jobs that are available and the skills that are in demand, as well as to increased productivity and efficiency.
- Job displacement: While automation may create new job opportunities, it could also lead to job displacement, particularly for workers in industries that are more susceptible to automation. This could lead to social and economic challenges, including unemployment and income inequality.
- Increased efficiency: AI and machine learning algorithms can be used to optimize and streamline business processes, leading to increased efficiency and productivity. This could lead to economic growth and innovation, and could also help to reduce costs for businesses and consumers.
- Enhanced decision-making: AI and machine learning algorithms can be used to analyze large amounts of data and make more informed and accurate decisions. This could lead to improved outcomes in fields such as healthcare, finance, and education, and could also help to reduce bias and improve fairness.
Overall, the impact of AI and machine learning on the future of work and society is likely to be significant and complex, with both potential benefits and challenges. It will be important to consider and address these impacts as these technologies continue to advance and become more widely adopted.
- "[Project]" Grad Student - Help deciding project for my profileby /u/violetwatch (Machine Learning) on November 4, 2024 at 10:43 pm
Hi, I am a grad student, doing my masters in ECE - ML specialization, going for a job after my masters. My resume is full of ML projects - related to biomedical applications. I am about to start a new big project. I am trying to figure out if I should do a project related to the same domain or in another domain. Which would be better for the job market out there? submitted by /u/violetwatch [link] [comments]
- What problems do Large Language Models (LLMs) actually solve very well? [D]by /u/Educational-String94 (Machine Learning) on November 4, 2024 at 8:52 pm
While there's growing skepticism about the AI hype cycle, particularly around chatbots and RAG systems, I'm interested in identifying specific problems where LLMs demonstrably outperform traditional methods in terms of accuracy, cost, or efficiency. Problems I can think of are: - words categorization - sentiment analysis of no-large body of text - image recognition (to some extent) - writing style transfer (to some extent) what else? submitted by /u/Educational-String94 [link] [comments]
- [D] Resources for adding cross attention to a pretrained language modelby /u/BinaryOperation (Machine Learning) on November 4, 2024 at 4:03 pm
I want to train new cross attention layers feeding into a pretrained transformer (maybe a small llama model) while keeping the rest of the model constant. What are some resources that might be helpful? submitted by /u/BinaryOperation [link] [comments]
- [D] Is there limited quantization in all LLM models? For example you can take a standard model like meta-llama/Llama-3.2-1B and run it at half, but there are also models specifically made for 4bit quantization (i.e. meta-llama/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8)by /u/xil35 (Machine Learning) on November 4, 2024 at 3:54 pm
I'm just trying to understand how quantization is setup in all the models. Standard models like meta-llama/Llama-3.2-1B can be run without quantization (bfloat16 or float32?), or they can be told to run at half (float16?) with an inferencing app (like vLLM). So does that mean there is some quantization build into all models? Instead of telling it to run at half quantization, can I instead say int8? Or does that only work if the model was built for it? And then there are models that are specifically built for int4 (i.e. meta-llama/Llama-3.2-1B-Instruct-SpinQuant_INT4_EO8). Does that mean that when you run this model with vLLM, you have to explicitly say you are running it at int4, or you just leave that at default and it will automatically run at int4? Can that be overridden to int8? Or is it just hardcoded for int4? Just been trying to wrap my head around this for the past 2 days. submitted by /u/xil35 [link] [comments]
- [D] COLING25 Industry Track: Notification of Acceptanceby /u/BlackEyesBrownSavant (Machine Learning) on November 4, 2024 at 3:44 pm
The date for "notification of acceptance" was 12:00 anywhere on Earth, November 3rd of 2024. And we've not heard back from the chairs, there's no notification on the portal as well, is there a delay? Or is it that only those papers which are accepted will receive a notification? Please share any info/updates on this, thanks. submitted by /u/BlackEyesBrownSavant [link] [comments]
- [R] Estimation of multivariate mutual information, PID for more than three variablesby /u/Sandy_dude (Machine Learning) on November 4, 2024 at 1:09 pm
Why isn't estiming high dimensional mutual information popular. For instance the most I've seen is 3 variable. I know the number of samples needed exponentially increases. But in big data settings it would still be feasible. Discrimination is also an issue since estimation is usually performed for binned data. Anyone know more about this and the practical applications of more than three variable mutual information? On very interested in reading about applications to infer relationship between high dimensional variables in data sets with large number of samples. submitted by /u/Sandy_dude [link] [comments]
- What differentiates the top % in ML/ DL? [discussion]by /u/k44yej88 (Machine Learning) on November 4, 2024 at 9:45 am
Hello! I'll start by saying that I work in the recruitment field, internally for one of the leading tech/ AI companies in the 🌍. My background has historically been Software, and I am transitioning in to AI. I am a diligent professional and usually take basic technical entry tests to fundamentally understand the area/ infrastructure that I recruit for. I wanted to ask this great community: 🤔 What differentiates the top % of ML/ DL engineers? Is it hands-on experience and SME in a particular subset of AI, or more rounded knowledge of most areas? 🤔 Is reddit the largest community online for people in ML/ DL? would you recommend any particular platforms to network aside from Linkedin, Reddit and Slack? I am aware that I will get messaged due to my profession, but I have had incredible interactions in the product/ Eng space on Reddit over the years and your insights are invaluable. Thank you submitted by /u/k44yej88 [link] [comments]
- [P] Text classification with low number of data: LLM or other classification models?by /u/mtmttuan (Machine Learning) on November 4, 2024 at 4:47 am
I have a project where I need to summarize a few webpages related to a subject and use the summaries to classify aforementioned subject. When prototyping, I use LLM for both summarization and classification task and they did achieve about 80% accuracy (classification task isn't that hard anyway). For the sake of performance and the hate of using LLM for everything, I initially want to train 2 models, one for summarization and one for classification. The problem arise when I see that most usable summazier is not that lighter than a small LLM (and not many support my language). Add another classifier like bert or something then the difference in memory consumption is probably negligible. Though runtime should still be better. Another problem is that my dataset is just about 2000 webpages for about 300 subjects and 70 classes. Many classes has 0 or 1 samples. With that data, I think finetuning a summazier is somewhat doable while it's probably not applicable to the classifier. Getting more data is not exactly an option as I don't have the time budget. Despite that, I have detail description of what should be classified into each class. As a result, my current solution is to finetune a LLM to do both summarization and classification. The downside is that LLM sometimes gives invalid classes. Is adding a classification head to the LLM a good solution? I'm afraid that I don't have enough data to train even that classification head (realisticly a single classification matrix). Or is there a better approach than this? submitted by /u/mtmttuan [link] [comments]
- [P] Combining algorithms in an autonomous driving projectby /u/AlbertV999 (Machine Learning) on November 4, 2024 at 2:37 am
I am planning to do a project consisting of an autonomous driving system. I was thinking of using reinforcement learning but it would take too long to train (months), with the consequent expenditure of electricity and money (specialized servers). After seeing some videos from Sentdex and others where, after training for 2 months in a row, the driver manages to drive like a drunk person, I have considered it unfeasible and I have thought: Would it be possible to combine a deep learning algorithm with reinforcement learning together with a traditional computer vision algorithm like lane finding? Is there any way to make these algorithms work together, reducing the training time? Would you use other algorithms or approaches? I'm using CARLA Sim. Thanks. submitted by /u/AlbertV999 [link] [comments]
- [D] How to read and perform a semantic query over the combination of structured data and unstructured data ? As in, for example, a large number of pdf documents with text and also structured data that occurs in lists/tables in the pdf or as numerical data mentioned inside text paragraphs.by /u/SpaceShip992 (Machine Learning) on November 4, 2024 at 1:49 am
To carry the example further, say these pdfs are financial reports from different companies that contain quarterly revenue data amongst other data. There are two aspects to the broader problem: Query and Read as further elaborated below QUERY : So I want to be able to make arbirary queries against all the pdfs , like "find companies where Year over year quarterly growth was greater than 10% and which also mentioned new product launches". This is a simple example, but actual use case can be arbirarily more complex with more aspects to the query. READ (feed in) new reports: I also want to make it so that non-technical users can drop in new pdf reports as they become available which then get added to the query database without manual involvement of technical personnel in pre-processing Can you please guide me as to how to approach this problem and what options exist to implement something like this? Thanks. submitted by /u/SpaceShip992 [link] [comments]
- [P] Benchmarking 1 Million Files from ImageNet into DVC, Git-LFS, and Oxen.ai for Open Source Dataset Collaborationby /u/FallMindless3563 (Machine Learning) on November 3, 2024 at 11:43 pm
Hey all! If you haven't seen the Oxen project yet, we have been building a fast open source unstructured data version control tool and platform to host the data (https://oxen.ai). It’s an alternative to dumping data on Hugging Face with git-lfs or their datasets library and goes together with their models like chocolate and peanut butter - Oxen can be used for iterating on and editing the data and Hugging Face for public models. We were inspired by the idea of making large machine learning datasets living & breathing assets that people can collaborate on, rather than the static dumps. Lately we have been working hard on optimizing the underlying Merkle Trees and data structures with in Oxen.ai and just released v0.19.4 which provides a bunch of performance upgrades and stability to the internal APIs. 1 Million Files Benchmark To put it all to the test, we decided to benchmark the tool on the 1 million+ images in the classic ImageNet dataset. The TLDR is Oxen.ai is faster than raw uploads to S3, 13x faster than git-lfs, and 5x faster than DVC. The full breakdown can be found here 👇 https://docs.oxen.ai/features/performance If you are in the ML/AI community, or just data aficionados, would love to get your feedback on both the tool and the codebase. We would love some community contribution when it comes to different storage backends and integrations into other data tools. submitted by /u/FallMindless3563 [link] [comments]
- [D] Comparison of Logistic Regression with/without SMOTEby /u/Janky222 (Machine Learning) on November 3, 2024 at 10:42 pm
This has been driving me crazy at work. I've been evaluating a logistic predictive model. The model implements SMOTE to balance the dataset to 1:1 ratio (originally 7% of the desired outcome). I believe this to be unnecessary as shifting the decision threshold would be sufficient and avoid unnecessary data imputation. The dataset has more than 9,000 ocurrences of the desired event - this is more than enough for MLE estimation. My colleagues don't agree. I built a shiny app in R to compare the confusion matrixes of both models, along with some metrics. I would welcome some input from the community on this comparison. To me the non-smote model performs just as well, or even better if looking at the Brier Score or calibration intercept. What do you guys think? submitted by /u/Janky222 [link] [comments]
- [D] Feature Selection + Feature Eng. Order of Operationsby /u/Secret_Valuable_Yes (Machine Learning) on November 3, 2024 at 10:37 pm
Anyone have a preferred methodology and order of operations for performing feature selection with feature engineering? For example, is the best practice to drop unimportant features first, then iteratively engineer new features? submitted by /u/Secret_Valuable_Yes [link] [comments]
- [R] Training multiple autoencoders reduces loss but not accuracy?by /u/Grand_Comparison2081 (Machine Learning) on November 3, 2024 at 10:26 pm
Hello, I am training two seperate autoencoders to cluster data. The network passes the input to both autoencoders, computes the reconstruction error for both autoencoders (AE) and picks the best one. This means that only the reconstruction of one AE contributes to the loss and so only one gets gradient updates per input. Loss decreases but accuracy just fluctuates. Moreover, both autoencoders are used but eventually the model just uses the same autoencoder for almost all inputs. Any insight on why? The goal is for each AE to learn to reconstruct datapoints that are neighbors or belong to same cluster. I’ve seen papers doing the same thing but they just pre-train their network to go around this and never discuss WHY this happens. Ty submitted by /u/Grand_Comparison2081 [link] [comments]
- [D] What are some good resources for learning about sequence modeling architecturesby /u/vicky0212 (Machine Learning) on November 3, 2024 at 10:11 pm
What are some good resources for learning about sequence modeling architectures? I've been preparing for exams and interviews and came across this quiz on GitHub: https://viso.ai/deep-learning/sequential-models/ and another practice site: https://app.wittybyte.ai/problems/rnn_lstm_tx. Do you think these are comprehensive, or should I look for more material? Both are free to use right now submitted by /u/vicky0212 [link] [comments]
- Video Input for the current LLMs [P]by /u/rohit3627 (Machine Learning) on November 3, 2024 at 10:04 pm
Hey everyone, I’m excited to share a project I’ve been working on OpenSceneSense. It’s a Python package designed to bridge video content with large language models (LLMs) like OpenAI’s Vision models and OpenRouter, opening up new ways to understand, analyze, and create insights from video data. Why OpenSceneSense? Most LLMs are amazing with text but aren’t designed to handle video directly. OpenSceneSense changes that. It uses frame-by-frame analysis, audio transcription, and scene detection to turn video data into something LLMs can work with. Imagine using a prompt to get a detailed description of what’s happening in each scene or automatically creating a narrative that ties the video and audio together. Potential Use Cases: - Dataset Creation: If you’re working in computer vision or machine learning, OpenSceneSense can create richly annotated datasets from videos, giving LLMs detailed context about visual events, object interactions, and even sentiment shifts across scenes. - Content Moderation: OpenSceneSense can bring more context to content moderation. Unlike traditional moderation methods that might just detect keywords or simple visuals, this tool can interpret entire scenes, combining both visual and audio cues. It could help distinguish between genuinely problematic content and innocuous material that might otherwise get flagged. And I’m also working on an Ollama-compatible version so you can run it locally without relying on the cloud, which will be useful for anyone concerned about privacy or latency. To dive in, you’ll need Python 3.10+, FFmpeg, and a couple of API keys (OpenAI or OpenRouter). Install it with `pip install openscenesense`, and you’re all set. From there, it’s easy to start analyzing your videos and experimenting with different prompts to customize what you want to extract. I’d love feedback from anyone working in video tech, dataset creation, or moderation. Check out the code, give it a spin, and let’s see where we can take OpenSceneSense together! https://github.com/ymrohit/openscenesense submitted by /u/rohit3627 [link] [comments]
- [D] Self-hostable tooling for offline batch-prediction on SQL tablesby /u/benelott (Machine Learning) on November 3, 2024 at 9:53 pm
Hey folks, I am working for a hospital in Switzerland and due to data regulations, it is quite clear that we need to stay out of cloud environments. Our hospital has a MSSQL-based data warehouse and we have a separate docker-compose based ML-ops stack. Some of our models are currently running in docker containers with a REST api, but actually, we just do scheduled batch-prediction on the data in the DWH. In principle, I am looking for a stack that allows you to host ml models from scikit learn to pytorch and allows us to formulate a batch prediction on data in the SQL tables by defining input from one table as input features for the model and write back the results to another table. I have seen postgresml and its predict_batch, but I am wondering if we can get something like this directly interacting with our DWH? What do you suggest as an architecture or tooling for batch predicting data in SQL DBs when the results will be in SQL DBs again and all predictions can be precomputed? Thanks for your help! submitted by /u/benelott [link] [comments]
- [D] Fourier weights neural networksby /u/musescore1983 (Machine Learning) on November 3, 2024 at 8:27 pm
Dear ML community, I wanted to share an idea for discussion about the usage of Fourier coefficients to parametrize weights in neural networks. Typically in MLPs the weights are defined only in one direction, and are undefined in the other direction, which leaves it open: we can define the weights to be symmetric: w(r,s) = w(s,r) and we can use the Fourier coefficients of a two variable symmetric function to compute the weights via backpropagation and gradient descent. (I should mention that I am currently activeyl searching for an opportunity to bring my knowledge of Machine Learning to projects near Frankfurt am Main ,Germany.) Edit: Maybe my wording was not so correct. Let us agree that in most cases the symmetry assumption is satisfied by MLPs with invertible activation function. The idea I would like to discuss is the usage of Fourier coefficients to (re-) construct the weights w(r,s) = w(s,r) . For this idea to make sense the FWNN do not learn the weights as usual MLPs / ANNs , but they learn the _coefficients_ of the Fourier series (at least some of them). By adjusting how many coefficients are learned, the FWNN could adjust its capacity to learn. Notice that by symmetry of the function w(r,s) we get terms like sum_{j] c_j*cos(j * (r+s) ) where j ranges over some predefined range [-R,R] of integers. In theory this R should be infinity hence Z = [-inf, +inf] are the whole integers. Notice also that the parameter c_j the network learns are 2*R+1 in number, which at first glance is independent of the number of neurons N. Hence a traditional neural network with N neurons, has in theory to learn O(N^2) weights, but with the Fourier transform we reduce this number of parameters to 2*R+1. Of course it can happen that R = N^2 but I can imagine that there are problems where 2*R+1 << N^2. I hope this clarifies the idea. Code: https://github.com/githubuser1983/fourier_weighted_neural_network/blob/main/fourier_weighted_neural_network.py Explanation of the method: https://www.academia.edu/125262107/Fourier_Weighted_Neural_Networks_Enhancing_Efficiency_and_Performance submitted by /u/musescore1983 [link] [comments]
- [D] AAAI Phase 2 Resultsby /u/Massive_Horror9038 (Machine Learning) on November 3, 2024 at 4:39 pm
When should we expect the results from phase 2 of AAAI 2025 submissions? On the site, the authors feedback is from day 4 to day 8 of November. Are we going to receive the results today, day 3? submitted by /u/Massive_Horror9038 [link] [comments]
- [D] AAAI 2025 Phase 2 Reviewsby /u/quasi-literate (Machine Learning) on November 3, 2024 at 4:09 pm
The reviews will be available soon. This is a thread for discussion/rants. Be polite in comments. submitted by /u/quasi-literate [link] [comments]