

What are the top 3 methods used to find Autoregressive Parameters in Data Science?
In order to find autoregressive parameters, you first need to understand what autoregression is. Autoregression is a statistical method used to build a model that describes a variable as a linear regression on its own lagged values. In other words, it is a model that uses past values of a dependent variable to predict future values of that same variable.
In time series analysis, autoregression is the use of previous values in a time series to predict future values. In other words, it is a form of regression in which the dependent variable is forecast using a linear combination of its own past values. The parameters of the autoregression model are typically estimated using the method of least squares.
The autoregressive parameters are the coefficients in the autoregressive model. These coefficients can be estimated in a number of ways, including ordinary least squares (OLS), maximum likelihood (ML), or least squares with L1 regularization (LASSO). Once estimated, the autoregressive parameters can be used to predict future values of the dependent variable.
One standard way to find the autoregressive parameters is least squares regression. This method finds the parameters that minimize the sum of squared residuals, where a residual is simply the difference between a predicted value and the actual value. In essence, you are finding the parameters that best fit the data, as the sketch below shows end to end.
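To make this concrete, here is a minimal sketch (not from the original post) that simulates an AR(2) series, builds the lagged design matrix, and finds the parameters that minimize the sum of squared residuals using plain NumPy; the true coefficients 0.6 and 0.3 are arbitrary values chosen for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process: y_t = 0.6*y_{t-1} + 0.3*y_{t-2} + noise
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.3 * y[t - 2] + rng.normal(scale=0.5)

# Each row of the design matrix is [1, y_{t-1}, y_{t-2}]
X = np.column_stack([np.ones(n - 2), y[1:-1], y[:-2]])
target = y[2:]

# Least squares: pick the parameters that minimize the sum of squared residuals
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
residuals = target - X @ beta
print("intercept, phi1, phi2:", beta)              # roughly [0, 0.6, 0.3]
print("sum of squared residuals:", residuals @ residuals)
```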

How to Estimate Autoregressive Parameters?
There are three main ways to estimate autoregressive parameters: ordinary least squares (OLS), maximum likelihood (ML), and least squares with L1 regularization (LASSO).
Ordinary Least Squares: Ordinary least squares is the simplest and most common method for estimating autoregressive parameters. This method estimates the parameters by minimizing the sum of squared errors between actual and predicted values.
Maximum Likelihood: Maximum likelihood is another common method for estimating autoregressive parameters. This method estimates the parameters by maximizing the likelihood function. The likelihood function is a mathematical function that quantifies the probability of observing a given set of data given certain parameter values.
Least Squares with L1 Regularization: Least squares with L1 regularization is a third method for estimating autoregressive parameters. This method estimates the parameters by minimizing the sum of squared errors between actual and predicted values while also penalizing overly complex models. L1 regularization adds an extra term to the error function that is proportional to the sum of the absolute values of the estimated coefficients, which shrinks small coefficients toward zero and yields sparser models.
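As a rough sketch of how the three estimators compare in practice, the snippet below fits the same simulated AR(1) series with all three; it assumes statsmodels and scikit-learn are installed, and the LASSO penalty alpha=0.05 is an arbitrary illustrative value:

```python
import numpy as np
from sklearn.linear_model import Lasso
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)

# 1) Ordinary least squares on the lagged values
X = np.column_stack([np.ones(n - 1), y[:-1]])
ols_beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# 2) Maximum likelihood: AutoReg fits by conditional MLE, which for a
#    Gaussian AR model coincides with least squares
ml_fit = AutoReg(y, lags=1).fit()

# 3) Least squares with an L1 penalty (LASSO); alpha controls the penalty
lasso = Lasso(alpha=0.05).fit(y[:-1].reshape(-1, 1), y[1:])

print("OLS:  ", ols_beta[1])
print("ML:   ", ml_fit.params[1])   # params = [intercept, coefficient on lag 1]
print("LASSO:", lasso.coef_[0])     # shrunk toward zero by the penalty
```

For a Gaussian AR model the first two estimates should agree closely, while the LASSO estimate is pulled toward zero by the penalty.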
Finding Autoregressive Parameters: The Math Behind It
To find the autoregressive parameters using least squares regression, you first need to set up your data in a certain way. You need to have your dependent variable in one column and your independent variables in other columns. For example, let’s say you want to use three years of data to predict next year’s sales (the dependent variable). Your data would look something like this:
| Year | Sales |
|------|-------|
| 2016 | 100 |
| 2017 | 150 |
| 2018 | 200 |
Next, you need to calculate the means for each column. For our sales example, that would look like this:
$$ \bar{X} = \frac{2016+2017+2018}{3} = 2017, \qquad \bar{Y} = \frac{100+150+200}{3} = 150 $$
Now we can calculate the quantities we need from what's called the variance-covariance matrix. (The formulas below are written as raw sums of squares and cross-products; any common $1/n$ factor cancels in the ratio we take later.)
$$ \operatorname {Var} (X)=\sum _{i=1}^{n}\left({x_{i}}-{\bar {x}}\right)^{2} $$
and
$$ \operatorname {Cov} (X,Y)=\sum _{i=1}^{n}\left({x_{i}}-{\bar {x}}\right)\left({y_{i}}-{\bar {y}}\right) $$
For our sales example, that calculation would look like this:
$$ \operatorname {Var} (X)=\sum _{i=1}^{3}\left({x_{i}}-{\bar {x}}\right)^{2}=(2016-2017)^{2}+(2017-2017)^{2}+(2018-2017)^{2}=2 $$
and
$$ \operatorname {Cov} (X,Y)=\sum _{i=1}^{3}\left({x_{i}}-{\bar {x}}\right)\left({y_{i}}-{\bar {y}}\right)=(2016-2017)(100-150)+(2017-2017)(150-150)+(2018-2017)(200-150)=100 $$
Now we can finally calculate our regression coefficient! For a single (mean-centered) regressor, the least squares formula reduces to the ratio of the covariance to the variance:
$$ \hat {\beta }=(X^{\prime }X)^{-1}X^{\prime }Y=\frac{\operatorname {Cov} (X,Y)}{\operatorname {Var} (X)}=\frac{100}{2}=50 $$
That's it! Our estimated coefficient is 50, which matches the data exactly: sales grow by 50 each year. In a genuine autoregression you would use lagged sales $Y_{t-1}$ as the regressor instead of the year, estimate the coefficient in the same way, and plug it into the autoregressive equation
$$ Y_{t+1}=\hat {\beta }\,Y_{t}+\varepsilon _{t+1}, $$
where $\varepsilon _{t+1}$ is an error term. And that's how you solve for autoregressive parameters! Of course, in reality you would be working with much larger datasets, but the underlying principles are the same. Once you have your autoregressive parameters, you can plug them into the equation and start making predictions.
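As a quick numerical check of the worked example (a sketch assuming only NumPy):

```python
import numpy as np

years = np.array([2016, 2017, 2018])
sales = np.array([100, 150, 200])

var_x = np.sum((years - years.mean()) ** 2)                       # 2
cov_xy = np.sum((years - years.mean()) * (sales - sales.mean()))  # 100
print(cov_xy / var_x)                                             # 50.0

# np.polyfit recovers the same slope in one call
print(np.polyfit(years, sales, deg=1)[0])                         # 50.0
```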
Which Method Should You Use?
The estimation method you should use depends on your particular situation and goals. If you are looking for simple and interpretable results, then Ordinary Least Squares may be the best method for you. If you are looking for more accurate predictions, then Maximum Likelihood or Least Squares with L1 Regularization may be better methods for you.
Autoregressive models STEP BY STEP:
1) Download data: The first step is to download some data. This can be done by finding a publicly available dataset or by using your own data if you have any. For this example, we will be using data from the United Nations Comtrade Database.
2) Choose your variables: Once you have your dataset, you will need to choose the variables you want to use in your autoregression model. In our case, we will be using the import and export values of goods between countries as our independent variables.
3) Estimate your model: After choosing your independent variables, you can estimate your autoregression model using the method of least squares. OLS estimation can be done in many statistical software packages such as R or STATA.
4) Interpret your results: Once you have estimated your model, it is important to interpret the results in order to understand what they mean. Each coefficient represents the effect of its variable on the dependent variable; in our case, the effect of imports and exports on the trade balance. A positive coefficient means an increase in that variable leads to an increase in the dependent variable, while a negative coefficient means it leads to a decrease.
5) Make predictions: Finally, once you have interpreted your results, you can use your autoregression model to make predictions about future values of the dependent variable based on past values. A Python sketch of the full workflow appears below.
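Here is one way those five steps might look in Python (a sketch only; the series is simulated stand-in data rather than actual UN Comtrade figures):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# 1) Download data: simulated stand-in for a trade-balance series
rng = np.random.default_rng(42)
n = 120
series = np.zeros(n)
for t in range(2, n):
    series[t] = 0.5 * series[t - 1] + 0.2 * series[t - 2] + rng.normal()

# 2-3) Choose the lag order and estimate the model by least squares
fit = AutoReg(series, lags=2).fit()

# 4) Interpret the results: sign and size of each lag coefficient
print(fit.params)  # [intercept, coefficient on t-1, coefficient on t-2]

# 5) Make predictions for the next six periods
print(fit.predict(start=n, end=n + 5))
```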
Conclusion: In this blog post, we have discussed what autoregression is and how to find autoregressive parameters.
Estimating an autoregression model is a relatively simple process that can be done in many statistical software packages such as R or STATA.
In statistics and machine learning, autoregression is a modeling technique that describes the linear relationship between a dependent variable and one or more of its own lagged values. To find the autoregressive parameters, you can use least squares regression, which minimizes the sum of squared residuals. This blog post also explained how to set up your data for least squares regression and how to calculate the variance and covariance before finally calculating your autoregressive parameters. After finding your parameters, you can plug them into the autoregressive equation and start making predictions about future events!
We have also discussed three different methods for estimating those parameters: Ordinary Least Squares, Maximum Likelihood, and Least Squares with L1 Regularization. The appropriate estimation method depends on your particular goals and situation.

Autoregressive Model
Autoregressive generative models can estimate complex continuous data distributions, such as trajectory rollouts in an RL environment, image intensities, and audio. Traditional techniques discretize continuous data into bins and approximate the continuous distribution with categorical distributions over those bins. This approximation is parameter-inefficient, because it cannot express abrupt changes in density without a significant number of additional bins. Adaptive Categorical Discretization (ADACAT) is proposed in the paper as a parameterization of 1-D conditionals that is expressive, parameter-efficient, and multimodal. The ADACAT distribution is parameterized by a vector of interval widths and masses. Figure 1 of the paper showcases the difference between the traditional uniform categorical discretization approach and the proposed ADACAT.
Each component of the ADACAT distribution has non-overlapping support, making it a specific subfamily of mixtures of uniform distributions. ADACAT generalizes uniformly discretized 1-D categorical distributions: because it allows variable bin widths, it approximates the modes of a mixture of two Gaussians far more closely than a uniformly discretized categorical, making it more expressive than the latter. Additionally, a distribution's support is discretized using quantile-based discretization, which bins the data into groups containing similar numbers of data points. In problems with more than one dimension, ADACAT uses deep autoregressive frameworks to factorize the joint density into many 1-D conditional ADACAT distributions.
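For intuition, here is a minimal sketch of the one-dimensional idea as described above: a mixture of non-overlapping uniform components whose widths and masses are free parameters. It is an illustration written from this summary, not the paper's actual implementation:

```python
import numpy as np

def adacat_density(x, width_logits, mass_logits):
    """Density of a 1-D distribution on [0, 1) built from non-overlapping
    uniform components, parameterized by bin widths and bin masses."""
    widths = np.exp(width_logits) / np.exp(width_logits).sum()  # softmax
    masses = np.exp(mass_logits) / np.exp(mass_logits).sum()    # softmax
    edges = np.concatenate([[0.0], np.cumsum(widths)])
    k = np.searchsorted(edges, x, side="right") - 1  # bin containing x
    return masses[k] / widths[k]                     # uniform density in bin k

# A narrow first bin can carry most of the mass, expressing an abrupt
# spike in density without adding more bins:
print(adacat_density(0.05, np.array([0.0, 1.0, 1.0]),
                     np.array([2.0, 0.0, 0.0])))
```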