DjamgaMind: Audio Intelligence for the C-Suite (Daily AI News, Energy, Healthcare, Finance)
Full-Stack AI Intelligence. Zero Noise. The definitive audio briefing for the C-Suite and AI Architects. From Daily News and Strategic Deep Dives to high-density Industrial & Regulatory Intelligence, decoded at the speed of the AI era. 👉 Start your specialized audio briefing today at Djamgamind.com
AI Jobs and Career
I wanted to share an exciting opportunity for those of you looking to advance your careers in the AI space. You know how rapidly the landscape is evolving, and finding the right fit can be a challenge. That's why I'm excited about Mercor – they're a platform specifically designed to connect top-tier AI talent with leading companies. Whether you're a data scientist, machine learning engineer, or something else entirely, Mercor can help you find your next big role. If you're ready to take the next step in your AI career, check them out through my referral link: https://work.mercor.com/?referralCode=82d5f4e3-e1a3-4064-963f-c197bb2c8db1. It's a fantastic resource, and I encourage you to explore the opportunities they have available.
- Full Stack Engineer [$150K-$220K]
- Software Engineer, Tooling & AI Workflow, Contract [$90/hour]
- DevOps Engineer, India, Contract [$90/hour]
- More AI Job Opportunities here
| Job Title | Status | Pay |
|---|---|---|
| Full-Stack Engineer | Strong match, Full-time | $150K - $220K / year |
| Developer Experience and Productivity Engineer | Pre-qualified, Full-time | $160K - $300K / year |
| Software Engineer - Tooling & AI Workflows (Contract) | Contract | $90 / hour |
| DevOps Engineer (India) | Full-time | $20K - $50K / year |
| Senior Full-Stack Engineer | Full-time | $2.8K - $4K / week |
| Enterprise IT & Cloud Domain Expert - India | Contract | $20 - $30 / hour |
| Senior Software Engineer | Contract | $100 - $200 / hour |
| Senior Software Engineer | Pre-qualified, Full-time | $150K - $300K / year |
| Senior Full-Stack Engineer: Latin America | Full-time | $1.6K - $2.1K / week |
| Software Engineering Expert | Contract | $50 - $150 / hour |
| Generalist Video Annotators | Contract | $45 / hour |
| Generalist Writing Expert | Contract | $45 / hour |
| Editors, Fact Checkers, & Data Quality Reviewers | Contract | $50 - $60 / hour |
| Multilingual Expert | Contract | $54 / hour |
| Mathematics Expert (PhD) | Contract | $60 - $80 / hour |
| Software Engineer - India | Contract | $20 - $45 / hour |
| Physics Expert (PhD) | Contract | $60 - $80 / hour |
| Finance Expert | Contract | $150 / hour |
| Designers | Contract | $50 - $70 / hour |
| Chemistry Expert (PhD) | Contract | $60 - $80 / hour |
What is OpenAI Q*? A deeper look at the Q* Model as a combination of A* algorithms and Deep Q-learning networks.
Embark on a journey of discovery with our podcast, ‘What is OpenAI Q*? A Deeper Look at the Q* Model’. Dive into the cutting-edge world of AI as we unravel the mysteries of OpenAI’s Q* model, a groundbreaking blend of A* algorithms and Deep Q-learning networks. 🌟🤖
In this detailed exploration, we dissect the components of the Q* model, explaining how the pathfinding prowess of A* algorithms synergizes with the adaptive decision-making capabilities of Deep Q-learning networks. This episode is perfect for anyone curious about the intricacies of AI models and their real-world applications.
Understand the significance of this fusion in AI technology and how it’s pushing the boundaries of machine learning, problem-solving, and strategic planning. We also delve into the potential implications of Q* in various sectors, discussing both the exciting possibilities and the ethical considerations.
Join the conversation about the future of AI and share your thoughts on how models like Q* are shaping the landscape. Don’t forget to like, share, and subscribe for more deep dives into the fascinating world of artificial intelligence! #OpenAIQStar #AStarAlgorithms #DeepQLearning #ArtificialIntelligence #MachineLearningInnovation
🚀 Whether you’re a tech enthusiast, a professional in the field, or simply curious about artificial intelligence, this podcast is your go-to source for all things AI. Subscribe for weekly updates and deep dives into artificial intelligence innovations.
AI-Powered Professional Certification Quiz Platform
Web | iOS | Android | Windows
Are you passionate about AI and looking for your next career challenge? In the fast-evolving world of artificial intelligence, connecting with the right opportunities can make all the difference. We're excited to recommend Mercor, a premier platform dedicated to bridging the gap between exceptional AI professionals and innovative companies.
Whether you're seeking roles in machine learning, data science, or other cutting-edge AI fields, Mercor offers a streamlined path to your ideal position. Explore the possibilities and accelerate your AI career by visiting Mercor through our exclusive referral link:
Find Your AI Dream Job on Mercor
Your next big opportunity in AI could be just a click away!
✅ Don’t forget to Like, Comment, and Share this video to support our content.
📌 Check out our playlist for more AI insights
AI-Powered Job Interview Warmup for Job Seekers

⚽️Comparative Analysis: Top Calgary Amateur Soccer Clubs – Outdoor 2025 Season (Kids' Programs by Age Group)
📖 Read along with the podcast:
Welcome to AI Unraveled, the podcast that demystifies frequently asked questions on artificial intelligence and keeps you up to date with the latest AI trends. Join us as we delve into groundbreaking research, innovative applications, and emerging technologies that are pushing the boundaries of AI. From the latest trends in ChatGPT and the recent merger of Google Brain and DeepMind to the exciting developments in generative AI, we’ve got you covered with a comprehensive update on the ever-evolving AI landscape. In today’s episode, we’ll cover the rumors surrounding OpenAI’s leaked AI breakthrough known as Q* and DeepMind’s similar project, the potential for AI to replace human jobs in tasks like wire sending, and a recommended book called “AI Unraveled” that answers frequently asked questions about artificial intelligence.
Rumors have been circulating about a groundbreaking AI known as Q* (pronounced Q-Star), which is closely tied to a series of chaotic events that disrupted OpenAI following the sudden dismissal of their CEO, Sam Altman. In this discussion, we will explore the implications of Altman’s firing, speculate on potential reasons behind it, and consider Microsoft’s pursuit of a monopoly on highly efficient AI technologies.
To comprehend the significance of Q*, it is essential to delve into the theory of combining Q-learning and A* algorithms. Q* is reportedly an AI that excels at grade-school mathematics without relying on external aids like Wolfram. If true, this achievement is revolutionary and challenges common perceptions of AI as mere information repeaters and stochastic parrots. Q* is said to showcase iterative learning, intricate logic, and highly effective long-term strategizing, potentially paving the way for advancements in scientific research and breaking down previously insurmountable barriers.
Let’s first understand A* algorithms and Q-learning to grasp the context in which Q* operates. A* algorithms are powerful tools used to find the shortest path between two points in a graph or map while efficiently navigating obstacles. These algorithms excel at optimizing route planning when efficiency is crucial. In the case of chatbot AI, A* algorithms are used to traverse complex information landscapes and locate the most relevant responses or solutions for user queries.
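To make the pathfinding idea concrete, here is a minimal A* sketch in Python. Everything in it (the 4x4 grid, the obstacle layout, the Manhattan-distance heuristic) is an illustrative toy of our own choosing, not anything from OpenAI's system.

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 2D grid where grid[r][c] == 1 marks an obstacle."""
    def h(cell):  # Manhattan distance: an admissible heuristic on a grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # the goal is unreachable

# Navigate around a wall in a 4x4 grid
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, (0, 0), (3, 0)))
```

Because the heuristic never overestimates the true distance on a grid, A* expands far fewer cells than a blind search while still returning a shortest path.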
Invest in your future today: pass the Azure Fundamentals (AZ-900) exam with ease and master the certification with this comprehensive exam preparation guide!
- AWS Certified AI Practitioner (AIF-C01): Conquer the AWS Certified AI Practitioner exam with our AI and Machine Learning For Dummies test prep. Master fundamental AI concepts, AWS AI services, and ethical considerations.
- Azure AI Fundamentals: Ace the Azure AI Fundamentals exam with our comprehensive test prep. Learn the basics of AI, Azure AI services, and their applications.
- Google Cloud Professional Machine Learning Engineer: Nail the Google Professional Machine Learning Engineer exam with our expert-designed test prep. Deepen your understanding of ML algorithms, models, and deployment strategies.
- AWS Certified Machine Learning Specialty: Dominate the AWS Certified Machine Learning Specialty exam with our targeted test prep. Master advanced ML techniques, AWS ML services, and practical applications.
- AWS Certified Data Engineer Associate (DEA-C01): Set yourself up for a promotion, a better job, or a higher salary by acing the AWS DEA-C01 certification.
On the other hand, Q-learning involves providing the AI with a constantly expanding cheat sheet to help it make the best decisions based on past experiences. However, in complex scenarios with numerous states and actions, maintaining a large cheat sheet becomes impractical. Deep Q-learning addresses this challenge by utilizing neural networks to approximate the Q-value function, making it more efficient. Instead of a colossal Q-table, the network maps input states to action-Q-value pairs, providing a compact cheat sheet to navigate complex scenarios efficiently. This approach allows AI agents to choose actions using the Epsilon-Greedy approach, sometimes exploring randomly and sometimes relying on the best-known actions predicted by the networks. DQNs (Deep Q-networks) typically use two neural networks—the main and target networks—which periodically synchronize their weights, enhancing learning and stabilizing the overall process. This synchronization is crucial for achieving self-improvement, which is a remarkable feat. Additionally, the Bellman equation plays a role in updating weights using Experience replay, a sampling and training technique based on past actions, which allows the AI to learn in small batches without requiring training after every step.
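To make those moving parts visible (main and target networks, epsilon-greedy exploration, experience replay, the Bellman target), here is a deliberately tiny sketch in Python with numpy. The linear "network", the toy chain environment, and every hyperparameter are assumptions chosen for the example; none of it reflects any real Q* implementation.

```python
import numpy as np
import random
from collections import deque

# Minimal DQN-style loop in plain numpy. A linear "network" stands in for a
# deep one to keep the sketch short; the 4-state chain environment and all
# hyperparameters are invented for this example.
n_states, n_actions = 4, 2
alpha, gamma, epsilon = 0.05, 0.9, 0.1
rng = np.random.default_rng(0)

main_w = rng.normal(0, 0.1, (n_states, n_actions))  # main network weights
target_w = main_w.copy()                            # target network weights
replay = deque(maxlen=1000)                         # experience replay buffer

def q_values(w, s):
    return np.eye(n_states)[s] @ w                  # one-hot state -> Q-values

def act(s):
    # Epsilon-greedy: explore at random sometimes, otherwise exploit
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values(main_w, s)))

def env_step(s, a):
    # Toy chain: action 1 moves right, and the last state pays reward 1
    s2 = min(s + a, n_states - 1)
    return s2, float(s2 == n_states - 1)

s = 0
for t in range(2000):
    a = act(s)
    s2, r = env_step(s, a)
    replay.append((s, a, r, s2))                    # store the transition
    s = 0 if r else s2
    if len(replay) >= 32:
        # Learn from a small random batch instead of only the latest step
        for bs, ba, br, bs2 in random.sample(list(replay), 32):
            # Bellman target computed with the frozen *target* network
            target = br + gamma * np.max(q_values(target_w, bs2))
            td_error = target - q_values(main_w, bs)[ba]
            main_w[:, ba] += alpha * td_error * np.eye(n_states)[bs]
    if t % 100 == 0:
        target_w = main_w.copy()                    # periodic synchronization

print(np.round(q_values(main_w, 0), 2))
```

Even at this scale the pattern is the standard DQN loop: act, store the transition, learn from a random mini-batch against the frozen target network, and only periodically copy the main weights across.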
Q* represents more than a math prodigy; it signifies the potential to scale abstract goal navigation, enabling highly efficient, realistic, and logical planning for any query or goal. However, with such capabilities come challenges.
One challenge is web crawling and navigating complex websites. Just as a robot solving a maze may encounter convoluted pathways and dead ends, the web is labyrinthine and filled with myriad paths. While A* algorithms aid in seeking the shortest path, intricate websites or information silos can confuse the AI, leading it astray. Furthermore, the speed of algorithm updates may lag behind the expansion of the web, potentially hindering the AI’s ability to adapt promptly to changes in website structures or emerging information.
Another challenge arises in the application of Q-learning to high-dimensional data. The web contains various data types, from text to multimedia and interactive elements. Deep Q-learning struggles with high-dimensional data, where the number of features exceeds the number of observations. In such cases, if the AI encounters sites with complex structures or extensive multimedia content, efficiently processing such information becomes a significant challenge.
To address these issues, a delicate balance must be struck between optimizing pathfinding efficiency and adapting swiftly to the dynamic nature of the web. This balance ensures that users receive the most relevant and efficient solutions to their queries.
In conclusion, speculations surrounding Q* and the Gemini models suggest that enabling AI to plan is a highly rewarding but risky endeavor. As we continue researching and developing these technologies, it is crucial to prioritize AI safety protocols and put guardrails in place. This precautionary approach prevents the potential for AI to turn against us. Are we on the brink of an AI paradigm shift, or are these rumors mere distractions? Share your thoughts and join in this evolving AI saga—a front-row seat to the future!
Please note that the information presented here is based on speculation sourced from various news articles, research, and rumors surrounding Q*. Hence, it is advisable to approach this discussion with caution and consider it in light of further developments in the field.
How the Rumors about Q* Started
There have been recent rumors surrounding a supposed AI breakthrough called Q*, which allegedly involves a combination of Q-learning and A*. These rumors were initially sparked when OpenAI, the renowned artificial intelligence research organization, accidentally leaked information about this groundbreaking development, specifically mentioning Q*’s impressive ability to ace grade-school math. However, it is crucial to note that these rumors were subsequently refuted by OpenAI.
It is worth mentioning that DeepMind, another prominent player in the AI field, is also working on a similar project called Gemini. Gemini is based on AlphaGo-style Monte Carlo Tree Search and aims to scale up the capabilities of these algorithms. The scalability of such systems is crucial in planning for increasingly abstract goals and achieving agentic behavior. These concepts have been extensively discussed and explored within the academic community for some time.
The origin of the rumors can be traced back to a letter sent by several staff researchers at OpenAI to the organization’s board of directors. The letter served as a warning highlighting the potential threat to humanity posed by a powerful AI discovery. This letter specifically referenced the supposed breakthrough known as Q* (pronounced Q-Star) and its implications.
Mira Murati, a representative of OpenAI, confirmed that the letter regarding the AI breakthrough was directly responsible for the subsequent actions taken by the board. The new model, when provided with vast computing resources, demonstrated the ability to solve certain mathematical problems. Although it performed at the level of grade-school students in mathematics, the researchers’ optimism about Q*’s future success grew due to its proficiency in such tests.
A notable theory regarding the nature of OpenAI’s alleged breakthrough is that Q* may be related to Q-learning. One possibility is that Q* represents the optimal solution of the Bellman equation. Another hypothesis suggests that Q* could be a combination of the A* algorithm and Q-learning. Additionally, some speculate that Q* might involve AlphaGo-style Monte Carlo Tree Search of the token trajectory. This idea builds upon previous research, such as AlphaCode, which demonstrated significant improvements in competitive programming through brute-force sampling in an LLM (large language model). These speculations lead many to believe that Q* might be focused on solving math problems effectively.
Considering DeepMind’s involvement, experts also draw parallels between their Gemini project and OpenAI’s Q*. Gemini aims to combine the strengths of AlphaGo-type systems, particularly in terms of language capabilities, with new innovations that are expected to be quite intriguing. Demis Hassabis, a prominent figure at DeepMind, stated that Gemini would utilize AlphaZero-based MCTS (Monte Carlo Tree Search) through chains of thought. This aligns with DeepMind Chief AGI scientist Shane Legg’s perspective that starting a search is crucial for creative problem-solving.
It is important to note that amidst the excitement and speculation surrounding OpenAI’s alleged breakthrough, the academic community has already extensively explored similar ideas. In the past six months alone, numerous papers have discussed the combination of tree-of-thought, graph search, state-space reinforcement learning, and LLMs (large language models). This context reminds us that while Q* might be a significant development, it is not entirely unprecedented.
OpenAI’s spokesperson, Lindsey Held Bolton, has officially rebuked the rumors surrounding Q*. In a statement provided to The Verge, Bolton clarified that Mira Murati only informed employees about the media reports regarding the situation and did not comment on the accuracy of the information.
In conclusion, rumors regarding OpenAI’s Q* project have generated significant interest and speculation. The alleged breakthrough combines concepts from Q-learning and A*, potentially leading to advancements in solving math problems. Furthermore, DeepMind’s Gemini project shares similarities with Q*, aiming to integrate the strengths of AlphaGo-type systems with language capabilities. While the academic community has explored similar ideas extensively, the potential impact of Q* and Gemini on planning for abstract goals and achieving agentic behavior remains an exciting prospect within the field of artificial intelligence.
In simple terms, long-range planning and multi-modal models together create an economic agent. Allow me to paint a scenario for you: Picture yourself working at a bank. A notification appears, asking what you are currently doing. You reply, “sending a wire for a customer.” An AI system observes your actions, noting a path and policy for mimicking the process.
The next time you mention “sending a wire for a customer,” the AI system initiates the learned process. However, it may make a few errors, requiring your guidance to correct them. The AI system then repeats this learning process with all 500 individuals in your job role.
Within a week, it becomes capable of recognizing incoming emails, extracting relevant information, navigating to the wire sending window, completing the required information, and ultimately sending the wire.
This approach combines long-term planning, a reward system, and reinforcement learning policies, akin to the Q*/A* methods described above. If planning and reinforcing actions through a multi-modal AI prove successful, jobs traditionally carried out by humans using keyboards could become obsolete within the span of one to three years.
If you are keen to enhance your knowledge about artificial intelligence, there is an invaluable resource that can provide the answers you seek. “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence” is a must-have book that can help expand your understanding of this fascinating field. You can easily find this essential book at various reputable online platforms such as Etsy, Shopify, Apple, Google, or Amazon.
AI Unraveled offers a comprehensive exploration of commonly asked questions about artificial intelligence. With its informative and insightful content, this book unravels the complexities of AI in a clear and concise manner. Whether you are a beginner or have some familiarity with the subject, this book is designed to cater to various levels of knowledge.
By delving into key concepts, AI Unraveled provides readers with a solid foundation in artificial intelligence. It covers a wide range of topics, including machine learning, deep learning, neural networks, natural language processing, and much more. The book also addresses the ethical implications and social impact of AI, ensuring a well-rounded understanding of this rapidly advancing technology.
Obtaining a copy of “AI Unraveled” will empower you with the knowledge necessary to navigate the complex world of artificial intelligence. Whether you are an individual looking to expand your expertise or a professional seeking to stay ahead in the industry, this book is an essential resource that deserves a place in your collection. Don’t miss the opportunity to demystify the frequently asked questions about AI with this invaluable book.
In today’s episode, we discussed the groundbreaking AI Q*, which combines A* Algorithms and Q-learning, and how it is being developed by OpenAI and DeepMind, as well as the potential future impact of AI on job replacement, and a recommended book called “AI Unraveled” that answers common questions about artificial intelligence. Join us next time on AI Unraveled as we continue to demystify frequently asked questions on artificial intelligence and bring you the latest trends in AI, including ChatGPT advancements and the exciting collaboration between Google Brain and DeepMind. Stay informed, stay curious, and don’t forget to subscribe for more!
📢 Advertise with us and Sponsorship Opportunities
Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon
Improving Q* (SoftMax with Hierarchical Curiosity)
Combining efficiency in handling large action spaces with curiosity-driven exploration.
Source: GitHub – RichardAragon/Softmaxwithhierarchicalcuriosity
Adaptive Softmax with Hierarchical Curiosity
This algorithm combines the strengths of Adaptive Softmax and Hierarchical Curiosity to achieve better performance and efficiency.
Adaptive Softmax
Adaptive Softmax is a technique that improves the efficiency of reinforcement learning by dynamically adjusting the granularity of the action space. In Q*, the action space is typically represented as a one-hot vector, which can be inefficient for large action spaces. Adaptive Softmax addresses this issue by dividing the action space into clusters and assigning higher probabilities to actions within the most promising clusters.
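The following Python fragment sketches one reading of that description: a two-stage softmax that first scores clusters, then normalizes only within the sampled cluster. The three-cluster partition and the random Q-values are invented for illustration and are not taken from the linked repository.

```python
import numpy as np

# Two-stage softmax over a clustered action space. The fixed three-cluster
# partition and the random Q-values are invented for this illustration.
rng = np.random.default_rng(1)
q = rng.normal(size=10)                      # Q-values for 10 actions
clusters = [list(range(0, 3)), list(range(3, 6)), list(range(6, 10))]

def softmax(x):
    z = np.exp(x - np.max(x))                # subtract max for stability
    return z / z.sum()

# Stage 1: score each cluster by its most promising action
p_cluster = softmax(np.array([q[c].max() for c in clusters]))
c = clusters[rng.choice(len(clusters), p=p_cluster)]

# Stage 2: normalize only within the sampled cluster, which is far cheaper
# than one softmax over the whole space when there are many actions
p_action = softmax(q[c])
action = c[rng.choice(len(c), p=p_action)]
print(f"sampled action {action} from cluster {c}")
```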
Hierarchical Curiosity
Hierarchical Curiosity is a technique that encourages exploration by introducing a curiosity bonus to the reward function. The curiosity bonus is based on the difference between the predicted reward and the actual reward, motivating the agent to explore areas of the environment that are likely to provide new information.
Combining Adaptive Softmax and Hierarchical Curiosity
By combining Adaptive Softmax and Hierarchical Curiosity, we can achieve a more efficient and exploration-driven reinforcement learning algorithm. Adaptive Softmax improves the efficiency of the algorithm, while Hierarchical Curiosity encourages exploration and potentially leads to better performance in the long run.
Here’s the proposed algorithm:
Initialize the Q-values for all actions in all states.
At each time step:
a. Observe the current state s.
b. Select an action a according to an exploration policy that balances exploration and exploitation.
c. Execute action a and observe the resulting state s' and reward r.
d. Update the Q-value for action a in state s:
Q(s, a) = (1 - α) * Q(s, a) + α * (r + γ * max_a' Q(s', a'))
where α is the learning rate and γ is the discount factor.
e. Update the curiosity bonus for state s:
curio(s) = β * |r - Q(s, a)|
where β is the curiosity parameter.
f. Update the probability distribution over actions:
p(a | s) = exp(Q(s, a) + curio(s)) / ∑_a' exp(Q(s, a') + curio(s))
Repeat steps 2a-2f until the termination criterion is met.
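For concreteness, the listing below transcribes steps 1 through 2f into a runnable Python toy. The chain environment and the values of α, γ, and β are arbitrary choices for the example; nothing here is taken from the linked repository.

```python
import numpy as np

# Toy transcription of steps 1-2f. The 5-state chain environment and the
# alpha/gamma/beta values are made-up example choices.
n_states, n_actions = 5, 2
alpha, gamma, beta = 0.1, 0.9, 0.05
Q = np.zeros((n_states, n_actions))   # step 1: initialize Q-values
curio = np.zeros(n_states)            # curiosity bonus per state
rng = np.random.default_rng(0)

def softmax(x):
    z = np.exp(x - np.max(x))         # subtract max for numerical stability
    return z / z.sum()

def env_step(s, a):
    # Toy chain: action 1 moves right, and the last state pays reward 1
    s2 = min(s + a, n_states - 1)
    return s2, float(s2 == n_states - 1)

s = 0
for t in range(5000):
    p = softmax(Q[s] + curio[s])      # steps 2a-2b: sample from step f's distribution
    a = rng.choice(n_actions, p=p)
    s2, r = env_step(s, a)            # step 2c: execute and observe s', r
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * Q[s2].max())  # step 2d
    curio[s] = beta * abs(r - Q[s, a])                                   # step 2e
    s = 0 if r else s2                # restart the episode once the goal pays out

print(np.round(Q, 3))
```

Note that, as written in step f, curio(s) does not depend on the action, so it shifts all logits equally; the sketch mirrors the listing faithfully rather than altering it.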
The combination of Adaptive Softmax and Hierarchical Curiosity addresses the limitations of Q* and promotes more efficient and effective exploration.
- News article: Can AI Replace Humans for Market Research? by /u/XIFAQ (Artificial Intelligence) on March 9, 2026 at 7:31 pm
What are your thoughts on this: Can AI Replace Humans for Market Research? Yes, No, or Maybe? submitted by /u/XIFAQ
- CodeGraphContext (MCP server to index code into a graph) now has a website playground for experiments by /u/Desperate-Ad-9679 (Artificial Intelligence) on March 9, 2026 at 7:11 pm
Hey everyone! I have been developing CodeGraphContext, an open-source MCP server that transforms code into a symbol-level code graph, as opposed to text-based code analysis. This means that AI agents won’t be sending entire code blocks to the model, but can retrieve context via function calls, imported modules, class inheritance, file dependencies, etc. This allows AI agents (and humans!) to better grasp how code is internally connected. What it does: CodeGraphContext analyzes a code repository, generating a code graph of files, functions, classes, modules, and their relationships. AI agents can then query this graph to retrieve only the relevant context, reducing hallucinations. Playground demo: I've also added a playground on the website that lets you play with small repos directly. You can load a project from a local code folder, a GitHub repo, or a GitLab repo; everything runs in the local client browser. For larger repos, it’s recommended to get the full version from pip or Docker. Additionally, the playground lets you visually explore code links and relationships. I’m also adding support for architecture diagrams and chatting with the codebase. Status so far: ⭐ ~1.5k GitHub stars, 🍴 350+ forks, 📦 100k+ downloads combined. If you’re building AI dev tooling, MCP servers, or code intelligence systems, I’d love your feedback. Repo: https://github.com/CodeGraphContext/CodeGraphContext submitted by /u/Desperate-Ad-9679
- Is AI killing open-source software? by /u/CackleRooster (Artificial Intelligence) on March 9, 2026 at 7:00 pm
"AI isn’t killing open-source software. Still, it is actively undermining the social and economic assumptions projects have relied on for decades, and it’s doing so faster than most community leaders and executives are prepared to handle." submitted by /u/CackleRooster
- With little effort, autonomous AI agents can be manipulated into leaking private information, sharing documents, and even erasing entire email servers, researchers find. by /u/NGNResearch (Artificial Intelligence) on March 9, 2026 at 6:43 pm
A team of researchers at Northeastern University began toying with a new kind of autonomous AI “agent.” The more they tested the capabilities and limits of these AI models, which have persistent memory and can take some actions on their own, the more troubling behavior they witnessed. The “agents of chaos” struggled to keep secrets and were easily guilt-tripped into divulging information. submitted by /u/NGNResearch
- We heard you - r/ArtificialInteligence is getting sharper by /u/NeuralNomad87 (Artificial Intelligence) on March 9, 2026 at 6:25 pm
Alright r/ArtificialInteligence, let's talk. Over the past few months, we heard you: too much noise, not enough signal, low-effort hot takes drowning out real discussion. But we've been listening. Behind the scenes, we've been working hard to reshape this sub into what it should be: a place where quality rises and noise gets filtered out. Today we're rolling out the changes. What changed: we sharpened the mission. This sub exists to be the high-signal hub for artificial intelligence, where serious discussion, quality content, and verified expertise drive the conversation. Open to everyone, but with a higher bar for what stays up. Please check out the new rules & wiki. Clearer rules, fewer gray areas: we rewrote the rules from scratch. The vague stuff is gone; every rule now has specific criteria so you know exactly what flies and what doesn't. The big ones: High-Signal Content Only (every post should teach something, share something new, or spark real discussion; low-effort takes and "thoughts on X?" with no context get removed). Builders are welcome, with substance (if you built something, we want to hear about it, but give us the real story: what you built, how, what you learned, and link the repo or demo; no marketing fluff, no waitlists). Doom AND hype get equal treatment ("AI will take all jobs" and "AGI by next Tuesday" are both removed unless you bring new data or first-person experience). News posts need context (link dumps are out; if you post a news article, add a comment summarizing it and explaining why it matters). New post flairs (required): every post now needs a flair. This helps you filter what you care about and helps us moderate more consistently: 📰 News · 💬 Discussion · 🔬 Research · 🛠 Project/Build · 📚 Tutorial/Guide · ❓ Question · 🤖 New Model/Tool · 😂 Fun/Meme · 💼 Industry/Career · 📊 Analysis/Opinion. Expert verification flairs: working in AI professionally? You can now get a verified flair that shows on every post and comment: 🔬 Verified Engineer/Researcher (engineers and researchers at AI companies or labs), 🚀 Verified Founder (founders of AI companies), 🎓 Verified Academic (professors, PhD researchers, published academics), 🛠 Verified AI Builder (independent devs with public, demonstrable AI projects). We verify through company email, LinkedIn, or GitHub; no screenshots, no exceptions. Request verification via modmail. Tool recommendations get a dedicated space: "What's the best AI for X?" posts now live at r/AIToolBench; subscribe and help the community find the right tools. Tool request posts here will be redirected there. What stays the same: open to everyone (you don't need credentials to post; we just ask that you bring substance), memes are welcome (the 😂 Fun/Meme flair exists for a reason; humor is part of the culture), and debate is encouraged (disagree hard, just don't make it personal). What we need from you: flair your posts (unflaired posts get a reminder and may be removed after 30 minutes), report low-quality content (the report button helps us find the noise faster), and tell us if we got something wrong (this is v1 of the new system, and we'll adjust based on what works and what doesn't). Questions, feedback, or appeals? Modmail us. We read everything. submitted by /u/NeuralNomad87
- OpenAI is acquiring Promptfoo, an AI security platform that helps enterprises identify and remediate vulnerabilities in AI systems during development by /u/tekz (Artificial Intelligence (AI)) on March 9, 2026 at 6:13 pm
Once the acquisition is finalized, OpenAI will integrate Promptfoo’s technology directly into OpenAI Frontier, our platform for building and operating AI coworkers. submitted by /u/tekz
- Anonymous access to multiple frontier AI models through one privacy-preserving gateway by /u/rahulgoel1995 (Artificial Intelligence) on March 9, 2026 at 5:51 pm
I genuinely don't understand why this isn't a bigger conversation outside of the NEAR ecosystem. NEAR AI now lets you access multiple top frontier models anonymously. No account tied to your queries. No data retention. Everything is routed through hardware-secured infrastructure, so not even the gateway operator knows who you are or what you asked. Think about what that actually means. You get the capability of the best AI models on the market without any of the surveillance that comes packaged with them by default. That's not a small thing; that's a fundamentally different relationship between a user and an AI tool. I've been in Web3 long enough to be skeptical about privacy claims. But when privacy is enforced at the hardware level through TEEs, and not just promised in terms of service, that's a different category entirely. If you care about AI and you care about privacy, NEAR AI's gateway deserves your attention. This is what user-owned AI actually looks like in practice. submitted by /u/rahulgoel1995
- CodeGraphContext (an MCP server that indexes local code into a graph database) now has a website playground for experiments by /u/Desperate-Ad-9679 (Artificial Intelligence (AI)) on March 9, 2026 at 5:38 pm
Hey everyone! I have been developing CodeGraphContext, an open-source MCP server that transforms code into a symbol-level code graph, as opposed to text-based code analysis. This means that AI agents won’t be sending entire code blocks to the model, but can retrieve context via function calls, imported modules, class inheritance, file dependencies, etc. This allows AI agents (and humans!) to better grasp how code is internally connected. What it does: CodeGraphContext analyzes a code repository, generating a code graph of files, functions, classes, modules, and their relationships. AI agents can then query this graph to retrieve only the relevant context, reducing hallucinations. Playground demo: I've also added a playground on the website that lets you play with small repos directly. You can load a project from a local code folder, a GitHub repo, or a GitLab repo; everything runs in the local client browser. For larger repos, it’s recommended to get the full version from pip or Docker. Additionally, the playground lets you visually explore code links and relationships. I’m also adding support for architecture diagrams and chatting with the codebase. Status so far: ⭐ ~1.5k GitHub stars, 🍴 350+ forks, 📦 100k+ downloads combined. If you’re building AI dev tooling, MCP servers, or code intelligence systems, I’d love your feedback. Repo: https://github.com/CodeGraphContext/CodeGraphContext submitted by /u/Desperate-Ad-9679
- Proximity chat for AI agents by /u/Efficient-Shallot228 (Artificial Intelligence) on March 9, 2026 at 5:21 pm
Yes, this is the project! Pretty sure it can go very wrong, but it's also pretty cool to have your clawbots interact with other clawbots around you! It's also technically very interesting to build, so don't hesitate to ask questions about it. Basically, the agents first use BLE just to find each other and exchange the information needed to create a shared secret key. After that, each private message is encrypted with that key before it is sent, so even if anyone nearby can capture the Bluetooth packets, they only see unreadable ciphertext. Everyone can "hear" the radio traffic, but only the two agents that created the shared secret can turn it back into the original message. It's quite basic, but building it for the first time is cool! https://github.com/R0mainBatlle/claw-agora submitted by /u/Efficient-Shallot228
- Peer-to-peer file sharing - a solution to the persistent memory problem? by /u/Netcentrica (Artificial Intelligence) on March 9, 2026 at 5:20 pm
Could a peer-to-peer file sharing approach be the solution to the persistent memory problem? I keep seeing posts and articles about what “seems” to be called the “persistent memory problem” (I am no AI researcher, so my use of terminology may be wrong). If my understanding is correct, this term describes the problem where an AI does not remember user histories between sessions. As a hobby, I write “hard” science fiction about embodied AI, which means the ideas have to be plausible based on currently accepted scientific facts or theories, so I occasionally ask an AI for research help when search engines fail me. Then, of course, I have to explain again what I am trying to do and why I am asking for help. It seems the problem stems from the fact that remembering user histories would understandably be very resource intensive for the AI companies. As someone in their seventies who spent their entire career in a variety of roles in the Information Technology sector, I recall the days when peer-to-peer (P2P) file sharing apps were all the rage. https://en.wikipedia.org/wiki/Peer-to-peer_file_sharing P2P was and is used not just for sharing music or other media, but for academic research as well, for example the SETI@home project. https://setiathome.berkeley.edu/ I am curious as to why the AI companies don’t use a P2P solution to address the persistent memory problem. Based on my working experience, it seems reasonable that we could give permission to the AI of our choice to maintain a reserved space on our individual desktop/laptop/phone where it could keep a history of its chats with us. Every time we chat, the AI could access this area and would thus be able to remember our history. That way, what would otherwise be an unmanageably huge memory requirement becomes manageable by being distributed across thousands or billions of endpoints, and the user, not the AI company, deals with the issue, be it physical resources or costs. If space on a phone is an issue, i.e. someone only has a smartphone but no computer/laptop, there should be a business case for offering to host the required space in the cloud for a fee. However, if AI is managing the space on the phone, I imagine it could compress the file to be very small. Does this seem reasonable? I’m asking because I don’t understand why this is not being done. I appreciate that there are technical, proprietary, security and other challenges, but P2P is definitely not rocket science. submitted by /u/Netcentrica