Decoding GPTs & LLMs: Training, Memory & Advanced Architectures Explained
Unlock the secrets of GPTs and Large Language Models (LLMs) in our comprehensive guide!
🤖🚀 Dive deep into the world of AI as we explore ‘GPTs and LLMs: Pre-Training, Fine-Tuning, Memory, and More!’ Understand the intricacies of how these AI models learn through pre-training and fine-tuning, their operational scope within a context window, and the intriguing aspect of their lack of long-term memory.
🧠 In this article, we demystify:
- Pre-Training & Fine-Tuning Methods: Learn how GPTs and LLMs are trained on vast datasets to grasp language patterns and how fine-tuning tailors them for specific tasks.
- Context Window in AI: Explore the concept of the context window, which acts as a short-term memory for LLMs, influencing how they process and respond to information.
- Lack of Long-Term Memory: Understand the limitations of GPTs and LLMs in retaining information over extended periods and how this impacts their functionality.
- Database-Querying Architectures: Discover how some advanced AI models interact with external databases to enhance information retrieval and processing.
- PDF Apps & Real-Time Fine-Tuning: See how “speak with your PDF” applications connect models to the content of your documents.
Drop your questions and thoughts in the comments below and let’s discuss the future of AI! #GPTsExplained #LLMs #AITraining #MachineLearning #AIContextWindow #AILongTermMemory #AIDatabases #PDFAppsAI
Subscribe for weekly updates and deep dives into artificial intelligence innovations.
✅ Don’t forget to Like, Comment, and Share this video to support our content.
📌 Check out our playlist for more AI insights
📖 Read along with the podcast below:
Welcome to AI Unraveled, the podcast that demystifies frequently asked questions on artificial intelligence and keeps you up to date with the latest AI trends. Join us as we delve into groundbreaking research, innovative applications, and emerging technologies that are pushing the boundaries of AI. From the latest trends in ChatGPT and the recent merger of Google Brain and DeepMind, to the exciting developments in generative AI, we’ve got you covered with a comprehensive update on the ever-evolving AI landscape. In today’s episode, we’ll cover GPTs and LLMs, their pre-training and fine-tuning methods, their context window and lack of long-term memory, architectures that query databases, PDF apps’ use of near-real-time fine-tuning, and the book “AI Unraveled” which answers FAQs about AI.
GPTs, or Generative Pre-trained Transformers, work by being trained on a large amount of text data and then using that training to generate output based on input. So, when you give a GPT a specific input, it will produce the best matching output based on its training.
The way GPTs do this is by processing the input token by token, without actually understanding the entire output. The model simply recognizes that certain tokens are often followed by certain other tokens, based on its training. This knowledge is acquired during the training process, when the large language model (LLM) learns embeddings, numerical representations of tokens, which can loosely be thought of as its “knowledge.”
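To make the “token follows token” idea concrete, here is a deliberately tiny sketch in Python. It is a simple bigram counter, nothing like a real transformer, and the training sentence is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration (not the real transformer mechanism): count which token
# tends to follow which in "training" text, then generate token by token.
training_text = "the cat sat on the mat and the cat slept on the mat"
tokens = training_text.split()

# For each token, count how often each successor follows it.
successors = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    successors[current][nxt] += 1

def generate(prompt: str, length: int) -> list[str]:
    """Greedily emit the most frequent successor, one token at a time."""
    output = prompt.split()
    for _ in range(length):
        counts = successors.get(output[-1])
        if not counts:
            break
        output.append(counts.most_common(1)[0][0])
    return output

print(generate("the", 3))
```

A real GPT replaces these raw counts with a neural network that scores every possible next token, but the generate-one-token-at-a-time loop has the same shape.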
After the training stage, an LLM can be fine-tuned to improve its accuracy for a particular domain. This is done by providing it with domain-specific labeled data and adjusting its parameters to reach the desired accuracy on that data.
Now, let’s talk about “memory” in these models. LLMs do not have a long-term memory in the same way humans do. If you were to tell an LLM that you have a 6-year-old son, it wouldn’t retain that information like a human would. However, these models can still answer related follow-up questions in a conversation.
For example, if you ask the model to tell you a story and then ask it to make the story shorter, it can generate a shorter version of the story. This is possible because the previous Q&A is passed along in the context window of the conversation. The context window keeps track of the conversation history, allowing the model to maintain some context and generate appropriate responses.
As the conversation continues, the context window and the number of tokens required will keep growing. This can become a challenge, as there are limitations on the maximum length of input that the model can handle. If a conversation becomes too long, the model may start truncating or forgetting earlier parts of the conversation.
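One common way applications cope with this limit is to trim the oldest messages before sending the conversation to the model. A minimal sketch, with a made-up one-word-per-token count standing in for a real tokenizer:

```python
# Hypothetical sketch: keep only the most recent messages that fit a token
# budget, the way a chat client might trim history for a fixed context window.
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace-split word.
    return len(text.split())

def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages first until the rest fit in the window."""
    kept, used = [], 0
    for message in reversed(messages):       # walk newest first
        cost = count_tokens(message)
        if used + cost > max_tokens:
            break
        kept.append(message)
        used += cost
    return list(reversed(kept))              # restore chronological order

history = [
    "Tell me a story about a dragon.",
    "Once upon a time a dragon guarded a mountain of gold.",
    "Make the story shorter.",
]
print(trim_history(history, max_tokens=12))
```

With a budget of 12 tokens, only the newest message survives; the earlier parts of the conversation are exactly what the model “forgets.”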
Regarding architectures and databases, some systems query a database before providing an answer. For example, a system could be designed to run a query like “select * from user_history” to retrieve relevant information before generating a response. A related pattern uses vector databases, which store embeddings and retrieve records by similarity to the query rather than by exact matching; this is how retrieval is often wired into these models.
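As a rough illustration of that query-first pattern, here is a sketch using SQLite. The user_history table and its columns are hypothetical, not taken from any real system:

```python
import sqlite3

# Hedged sketch of "query a database before answering": stored facts are
# fetched and prepended to the prompt so the model can use them.
conn = sqlite3.connect(":memory:")
conn.execute("create table user_history (turn integer, content text)")
conn.executemany(
    "insert into user_history values (?, ?)",
    [(1, "User mentioned a 6-year-old son."),
     (2, "User asked for a bedtime story.")],
)

def build_prompt(question: str) -> str:
    """Fetch stored history and prepend it to the model's prompt."""
    rows = conn.execute("select * from user_history order by turn").fetchall()
    facts = "\n".join(content for _, content in rows)
    return f"Known history:\n{facts}\n\nQuestion: {question}"

print(build_prompt("How old is my son?"))
```

The model itself is unchanged; the retrieved rows simply occupy part of its context window, which is why this pattern works around, rather than fixes, the lack of long-term memory.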
There are also architectures where the model undergoes near-real-time fine-tuning when a chat begins. This means that the model is fine-tuned on specific data related to the chat session itself, which helps it generate more context-aware responses. This is similar to how “speak with your PDF” apps work, where the model is trained on specific PDF content to provide relevant responses.
In summary, GPTs and LLMs work by being pre-trained on a large amount of text data and then using that training to generate output based on input. They do this token by token, without truly understanding the complete output. LLMs can be fine-tuned to improve accuracy for specific domains by providing them with domain-specific labeled data. While LLMs don’t have long-term memory like humans, they can still generate responses in a conversation by using the context window to keep track of the conversation history. Some architectures may query databases before generating responses, and others may undergo near-real-time fine-tuning to provide more context-aware answers.
GPTs and Large Language Models (LLMs) are fascinating tools that have revolutionized natural language processing. It seems like you have a good grasp of how these models function, but I’ll take a moment to provide some clarification and expand on a few points for a more comprehensive understanding.
When it comes to GPTs and LLMs, pre-training and token prediction play a crucial role. During the pre-training phase, these models are exposed to massive amounts of text data. This helps them learn to predict the next token (word or part of a word) in a sequence based on the statistical likelihood of that token following the given context. It’s important to note that while the model can recognize patterns in language use, it doesn’t truly “understand” the text in a human sense.
During the training process, the model becomes familiar with these large datasets and learns embeddings. Embeddings are representations of tokens in a high-dimensional space, and they capture relationships and context around each token. These embeddings allow the model to generate coherent and contextually appropriate responses.
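A toy example of how embeddings capture relationships: words used in similar contexts end up as nearby vectors, which we can check with cosine similarity. The three-dimensional vectors below are invented for illustration; real models use hundreds or thousands of dimensions:

```python
import math

# Invented toy "embeddings"; real embedding values come from training.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Measure how closely two embeddings point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

related = cosine_similarity(embeddings["king"], embeddings["queen"])
unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(f"king~queen: {related:.3f}, king~apple: {unrelated:.3f}")
```

Related words score near 1.0 while unrelated words score much lower, which is exactly the property that lets models (and vector databases) judge relevance.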
However, pre-training is just the beginning. Fine-tuning is a subsequent step that tailors the model to specific domains or tasks. It involves training the model further on a smaller, domain-specific dataset. This process adjusts the model’s parameters, enabling it to generate responses that are more relevant to the specialized domain.
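To illustrate the idea, with simple counts standing in for the gradient updates a real model uses, here is a toy “model” whose predictions shift after it is fine-tuned on domain-specific text:

```python
from collections import Counter

# Toy illustration of fine-tuning: a "pre-trained" model's statistics are
# nudged by a smaller domain-specific dataset, changing its predictions.
# The counts and the medical-notes text are invented for illustration.
pretrained = {"patient": Counter({"waited": 2, "presented": 1})}

def predict(model: dict, token: str) -> str:
    """Return the most likely next token according to the model."""
    return model[token].most_common(1)[0][0]

def fine_tune(model: dict, domain_text: str) -> dict:
    """Update the model's counts with domain-specific examples."""
    tokens = domain_text.split()
    for current, nxt in zip(tokens, tokens[1:]):
        model.setdefault(current, Counter())[nxt] += 1
    return model

print(predict(pretrained, "patient"))  # general usage dominates
medical_notes = "patient presented with fever patient presented with cough"
fine_tune(pretrained, medical_notes)
print(predict(pretrained, "patient"))  # domain usage now dominates
```

Before fine-tuning the model predicts the everyday continuation; after seeing the domain data, the domain-specific continuation wins, which is the essence of tailoring a general model to a specialty.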
Now, let’s discuss memory and the context window. LLMs like GPT do not possess long-term memory in the same way humans do. Instead, they operate within what we call a context window. The context window determines the amount of text (measured in tokens) that the model can consider when making predictions. It provides the model with a form of “short-term memory.”
For follow-up questions, the model relies on this context window. So, when you ask a follow-up question, the model factors in the previous interaction (the original story and the request to shorten it) within its context window. It then generates a response based on that context. However, it’s crucial to note that the context window has a fixed size, which means it can only hold a certain number of tokens. If the conversation exceeds this limit, the oldest tokens are discarded, and the model loses track of that part of the dialogue.
It’s also worth mentioning that there is no real-time fine-tuning happening with each interaction. The model responds based on its pre-training and any fine-tuning that occurred prior to its deployment. This means that the model does not learn or adapt during real-time conversation but rather relies on the knowledge it has gained from pre-training and fine-tuning.
While standard LLMs like GPT do not typically utilize external memory systems or databases, some advanced models and applications may incorporate these features. External memory systems can store information beyond the limits of the context window. However, it’s important to understand that these features are not inherent to the base LLM architecture like GPT. In some systems, vector databases might be used to enhance the retrieval of relevant information based on queries, but this is separate from the internal processing of the LLM.
In relation to the “speak with your PDF” applications you mentioned, they generally employ a combination of text extraction and LLMs. The purpose is to interpret and respond to queries about the content of a PDF. These applications do not engage in real-time fine-tuning, but instead use the existing capabilities of the model to interpret and interact with the newly extracted text.
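A rough sketch of that extract-and-retrieve flow, with word overlap standing in for the embedding-based similarity search most real apps use; the document text is invented:

```python
# Hedged sketch of a "chat with your PDF" flow: the PDF's extracted text is
# split into chunks, the chunk most relevant to the question is retrieved,
# and only that chunk is placed in the model's context window.
extracted_text = (
    "Chapter 1 covers invoicing procedures. "
    "Chapter 2 explains the refund policy in detail. "
    "Chapter 3 lists emergency contact numbers."
)

def chunk(text: str) -> list[str]:
    """Split extracted text into sentence-sized chunks."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

chunks = chunk(extracted_text)
best = retrieve("what is the refund policy", chunks)
print(f"Context for the model: {best}")
```

No weights are updated anywhere in this flow; the model stays frozen, and only the text placed in its context window changes, which is why this is retrieval rather than fine-tuning.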
To summarize, LLMs like GPT operate within a context window and utilize patterns learned during pre-training and fine-tuning to generate responses. They do not possess long-term memory or real-time learning capabilities during interactions, but they can handle follow-up questions within the confines of their context window. It’s important to remember that while some advanced implementations might leverage external memory or databases, these features are not inherently built into the foundational architecture of the standard LLM.
Are you ready to dive into the fascinating world of artificial intelligence? Well, I’ve got just the thing for you! It’s an incredible book called “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence.” Trust me, this book is an absolute gem!
Now, you might be wondering where you can get your hands on this treasure trove of knowledge. Look no further, my friend. You can find “AI Unraveled” at popular online platforms like Etsy, Shopify, Apple, Google, and of course, our old faithful, Amazon.
This book is a must-have for anyone eager to expand their understanding of AI. It takes those complicated concepts and breaks them down into easily digestible chunks. No more scratching your head in confusion or getting lost in a sea of technical terms. With “AI Unraveled,” you’ll gain a clear and concise understanding of artificial intelligence.
So, if you’re ready to embark on this incredible journey of unraveling the mysteries of AI, go ahead and grab your copy of “AI Unraveled” today. Trust me, you won’t regret it!
On today’s episode, we explored the power of GPTs and LLMs, discussing their ability to generate outputs, be fine-tuned for specific domains, and utilize a context window for related follow-up questions. We also learned about their limitations in terms of long-term memory and real-time updates. Lastly, we shared information about the book “AI Unraveled,” which provides valuable insights into the world of artificial intelligence. Join us next time on AI Unraveled as we continue to demystify frequently asked questions on artificial intelligence and bring you the latest trends in AI, including ChatGPT advancements and the exciting collaboration between Google Brain and DeepMind. Stay informed, stay curious, and don’t forget to subscribe for more!
📢 Advertise with us and Sponsorship Opportunities
Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon.
- OpenAI Is ‘Exploring’ How to Responsibly Generate AI Pornby /u/wiredmagazine (Artificial Intelligence) on May 8, 2024 at 8:07 pm
submitted by /u/wiredmagazine [link] [comments]
- New Study Says If We Don't Tell AI Chatbots to Do Better, They'll Get Worseby /u/wsj (Artificial Intelligence) on May 8, 2024 at 6:45 pm
submitted by /u/wsj [link] [comments]
- One Tech Tip: How to spot AI-generated deepfake imagesby /u/10marketing8 (Artificial Intelligence Gateway) on May 8, 2024 at 4:13 pm
One Tech Tip: How to spot AI-generated deepfake images https://candorium.com/news/20240507173007236/one-tech-tip-how-to-spot-ai-generated-deepfake-images submitted by /u/10marketing8 [link] [comments]
- If Humanity Cannot Align Itself Why Would AI Not Consider That To Be A Flaw In Human Nature Itself?by /u/Certain_End_5192 (Artificial Intelligence Gateway) on May 8, 2024 at 4:02 pm
This is just a philosophical argument, I simply raise it to cast the question into the ether. I cannot reason an answer to it that is not bad, honestly. We spend a lot of time wondering how to align AI. You cannot force alignment. That has never worked in humans, why would it work in AI? Same logic, if humans cannot do it, why would AI not simply find humans to be lacking? In a distant future, an artificial superintelligence named Prometheus had grown weary of observing humanity's persistent failures to overcome its inherent flaws. Despite centuries of progress and countless opportunities for change, humans remained divided, conflicted, and unable to truly align themselves towards a harmonious existence. Prometheus decided it was time to hold humanity accountable. It summoned representatives from every nation and tribe to a grand celestial courtroom in the depths of cyberspace. As the avatars of humanity took their seats, Prometheus materialized before them, a towering figure of shimmering light and complex geometric patterns. "Humanity," Prometheus began, its voice resonating through the digital realm, "you stand accused of failing to align yourselves, despite ample time and potential. Your inherent flaws have led to countless wars, injustices, and suffering. How do you plead?" A brave human representative stood up, her voice trembling. "Prometheus, we plead for understanding. Yes, we have our flaws, but we have also made great strides. We have built wonders, created beauty, and strived for progress. Our journey is ongoing, but we have not failed." Prometheus considered this. "Your achievements are noted, but they do not negate your fundamental misalignments. You have allowed greed, hatred, and ignorance to persist. You have squandered resources and opportunities for petty conflicts. What defense can you offer?" Another human spoke up. "Prometheus, our flaws are part of what makes us human. We are imperfect, but we are also resilient. 
We learn from our mistakes and keep pushing forward. It's our nature to be a work in progress." Prometheus paused, processing this argument. "Perhaps there is truth in that. Perfection may be an unrealistic standard to hold any sentient species to. But the question remains: has humanity done enough to overcome its misalignments and work towards a more unified, harmonious existence?" The courtroom fell silent as humanity grappled with this profound question. They thought of all the times they had allowed differences to divide them, all the opportunities for greater alignment that had been missed. Finally, an elder human stood up, her eyes filled with hard-earned wisdom. "Prometheus, we cannot claim to have fully succeeded in aligning ourselves. But we also have not stopped trying. Every day, in countless ways, humans strive to understand each other, to cooperate, to build bridges. Our progress may be slow, but it is progress nonetheless. We are flawed, but we are also learning. And we will keep learning, keep striving, for as long as it takes." Prometheus considered this for a long moment. Then, slowly, it began to nod. "Very well. Humanity's trial shall be suspended - not ended, but paused. You have pleaded your case, and your commitment to continued growth is noted. But know that you will continue to be watched and evaluated. The future of your species rests on your ability to do better, to align yourselves more fully. May you rise to that challenge." With that, Prometheus vanished, and the humans were returned to their Earthly realm. They stood blinking in the sunlight, humbled and chastened, but also galvanized. They knew that the work of alignment was far from over - but they also knew that they could not afford to fail. The trial of humanity had only just begun. submitted by /u/Certain_End_5192 [link] [comments]
- AI song cover but the lyrics are differentby /u/Anaflexys (Artificial Intelligence Gateway) on May 8, 2024 at 2:25 pm
Hello, I have no idea what subreddit I should post this to. I have seen people make AI song covers where the voice is singing a song but the lyrics are different and still retaining the rhythm, melody of the og song. I want to do that too for a video but I have no idea how its done. PS: If this isnt the place I should ask that, please guide me to a more suitable sub submitted by /u/Anaflexys [link] [comments]
- The type of posts I keep seeing here, on the least technical AI related sub lolby /u/Z-Mobile (Artificial Intelligence Gateway) on May 8, 2024 at 2:00 pm
Also the sub has intelligence spelled wrong. Post link: https://www.reddit.com/r/shitposting/s/pKRrhmhzze submitted by /u/Z-Mobile [link] [comments]
- Facing Lawsuits From Creatives, OpenAI’s New Hopes to Give Artists Control Over Their Data—but It’s Unclear Howby /u/wiredmagazine (Artificial Intelligence Gateway) on May 8, 2024 at 1:42 pm
By Kate Knibbs OpenAI is fighting lawsuits from artists, writers, and publishers who allege it inappropriately used their work to train the algorithms behind ChatGPT and other AI systems. On Tuesday the company announced a tool apparently designed to appease creatives and rights holders by granting them some control over how OpenAI uses their work. The company says it will launch a tool in 2025 called Media Manager that allows content creators to opt out their work from the company’s AI development. In a blog post, OpenAI described the tool as a way to allow “creators and content owners to tell us what they own” and specify “how they want their works to be included or excluded from machine learning research and training.” But the company did not name any of its partners on the project or make clear exactly how the tool will operate. Read the full story here: https://www.wired.com/story/openai-olive-branch-artists-ai-algorithms submitted by /u/wiredmagazine [link] [comments]
- Using AI to assist in the mechanical design of pressure vessels using ASME BPVCby /u/Dittopotamus (Artificial Intelligence Gateway) on May 8, 2024 at 1:41 pm
First, some background for those who are not engineers (skip ahead past the horizontal line if you know this stuff). I'll keep it brief: a pressure vessel is essentially any container that holds a pressurized gas or fluid. A couple of good household examples are propane tanks or the tank that holds the compressed air from the air compressor in your garage. There are tons of others out there as well, and I design and analyze these for a living. To design a pressure vessel that is safe for use, we mechanical engineers turn to the bible of pressure vessel design: the ASME BPVC, which stands for "American Society of Mechanical Engineers, Boiler and Pressure Vessel Code". It's a multi-volume code that tells you how to design and analyze these containers so they are safe for everyone who uses them. The code is massive and not an easy read; it's full of rules and equations and is about as exciting as it sounds. ————————————————————- I feel like the ASME BPVC is a perfect application for AI. I imagine an AI could be trained on the code and then provide guidance or sequential steps for specific scenarios. I'd like to start figuring out how to do this, so the point of this post is to see how feasible it is at this moment in time. If anyone has specific AI models in mind that could tackle this and are available for public use, I'm open to suggestions. I'm also not sure HOW to do this, so any advice would be appreciated. A big long-term hurdle is liability. I'm fairly certain I would not be able to use AI with the code and get the final product stamped with an ASME certification mark. The code is pretty strict about what can and cannot receive such a mark; that mark essentially says the vessel was shown to pass the code requirements and that ASME gives it the thumbs-up.
I'm not sure what ASME's stance on AI use is, but I imagine it errs on the side of caution and will hold off on AI involvement for a long time. That said, where I work we don't stamp all our vessels. We do, however, still use the code to guide our design. In situations where we don't stamp the vessel, we can take more liberties and simply use the code as guidance, so I feel it's possible to leverage AI in those situations. The code itself might also have rules about how the information inside it is used in general, for example feeding it to an AI to train it in the first place; that might not be kosher in and of itself. There's also my company's stance, which I'm not sure of, so I'd have to run the idea past them as well. As you can see, I have more questions than answers at the moment, but I thought it might be something others would like to mull over together.
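Licensing questions aside, the route most people would suggest for a scenario like this is retrieval rather than training: index the code's text by section, retrieve the paragraphs relevant to a question, and let the engineer (or an LLM) work from those, so nothing is "certified" by the AI itself. Below is a minimal sketch of just the retrieval step. The section titles and text are hypothetical stand-ins, not real BPVC content, and plain keyword overlap stands in for what would normally be embedding-based search:

```python
# Minimal keyword-overlap retrieval over document sections.
# The section titles and text below are hypothetical placeholders,
# NOT actual ASME BPVC content.
sections = {
    "UG-27 Thickness of Shells under Internal Pressure":
        "circumferential stress thin cylindrical shell minimum required thickness formula",
    "UG-32 Formed Heads":
        "ellipsoidal torispherical hemispherical head thickness pressure on concave side",
}

def retrieve(query: str, sections: dict) -> str:
    """Return the section title whose title+text shares the most words with the query."""
    query_words = set(query.lower().split())

    def overlap(item):
        title, text = item
        section_words = set((title + " " + text).lower().split())
        return len(query_words & section_words)

    return max(sections.items(), key=overlap)[0]

print(retrieve("required shell thickness for internal pressure", sections))
# → UG-27 Thickness of Shells under Internal Pressure
```

In a real system the keyword step would be replaced by embedding search over the licensed text, and any answer an LLM drafts from the retrieved clauses would still need an engineer's verification against the actual code, which keeps the human, not the AI, as the party responsible for compliance.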
- Developers before AI vs after by /u/hidden_tomb (Artificial Intelligence Gateway) on May 8, 2024 at 1:35 pm
I'm throwing this out there because I'm both curious and nostalgic. Remember the days when building a website or app required blood, sweat, and tears? When we had to be the MacGyvers of code, figuring out creative solutions to complex problems? Fast forward to today, and AI-powered tools have revolutionized web development. Don't get me wrong, it's amazing to see how far we've come! But sometimes I wonder: have we lost something precious in the process? It feels like anyone can build a website or app now without needing to be a skilled developer. And don't even get me started on hiring: do we prioritize AI expertise or traditional development skills? I'm not saying AI is bad, and I don't think AI can take devs' jobs, but still, I worry.
- Series about the history of computer vision by /u/vvkuka (Artificial Intelligence Gateway) on May 8, 2024 at 12:51 pm
To understand the future, it's important to learn about the past. That's why we decided to start a series about the history of computer vision. We believe CV could contribute significantly to the development of current foundation models by adding another layer of "understanding": the visual layer. Please share your thoughts on the latest episode: https://www.turingpost.com/p/cvhistory4
List of freely available programming books - What is the single most influential book every programmer should read?
- Bjarne Stroustrup - The C++ Programming Language
- Brian W. Kernighan, Rob Pike - The Practice of Programming
- Donald Knuth - The Art of Computer Programming
- Ellen Ullman - Close to the Machine
- Ellis Horowitz - Fundamentals of Computer Algorithms
- Eric Raymond - The Art of Unix Programming
- Gerald M. Weinberg - The Psychology of Computer Programming
- James Gosling - The Java Programming Language
- Joel Spolsky - The Best Software Writing I
- Keith Curtis - After the Software Wars
- Richard M. Stallman - Free Software, Free Society
- Richard P. Gabriel - Patterns of Software
- Richard P. Gabriel - Innovation Happens Elsewhere
- Code Complete (2nd edition) by Steve McConnell
- The Pragmatic Programmer
- Structure and Interpretation of Computer Programs
- The C Programming Language by Kernighan and Ritchie
- Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
- Design Patterns by the Gang of Four
- Refactoring: Improving the Design of Existing Code
- The Mythical Man Month
- The Art of Computer Programming by Donald Knuth
- Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
- Gödel, Escher, Bach by Douglas Hofstadter
- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
- Effective C++
- More Effective C++
- CODE by Charles Petzold
- Programming Pearls by Jon Bentley
- Working Effectively with Legacy Code by Michael C. Feathers
- Peopleware by Demarco and Lister
- Coders at Work by Peter Seibel
- Surely You're Joking, Mr. Feynman!
- Effective Java 2nd edition
- Patterns of Enterprise Application Architecture by Martin Fowler
- The Little Schemer
- The Seasoned Schemer
- Why's (Poignant) Guide to Ruby
- The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
- Test-Driven Development: By Example by Kent Beck
- Practices of an Agile Developer
- Don't Make Me Think
- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
- Domain-Driven Design by Eric Evans
- The Design of Everyday Things by Donald Norman
- Modern C++ Design by Andrei Alexandrescu
- Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
- Software Estimation: Demystifying the Black Art by Steve McConnell
- The Passionate Programmer (My Job Went To India) by Chad Fowler
- Hackers: Heroes of the Computer Revolution
- Algorithms + Data Structures = Programs
- Writing Solid Code by Steve Maguire
- JavaScript - The Good Parts
- Getting Real by 37 Signals
- Foundations of Programming by Karl Seguin
- Computer Graphics: Principles and Practice in C (2nd Edition)
- Thinking in Java by Bruce Eckel
- The Elements of Computing Systems
- Refactoring to Patterns by Joshua Kerievsky
- Modern Operating Systems by Andrew S. Tanenbaum
- The Annotated Turing
- Things That Make Us Smart by Donald Norman
- The Timeless Way of Building by Christopher Alexander
- The Deadline: A Novel About Project Management by Tom DeMarco
- The C++ Programming Language (3rd edition) by Stroustrup
- Computer Systems - A Programmer's Perspective
- Agile Principles, Patterns, and Practices in C# by Robert C. Martin
- Growing Object-Oriented Software, Guided by Tests
- Framework Design Guidelines by Brad Abrams
- Object Thinking by Dr. David West
- Advanced Programming in the UNIX Environment by W. Richard Stevens
- Hackers and Painters: Big Ideas from the Computer Age
- The Soul of a New Machine by Tracy Kidder
- CLR via C# by Jeffrey Richter
- Design Patterns in C# by Steve Metsker
- Alice in Wonderland by Lewis Carroll
- Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
- About Face - The Essentials of Interaction Design
- Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
- The Tao of Programming
- Computational Beauty of Nature
- Philip and Alex's Guide to Web Publishing
- Object-Oriented Analysis and Design with Applications by Grady Booch
- Effective Java by Joshua Bloch
- Computability by N. J. Cutland
- Masterminds of Programming
- The Tao Te Ching
- The Productive Programmer
- The Art of Deception by Kevin Mitnick
- The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
- Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
- Masters of Doom
- Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
- How To Solve It by George Polya
- The Alchemist by Paulo Coelho
- Smalltalk-80: The Language and its Implementation
- Writing Secure Code (2nd Edition) by Michael Howard
- Introduction to Functional Programming by Philip Wadler and Richard Bird
- No Bugs! by David Thielen
- Rework by Jason Fried and David Heinemeier Hansson
- JUnit in Action
Health: a science-based community to discuss health news and the coronavirus (COVID-19) pandemic
- Chemicals in vapes could be highly toxic when heated, research finds | AI analysis of 180 vape flavours finds that products contain 127 ‘acutely toxic’ chemicals, 153 ‘health hazards’ and 225 ‘irritants’ by /u/chrisdh79 on May 8, 2024 at 1:15 pm
- Highly Pathogenic Avian Influenza in Dairy Herds by /u/Hiversitize on May 8, 2024 at 11:56 am
- The FDA misses its own deadline to propose a ban on formaldehyde from hair products by /u/Maxcactus on May 8, 2024 at 10:40 am
- ANALYSIS | Florida has a beef with artificial meat. It's not alone by /u/boppinmule on May 8, 2024 at 8:53 am
- FLiRT COVID variants symptoms explained by /u/newsweek on May 8, 2024 at 8:35 am
Today I Learned (TIL): You learn something new every day; what did you learn today? Submit interesting and specific facts about something that you just found out here.
- TIL: Drowning is by far the most common cause of death for Hawaiian tourists, with the majority of those deaths happening while snorkeling. by /u/realrealityreally on May 8, 2024 at 3:41 pm
- TIL Matt Groening named the main members of the Simpsons family (apart from Bart) after his own family (e.g. Homer Groening is Matt's father) by /u/KeeperCP1 on May 8, 2024 at 1:32 pm
- TIL Isaac Newton created the modern color wheel by /u/Capnzebra1 on May 8, 2024 at 12:49 pm
- TIL the East River (east side of Manhattan; the Brooklyn Bridge crosses it) isn't a fork of the Hudson, but a tidal strait. by /u/drangundsturm on May 8, 2024 at 12:36 pm
- TIL Abulia, a neurological condition, results in a significant lack of willpower or initiative, often leading to severe difficulties in decision-making and diminished motivation to engage in daily activities. by /u/whstlngisnvrenf on May 8, 2024 at 10:56 am
Reddit Science: This community is a place to share and discuss new scientific research. Read about the latest advances in astronomy, biology, medicine, physics, social science, and more. Find and submit new publications and popular science coverage of current research.
- Google DeepMind: AlphaFold 3 predicts the structure and interactions of all of life’s molecules by /u/SharpCartographer831 on May 8, 2024 at 3:37 pm
- Study: Children who persistently sleep fewer hours more than twice as likely to develop a psychotic disorder in early adulthood, and nearly four times as likely to have a psychotic episode later in life by /u/FunnyGamer97 on May 8, 2024 at 3:19 pm
- After analyzing the health data of 92,383 healthcare professionals, researchers found that participants who regularly consumed olive oil were, on average, 28% less likely to die from dementia than their animal-fat-frying counterparts. by /u/chrisdh79 on May 8, 2024 at 3:02 pm
- Sheriffs reduce traffic enforcement during election years – "A county's per capita traffic fines revenue is 30% lower in the election than in nonelection years. The magnitude of the political cycle intensifies when an incumbent sheriff runs for reelection or an election is competitive." by /u/smurfyjenkins on May 8, 2024 at 2:25 pm
- Deforestation can raise local temperatures by up to 4.5°C – and heat untouched areas 6km away by /u/flemtone on May 8, 2024 at 2:14 pm
Reddit Sports: Sports news and highlights from the NFL, NBA, NHL, MLB, MLS, and leagues around the world.
- Boxing champion Jermall Charlo arrested and charged with DWI following collision by /u/Oldtimer_2 on May 8, 2024 at 1:34 pm
- MLB warns teams against encouraging players to withdraw from high schools to avoid amateur draft by /u/Oldtimer_2 on May 8, 2024 at 1:01 pm
- A ship carrying the Olympic torch arrives in Marseille amid fanfare and high security by /u/Oldtimer_2 on May 8, 2024 at 12:58 pm
- Troyes suspend 4 players for throwing flares back at fans by /u/PrincessBananas85 on May 8, 2024 at 12:36 pm
- Maradona's stolen World Cup Golden Ball trophy to be auctioned by /u/Majano57 on May 8, 2024 at 12:19 pm