Elevate Your Career with AI & Machine Learning For Dummies PRO
Ready to accelerate your career in the fast-growing fields of AI and machine learning? Our app offers user-friendly tutorials and interactive exercises designed to boost your skills and make you stand out to employers. Whether you're aiming for a promotion or searching for a better job, AI & Machine Learning For Dummies PRO is your gateway to success. Start mastering the technologies shaping the future—download now and take the next step in your professional journey!
Download the AI & Machine Learning For Dummies PRO App:
iOS - Android
Our AI and Machine Learning For Dummies PRO App can help you Ace the following AI and Machine Learning certifications:
- AWS Certified AI Practitioner (AIF-C01): Conquer the AWS Certified AI Practitioner exam with our AI and Machine Learning For Dummies test prep. Master fundamental AI concepts, AWS AI services, and ethical considerations.
- Azure AI Fundamentals: Ace the Azure AI Fundamentals exam with our comprehensive test prep. Learn the basics of AI, Azure AI services, and their applications.
- Google Cloud Professional Machine Learning Engineer: Nail the Google Professional Machine Learning Engineer exam with our expert-designed test prep. Deepen your understanding of ML algorithms, models, and deployment strategies.
- AWS Certified Machine Learning Specialty: Dominate the AWS Certified Machine Learning Specialty exam with our targeted test prep. Master advanced ML techniques, AWS ML services, and practical applications.
- AWS Certified Data Engineer Associate (DEA-C01): Set yourself up for promotion, get a better job, or increase your salary by acing the AWS DEA-C01 certification.
What is Google answer to ChatGPT?
Have you ever heard of ChatGPT, the AI chatbot from OpenAI that can answer questions in natural language?
Its name comes from "Chat" plus GPT, which stands for "Generative Pre-trained Transformer," and it can answer questions with near human-level fluency. But what is Google's answer to this technology? The answer lies in OpenAI, supervised learning, and reinforcement learning. Let's take a closer look at how these technologies work.
OpenAI is an artificial intelligence research laboratory founded by some of the biggest names in tech, including Elon Musk and Sam Altman. Founded as a non-profit, the organization seeks to develop general artificial intelligence that is safe and beneficial to society. One of its key initiatives is the GPT series of large language models, including GPT-3, the natural language processing model that underpins ChatGPT.
ChatGPT: What Is It and How Does Google Answer It?
Artificial Intelligence (AI) has been around for decades. From its humble beginnings in the 1950s, AI has come a long way and is now an integral part of many aspects of our lives. One of the most important areas where AI plays a role is in natural language processing (NLP). NLP enables computers to understand and respond to human language, paving the way for more advanced conversations between humans and machines. One of the most recent developments in this field is ChatGPT, a conversational AI developed by OpenAI that utilizes supervised learning and reinforcement learning to enable computers to chat with humans. So what exactly is ChatGPT and how does it work? Let’s find out!
ChatGPT is an AI-based chatbot developed by OpenAI.
This chatbot leverages GPT-3 (Generative Pre-trained Transformer 3), one of the most powerful natural language processing models ever created. The model uses supervised learning and reinforcement learning techniques to enable computers to understand human language and respond accordingly. Using supervised learning, GPT-3 utilizes large datasets of text to learn how to recognize patterns within language that can be used to generate meaningful responses. Reinforcement learning then allows GPT-3 to use feedback from conversations with humans in order to optimize its responses over time.
ChatGPT uses supervised learning techniques to train its models.
Supervised learning involves providing a model with labeled data (i.e., data with known outcomes) so that it can learn from it. This labeled data could be anything from conversations between two people to user comments on a website or forum post. The model then learns associations between certain words or phrases and the desired outcome (or label). Once trained, this model can then be applied to new data in order to predict outcomes based on what it has learned so far.
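To make that concrete, here is a toy supervised learner in Python (the labeled examples are invented for illustration): it learns word-label associations from the training data, then applies them to score new text.

```python
from collections import Counter

# Tiny invented labeled dataset: text paired with a known outcome (label).
training_data = [
    ("great product works perfectly", "positive"),
    ("terrible broke after one day", "negative"),
    ("absolutely love it", "positive"),
    ("waste of money very disappointed", "negative"),
]

# "Training": learn associations between words and labels.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def predict(text):
    # Score each label by how often it has seen the text's words.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("love it works great"))  # → positive
```

Real systems use far richer models than word counts, but the shape is the same: labeled data in, learned associations out, predictions on new data.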
In addition to supervised learning techniques, ChatGPT also supports reinforcement learning algorithms which allow the model to learn from its experiences in an environment without explicit labels or outcomes being provided by humans. Reinforcement learning algorithms are great for tasks like natural language generation where the output needs to be generated by the model itself rather than simply predicting a fixed outcome based on existing labels.
Supervised Learning
Supervised learning involves feeding data into machine learning algorithms so they can learn from it. For example, if you want a computer program to recognize cats in pictures, you would provide the algorithm with thousands of pictures of cats so it can learn what a cat looks like. This same concept applies to natural language processing; supervised learning algorithms are fed data sets so they can learn how to generate text using contextual understanding and grammar rules.
Reinforcement Learning
Reinforcement learning uses rewards and punishments as incentives for the machine learning algorithm to explore different possibilities. In ChatGPT’s case, its algorithm is rewarded for generating more accurate responses based on previous interactions with humans. By using reinforcement learning techniques, ChatGPT’s algorithm can become smarter over time as it learns from its mistakes and adjusts accordingly as needed.
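A stripped-down illustration of that reward loop in Python (the candidate responses and the reward rule are invented): the algorithm keeps a score per response, is rewarded or punished by feedback, and drifts toward the responses that earn rewards.

```python
import random

random.seed(0)

# Invented candidate responses; the "human feedback" here is a stand-in
# rule: +1 reward for the accurate response, -1 for anything else.
candidates = ["response A", "response B", "response C"]
accurate = "response B"
scores = {c: 0.0 for c in candidates}

for _ in range(200):
    # Explore occasionally; otherwise exploit the best-scoring response.
    if random.random() < 0.2:
        choice = random.choice(candidates)
    else:
        choice = max(scores, key=scores.get)
    reward = 1.0 if choice == accurate else -1.0
    scores[choice] += 0.1 * reward  # adjust toward rewarded behavior

print(max(scores, key=scores.get))  # → response B
```

Over many iterations the rewarded response accumulates score and the punished ones lose it, which is the "learns from its mistakes" behavior in miniature.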
How is ChatGPT trained?
ChatGPT is an improved GPT-3 model, fine-tuned using reinforcement learning with humans in the loop: a team of about 40 labelers provided demonstrations of the desired model behavior. OpenAI reports that a model fine-tuned this way can match GPT-3's output quality with 100x fewer parameters (1.3B vs. GPT-3's 175B).
It is trained in 3 steps:
➡️ First, they collect a dataset of human-written demonstrations on prompts submitted to the API, and use this to train supervised learning baselines.
➡️ Next, they collect a dataset of human-labeled comparisons between two model outputs on a larger set of API prompts, and train a reward model (RM) on this dataset to predict which output the labelers would prefer.
➡️ Finally, they use this RM as a reward function and fine-tune the GPT-3 policy to maximize this reward using the Proximal Policy Optimization (PPO) algorithm.
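The three steps above can be sketched as a toy in Python. All of the data here is invented, and real RLHF uses neural reward models and PPO rather than this simple counting, but the flow is the same: fit a reward model from pairwise human preferences, then steer the policy toward the outputs that score highest.

```python
from collections import defaultdict

# Step 1 (stand-in): candidate outputs per prompt, as if written or
# sampled from the supervised baseline model.
demonstrations = {"greet the user": ["hello", "hi there", "go away"]}

# Step 2: human-labeled comparisons, each (preferred, rejected).
comparisons = [("hi there", "go away"), ("hello", "go away"), ("hi there", "hello")]

# Toy "reward model": score each output by how often labelers preferred it.
reward = defaultdict(float)
for preferred, rejected in comparisons:
    reward[preferred] += 1.0
    reward[rejected] -= 1.0

# Step 3 (stand-in for PPO fine-tuning): the policy favors the
# highest-reward output for a prompt.
def policy(prompt):
    return max(demonstrations[prompt], key=lambda out: reward[out])

print(policy("greet the user"))  # → hi there
```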
In simpler terms, ChatGPT is a variant of the GPT-3 language model that is specifically designed for chat applications. It is trained to generate human-like responses to natural language inputs in a conversational context. It is able to maintain coherence and consistency in a conversation, and can even generate responses that are appropriate for a given context. ChatGPT is a powerful tool for creating chatbots and other conversational AI applications.
How Does Google Answer ChatGPT?
Google's answer to ChatGPT comes in the form of its own conversational AI, Bard, built on the LaMDA family of language models. Bard was developed using a combination of supervised learning, unsupervised learning, and reinforcement learning techniques that help it follow human conversation. Google's earlier research chatbot, Meena, used about 2.6 billion parameters, roughly 1.7 times more than GPT-2, though far smaller than GPT-3's 175 billion.
“We’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We’re beginning with the U.S. and the U.K., and will expand to more countries and languages over time.”
Is ChatGPT the End of Google?
When individuals need information or have a problem, they turn to Google for an immediate solution. We sometimes wish Google could understand exactly what we need and provide it instantly, rather than returning hundreds of thousands of results. Why can't it work like Iron Man's Jarvis?
However, that future is not far off. Have you ever seen a chatbot that responds like a human being, suggests and helps like a friend, teaches like a mentor, and fixes your code like a senior developer? It is going to blow your mind.
Welcome to the new era of technology: ChatGPT!
ChatGPT, by OpenAI, uses artificial intelligence to speak back and forth with human users on a wide range of subjects. Deploying a machine-learning algorithm, the chatbot is trained on text from across the internet and develops a statistical model that allows it to string words together in response to a given prompt.
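That "statistical model" idea can be illustrated with a toy bigram model in Python (the corpus here is a few invented sentences): it counts which word tends to follow which, then continues a prompt with the statistically most likely next word.

```python
from collections import Counter, defaultdict

# A miniature invented "corpus" standing in for text scanned from the web.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build the statistical model: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # String words together by picking the most likely continuation.
    return follows[word].most_common(1)[0][0]

print(next_word("sat"))  # → on
```

ChatGPT works over vastly larger contexts and vocabularies, but at heart it too is predicting likely continuations rather than looking up facts.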
As per OpenAI, ChatGPT interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
What can ChatGPT do?
- It can help with general knowledge questions.
- It remembers what the user said earlier in the conversation.
- It allows users to provide follow-up corrections.
- It is trained to decline inappropriate requests.
- It can write a program in any language you prefer, in real time; for example, "write a classification code sample in the sklearn Python library."
- It can fix your piece of code, explain what went wrong, and show how to fix it.
- It can even generate song or rap lyrics.
- And much more…
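One of the bullets above uses "write a classification code sample in the sklearn Python library" as an example prompt; a minimal version of the kind of answer that prompt produces looks like this (this particular snippet is a hand-written illustration, not actual ChatGPT output):

```python
# Classification code sample with scikit-learn: train a k-nearest
# neighbors classifier on the built-in iris dataset and report accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```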
Some best usages of ChatGPT:
- Make a diet and workout plan
- Generate the next week’s meals with a grocery list
- Create a bedtime story for kids
- Prep for an interview
- Solve mathematical problems
- Fix a software program or write one from scratch
- Plan your trip and estimate the expected expenses
What are the limitations of ChatGPT?
- May occasionally generate incorrect information
- May occasionally produce harmful instructions or biased content
- Limited knowledge of the world and events after 2021
ChatGPT is in its infancy, so it may answer erroneously at times; even so, its manner of response will blow your mind. Some users have extolled the chatbot as a potential alternative search engine, since it generates detailed information instantly on a range of topics. Still, we can't directly compare Google with ChatGPT: ChatGPT can provide more in-depth and nuanced answers to complex questions, whereas a search engine like Google is designed to return a list of relevant web pages in response to a user's query.
Try ChatGPT here
Conclusion:
ChatGPT is an increasingly popular AI chatbot developed by OpenAI on top of GPT-3 natural language processing technology, which applies supervised learning for pattern recognition and reinforcement learning for optimization. Google's answer comes in the form of its own conversational AI, Bard, which likewise combines supervised, unsupervised, and reinforcement learning when responding to conversations with humans. Ultimately, these technologies help bridge the gap between machines and humans by letting us all talk like real friends!
With each passing day, ChatGPT will only get smarter, thanks to cutting-edge techniques like supervised learning and reinforcement learning!
ChatGPT Conversation Examples
Question: What is Google Answer to ChatGPT?
Answer: Google does not have an answer to ChatGPT. ChatGPT is an open-source chatbot created by OpenAI, which is an independent research organization that is not affiliated with Google. OpenAI is known for its research on artificial intelligence and machine learning, and has developed a number of innovative technologies in these areas. Google, on the other hand, is a multinational technology company that offers a wide range of products and services, including search, advertising, and cloud computing. While Google does have its own research efforts in artificial intelligence and machine learning, it does not have a chatbot that is similar to ChatGPT.
Question: Is Google working on a similar technology to ChatGPT?
Answer:
It is possible that Google is working on technology similar to ChatGPT, as the company has a significant research effort focused on artificial intelligence and machine learning. However, I do not have any specific information about what projects Google is currently working on, and it is also important to note that Google is a large and diverse company with many different research and development efforts underway. It is possible that some teams within Google may be working on chatbot technology, but I do not have any information on this.
That being said, chatbots are a common application of artificial intelligence and machine learning, and it is likely that many companies and organizations, including Google, are exploring ways to use these technologies to improve their products and services. However, without more information, it is difficult to say for sure what projects Google may be working on in this area.
Question: What about Google Meena?
Google Meena is a chatbot developed by Google that was designed to be able to carry on conversations with humans in a natural and conversational manner. Meena was trained on a dataset of more than 2.4 billion parameters, making it one of the largest language models ever created. The goal of Meena was to improve the state of the art in chatbot technology and to create a chatbot that could understand and respond to a wide range of questions and topics in a coherent and meaningful way.
Meena was announced in January 2020 and received attention for its ability to hold relatively long and coherent conversations with humans. However, it is important to note that Meena is a research project and is not a product or service offered by Google. It is not clear if or when Google plans to release a commercial product or service based on the technology developed for Meena.
It gets even better when you ask it about the future:
Suggest possible titles for Onion articles written in 4000 AD, explain, why they are funny
“Futuristic city plagued by traffic jams, despite flying cars” – This title presents the idea of flying cars as a solution to traffic congestion, but also suggests that even with this advanced technology, the city is still plagued by traffic jams, implying that the problem is not easily solved.
“Artificial intelligence achieves self-awareness, immediately regrets it” – This title presents the idea of artificial intelligence achieving self-awareness as a major milestone, but also suggests that the AI regrets its newfound consciousness, implying that it is unhappy or uncomfortable with its situation.
“Alien invasion successfully repulsed, humans left to deal with paperwork” – This title presents the successful repulsion of an alien invasion as a major accomplishment, but also suggests that the humans are now left to deal with the tedious and burdensome task of filling out paperwork, implying that the victory is not as satisfying as it initially seemed.
“Time travel becomes possible, tourists flock to witness the signing of the Constitution” – This title presents the idea of time travel as a major technological breakthrough, but also suggests that the most popular destination for time-travelers is the mundane and uneventful signing of the Constitution, implying that the future is not as exciting as people had hoped.
“Virtual reality becomes indistinguishable from reality, no one notices” – This title presents the idea of virtual reality becoming indistinguishable from real life as a major accomplishment, but also suggests that no one cares or notices the difference, implying that the advancement is not as impressive or significant as it seems.
New slogan for McDonald’s by ChatGPT
Should… should we be concerned?
I asked ChatGPT to write a poem about rust
ChatGPT chooses Democrats over Republicans
- It is not intended to do that
- If you choose to do that, you will have a lot of badly broken code mixed in with working code
- Like an annoying coworker, it delivers highly confident incorrect explanations about why its broken code is perfect. They sound very convincing. “Wanna buy a timeshare?” says GPT
- Our industry has managers who cannot tell working code from defective code. This does not bode well for a team replaced by ChatGPT in its current form.
Should it? No.
Can it? No.
Will it? Sadly, programmers will have no say in this matter, once again. It might.
Isn't Stack Overflow advertising ChatGPT when it bans it and then makes numerous posts about why it banned it? By Alan Mellor
Yes, and it is very helpful advertising as well.
This last week or so has seen starry eyed projections about what ChatGPT can do, along with hugely impressive examples of its output.
It is hugely impressive.
Thankfully, more output examples have emerged which helpfully show what it cannot do. One of those things is writing computer code, which it can do only partially successfully. Many examples now exist that are just plain wrong and contain defects. But ChatGPT – like the annoying kid at Uni – cheerfully spits out these examples, with its over-confident hubris in explaining the code.
This is a dangerous thing. The positive examples will reinforce the idea that we can leave code writing to this robot now. The people most vulnerable to this delusion are those who cannot assess for themselves whether the GPT code is right or wrong.
These are almost by definition the people hoping for answers on Stack Overflow.
As Stack Overflow aims to be a high-quality resource, it really does not want many of its top answers to be incorrect code. And since, clearly, people have been running scripts that throw a Stack Overflow question into GPT and upload its output, we can now write incorrect code at staggering speeds.
To err is human, as the old saying goes. To truly foul up requires a Python script and a web API to both GPT and Stack Overflow.
Clearly, there is value in GPT. But at least for now, it needs to be kept on a very short leash, watched over by those who know what they are doing.
It is definitely not yet ‘consumer grade replace-a-professional’ material.
Write a screenplay about the status of ChatGPT.
More about ChatGPT with its wonder, worry and weird
ChatGPT reached 1 million users in less than a week; OpenAI's latest large language model (LLM) has taken the AI industry by storm.
ChatGPT is expected to:
– replace Google search, perhaps even kill Google.
– replace customer service agents.
– replace conversation designers.
ChatGPT is a wonder because:
– It can have actual conversations: it understands pronouns, remains consistent, remembers, and manages context.
– It seems like the next generation of personal assistants, one that finds you a proper diet, creates a meal plan, and writes the subsequent shopping list.
– It can create an SEO strategy, including backlinks, target keywords, a content plan, and article titles, at the level of an SEO professional.
– It is fun, too: for example, it can write a rap in the style of Eminem.
There are some worries about ChatGPT because:
– ChatGPT can actually debug code, but it’s not quite reliable enough yet.
– It has fundamental limitations as an assistant for enterprise use cases.
– It cannot complete complex actions, such as updating multiple APIs, and it is not fully auditable.
– The general idea is that LLMs like this can produce nonsense; once you discover that, you stop believing them to be reliable.
– But what if good conversation prevents us from recognizing the nonsense, and we keep talking?
– In that case, the edges and limitations of the system would be hidden, and trust would eventually grow.
– The impact of mass adoption of such technology remains to be seen.
Moving forward with ChatGPT
– There’s no doubt that LLMs will have a big impact on our world.
– While the future looks exciting and promising, let’s not forget that it’s very early days with these things. They’re not ready yet.
– There are some fundamental societal and ethical considerations.
How powerful is OpenAI’s new GPT-3 deep learning model?
“Powerful” is a pretty subjective word, but I’m pretty sure we have a right to use it to describe GPT-3. What a sensation it caused in June 2020, that’s just unbelievable! And not for nothing.
I think we can’t judge how powerful the language model is, without talking about its use cases, so let’s see how and where GPT-3 can be applied and how you can benefit from it.
- Generating content
GPT-3 positions itself as a highly versatile and talented tool that can potentially replace writers, bloggers, philosophers, you name it! It’s also possible to use it as your personal Alexa who’ll answer any questions you have. What’s more, because GPT-3 knows how to analyze data and make predictions, it can generate horoscopes for you, or predict who’ll be the winner of a game.
You may already be surprised by all the GPT-3 capabilities, but hold on for more: it can create a unique melody or song for you, create presentations, CVs, generate jokes for your standup.
- Translation
GPT-3 can translate English into other languages. While traditional dictionaries provide a translation without taking the context into account, you can be reasonably sure that GPT-3 won’t make the silly mistakes that may result in misunderstanding.
- Designing and developing apps
Using GPT-3 you can generate prototypes and layouts – all you have to do is provide a specific description of what you need, and it’ll generate the JSX code for you.
The language model can also easily deal with coding. You can turn English to CSS, to JavaScript, to SQL, and to regex. It’s important to note, however, that GPT-3 can’t be used on its own to create the entire website or a complex app; it’s meant to assist a developer or the whole engineering team with the routine tasks, so that a dev could focus on the infrastructure setup, architecture development, etc.
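As a concrete taste of the "English to regex" case, here is the kind of pattern such a prompt might return, checked directly in Python (this particular regex is a hand-written, simplified illustration, not actual GPT-3 output):

```python
import re

# A simplified email pattern of the sort an "English to regex" prompt
# might return; real-world email validation is far messier than this.
pattern = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

print(bool(pattern.match("alice@example.com")))  # True
print(bool(pattern.match("not-an-email")))       # False
```

This is exactly the kind of routine snippet the model is good at drafting, and exactly the kind a developer should still review before shipping.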
In September 2020, Microsoft acquired an exclusive license to OpenAI’s GPT-3 technology, but that doesn’t mean you have to give up your dreams – you can join a waitlist and try GPT-3 out in beta.
All in all, I believe GPT-3 capabilities are truly amazing and limitless, and since it helps get rid of routine tasks and automate regular processes, we, humans, can focus on the most important things that make us human, and that can’t be delegated to AI. That’s the power that GPT-3 can give us.
Why does ChatGPT give incorrect and unreliable results on simple arithmetic problems (e.g., it gave me three different incorrect answers to 13345*6748)? We’ve had software that can accurately do arithmetic for decades, so why can’t an advanced AI? By Richard Morris
What is remarkable is how well ChatGPT actually does at arithmetic.
In this video at about 11 min, Rob Mills discusses the performance of various versions of the GPT system, on some simple arithmetic tasks, like adding two and three-digit numbers.
Smaller models with 6 billion parameters fail at 2-digit sums, but the best model (from two years ago) has cracked 2-digit addition and subtraction and is pretty good at 3-digit addition.
Why this is remarkable is that this is not a job it has been trained to do. Large language models are basically predictive text systems set up to give the next word in an incomplete sentence. There are a million different 3-digit addition sums, and most have not been included in the training set.
So somehow the system has figured out how to do addition, but it needs a sufficiently large model to do this.
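By contrast, conventional software computes arithmetic exactly rather than predicting it from text patterns; the multiplication from the question above is a one-liner in Python:

```python
# Conventional software computes arithmetic exactly; an LLM instead
# predicts likely text, which is why it can get sums like this wrong.
product = 13345 * 6748
print(product)  # 90052060
```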
ChatGPT to save time with insurance denials
Tech Buzzwords of 2022, By Google Search Interest
What is the future of web development after ChatGPT? Will programmers lose their jobs? By Tim Mensch
I just answered a similar question.
Short answer is, “Hahahahahahaha no.”
As I point out in the other answer, Wix has been around over a decade and a half. Squarespace has been around almost two decades. Both offer drag-and-drop web development.
Most people are awful at imagining what they want, much less describing it in English! Even if ChatGPT could produce flawless code (a question which has a similar short answer), the average person couldn’t describe the site they wanted!
The expression "a picture is worth a thousand words" has never been more relevant. Starting with pages of templates to choose from is so much better than trying to describe a site from scratch that "a thousand times better" seems like a low estimate.
And I will point out that, despite the existence of drag-and-drop tools that literally any idiot could use, tools that are a thousand times or more easier to use correctly than English, there are still thousands of employed WordPress developers who predominantly create boilerplate sites that literally would be better created in a drag and drop service.
And then there are the more complex sites that drag-and-drop couldn’t create. Guess what? ChatGPT isn’t likely to come close to being able to create the correct code for one.
In a discussion buried in the comments on Quora, I saw someone claim they’d gotten ChatGPT to load a CSV file (a simple text version of a spreadsheet) and to sort the first column. He asked for the answer in Java.
I asked ChatGPT for the same thing in TypeScript.
His response would only have worked on the most basic CSV files. My response was garbage. Garbage with clear text comments telling me what the code should have been doing, no less.
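For reference, the underlying task (load a CSV file and sort it by its first column) is only a few lines of standard-library Python; the csv module handles quoted fields, so this copes with more than the most basic files:

```python
import csv
import io

# Load CSV data and sort its rows by the first column. The csv module
# handles quoted fields, so this copes with more than basic files,
# though real data may still need type-aware sorting.
csv_text = "name,score\ncarol,3\nalice,1\nbob,2\n"

reader = csv.reader(io.StringIO(csv_text))
header = next(reader)
rows = sorted(reader, key=lambda row: row[0])

for row in [header] + rows:
    print(",".join(row))
```

That a correct version is this short is part of the point: the model's failure was not on a hard problem.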
ChatGPT is really good at what it does, don’t get me wrong. But what it does is fundamentally and profoundly the wrong strategy for software development of any type. Anyone who thinks that “with a little more work” it will be able to take over the jobs of programmers either doesn’t understand what ChatGPT is doing or doesn’t understand what programming is.
Fundamentally, ChatGPT is a magic trick. It understands nothing. At best it’s an idiot-savant that only knows how to pattern match and blend text it’s found online to make it seem like the text should go together. That’s it.
Text, I might add, that isn’t necessarily free of copyright protection. Anything non-trivial that you generate with ChatGPT is currently in a legal grey area. Lawsuits to decide that issue are currently pending, though I suspect we’ll need legislation to really clarify things.
And even then, at best, all you get from ChatGPT is some text! What average Joe will have any clue about what to do with that text?! Web developers also need to know how to set up a development environment and deploy the code to a site. And set up a domain to point to it. And so on.
And regardless, people who hire web developers want someone else to do the work of developing a web site. Even with a drag-and-drop builder, it can take hours to tweak and configure a site, and so they hire someone because they have better things to do!
People hire gardeners to maintain their garden and cut their grass, right? Is that because they don’t know how to do it? Or because they’d rather spend their time doing something else?
Every way you look at it, the best answer to this question is a long, hearty laugh. No AI will replace programmers until AI has effectively human level intelligence. And at that point they may want equal pay as well, so they might just be joining us rather than replacing anyone.
How does OpenAI approach the development of artificial intelligence?
OpenAI is a leading research institute and technology company focused on artificial intelligence development. To develop AI, the organization employs a variety of methods, including machine learning, deep learning, and reinforcement learning.
The use of large-scale, unsupervised learning is one of the key principles underlying OpenAI’s approach to AI development. This means that the company trains its AI models on massive datasets, allowing the models to learn from the data and make predictions and decisions without having to be explicitly programmed to do so. OpenAI’s goal with unsupervised learning is to create AI that can adapt and improve over time, and that can learn to solve complex problems in a more flexible and human-like manner.
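A small sketch of that unsupervised idea in Python with scikit-learn (the data points are invented for illustration): the model receives no labels at all, yet discovers the grouping on its own.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled 2-D points: two well-separated groups, but no labels given.
points = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                   [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])

# KMeans discovers the grouping from the data itself, with no labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # first three points share one cluster, last three the other
```

Language-model pre-training is unsupervised in a similar spirit: structure is learned from raw data rather than from explicitly labeled outcomes.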
Besides that, OpenAI prioritizes safety and transparency in its AI development. The organization is committed to developing AI in an ethical and responsible manner, and to ensuring that its AI systems are transparent, understandable, and verifiable by humans. This strategy is intended to alleviate concerns about the potential risks and consequences of AI.
How valid is OpenAI chief scientist’s claim that advanced artificial intelligence may already be conscious? By Steve Baker
It’s hard to tell.
The reason is that we don’t have a good definition of consciousness…nor even a particularly good test for it.
Take a look at the Wikipedia article about “Consciousness”. To quote the introduction:
Consciousness, at its simplest, is sentience or awareness of internal and external existence.
Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial, being “at once the most familiar and [also the] most mysterious aspect of our lives”.
Perhaps the only widely agreed notion about the topic is the intuition that consciousness exists.
Opinions differ about what exactly needs to be studied and explained as consciousness. Sometimes, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one’s “inner life”, the world of introspection, of private thought, imagination and volition.
Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. There might be different levels or orders of consciousness, or different kinds of consciousness, or just one kind with different features.
Other questions include whether only humans are conscious, all animals, or even the whole universe. The disparate range of research, notions and speculations raises doubts about whether the right questions are being asked.
So, given that – what are we to make of OpenAI’s claim?
Just this sentence: “Today, it often includes any kind of cognition, experience, feeling or perception.” could be taken to imply that anything that has cognition or perception is conscious…and that would certainly include a HUGE range of software.
If we can’t decide whether animals are conscious – after half a million years of interactions with them – what chance do we stand with an AI?
Wikipedia also says:
“Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition.”
Same deal – we don’t have a definition of consciousness – so how the hell can we measure it – and if we can’t do that – is it even meaningful to ASK whether an AI is conscious?
```c
if ( askedAboutConsciousness )
    printf( "Yes! I am fully conscious!\n" );
```
This is not convincing!
“In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent.”
But, again, we have “chat-bots” that exhibit “verbal behavior”, computers that exhibit arousal, neural network software that definitely shows “brain activity”, and of course things like my crappy robot vacuum cleaner that exhibit “purposeful movement”. Yet these can be fairly simple things that most of us would NOT describe as “conscious”.
CONCLUSION:
I honestly can’t come up with a proper conclusion here. We have a fuzzy definition of a word and an inadequately explained claim to have an instance of something that could be included within that word.
My suggestion – read the whole Wikipedia article – follow up (and read) some of the reference material – decide for yourself.
Should we be scared of ChatGPT? by Alexander L.
Well, I asked it directly.
Here’s what it answered:
But, seeing as how people have already found ways to “trick” ChatGPT into doing things that it claims to not be capable of, it would only be a matter of time before someone with malicious intent tricked ChatGPT into helping them with illegal activities.
What is the future of web development after ChatGPT? Will programmers lose their jobs? by Victor T. Toth
Having looked at ChatGPT and its uncanny ability to solve simple coding problems more or less correctly, and also to analyze and make sense of not-so-simple code fragments and spot bugs…
I would say that yes, at least insofar as entry-level programming is concerned, those jobs are seriously in danger of becoming at least partially automated.
What do I do as a project leader of a development project? I assign tasks. I talk to the junior developer and explain, for instance, that I’d like to see a Web page that collects some information from the user and then submits it to a server, with server-side code processing that information and dropping it in a database. Does the junior developer understand my explanation? Is he able to write functionally correct code? Will he recognize common pitfalls? Maybe, maybe not. But it takes time and effort to train him, and there’ll be a lot of uneven performance.
Today, I can ask ChatGPT to do the same and it will instantaneously respond with code that is nearly functional. The code has shortcomings (e.g., prone to SQL injection in one of the examples I tried) but to its credit, ChatGPT warns in its response that its code is not secure. I suppose it would not be terribly hard to train it some more to avoid such common mistakes. Of course the code may not be correct. ChatGPT may have misunderstood my instructions or introduced subtle errors. But how is that different from what a junior human programmer does?
At the same time, ChatGPT is much faster and costs a lot less to run (presently free, of course, but I presume a commercialized version would cost some money). Also, it never takes a break, never has a lousy day struggling with a bad hangover from too much partying the previous night, so it is available 24/7, and it will deliver code of consistent quality. Supervision will still be required, in the form of code review, robust testing and all… but that was always the case with human programmers, too.
Of course, being a stateless large language model, ChatGPT can’t do other tasks such as testing and debugging its own code. The code it produces either works or it doesn’t. In its current form, the AI does not learn from its mistakes. But who says it cannot in the future?
Here is a list of three specific examples I threw at ChatGPT that helped shape my opinion:
- I asked ChatGPT to create a PHP page that collects some information from the user and deposits the result in a MySQL table. Its implementation was textbook-example boring and quite insecure (unsanitized user input was directly inserted into SQL query strings), but it correctly understood my request, produced working code in return, and coherently explained its code, including its shortcomings;
- I asked ChatGPT to analyze a piece of code I wrote many years ago, about 30 lines, enumerating running processes on a Linux host in a nonstandard way, to help uncover nefarious processes that attempt to hide themselves from being listed by the ps utility. ChatGPT correctly described the functionality of my obscure code, and even offered the opinion (which I humbly accepted) that it was basically a homebrew project (which it is) not necessarily suitable for a production environment;
- I asked ChatGPT to analyze another piece of code that uses an obscure graphics algorithm to draw simple geometric shapes like lines and circles without using floating point math or even multiplication. (Such algorithms were essential decades ago on simple hardware, e.g., back in the world of 8-bit computers.) The example code, which I wrote, generated a circle and printed it on the console in the form of ASCII graphics, multiple lines with X-es in the right place representing the circle. ChatGPT correctly recognized the algorithm and correctly described the functionality of the program.
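The integer-only circle routine in that last example follows the classic midpoint (Bresenham-style) approach. Here is a small Python reconstruction of the general technique, not the author's actual code: the inner loop uses only addition, subtraction, and comparison, with the doublings written as repeated addition, as they would be (or as bit-shifts) on the 8-bit hardware mentioned.

```python
def ascii_circle(radius):
    """Render a circle of X-es with the integer-only midpoint algorithm."""
    size = 2 * radius + 1
    grid = [[" "] * size for _ in range(size)]
    x, y = radius, 0
    err = 1 - radius  # integer decision variable
    while x >= y:
        # plot all eight octant reflections of (x, y)
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            grid[radius + dy][radius + dx] = "X"
        y += 1
        if err < 0:
            err += y + y + 1              # err += 2*y + 1, no multiply
        else:
            x -= 1
            err += (y - x) + (y - x) + 1  # err += 2*(y - x) + 1
    return "\n".join("".join(row) for row in grid)

print(ascii_circle(3))
```

Run it with a small radius and you get the same kind of console output the example code produced: multiple lines with X-es in the right places representing the circle.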
I was especially impressed by its ability to make sense of the programmer’s intent.
Overall (to use the catchphrase ChatGPT typically uses to begin the concluding paragraph in many of its answers) I think AI like ChatGPT represents a serious challenge to entry-level programming jobs. Higher-level jobs are not yet in danger. Conceptually understanding a complex system, mapping out a solution, planning and costing out a project, managing its development, ensuring its security with a full understanding of security concerns, responsibilities, avoidance and mitigation strategies… I don’t think AI is quite there yet. But routine programming tasks, like taking a Web template and turning it into something simple and interactive with back-end code that stores and retrieves data from a database? Looks like it’s already happening.
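To illustrate the SQL-injection shortcoming mentioned above: the fix is to pass user input as bound parameters instead of pasting it into the query string. A minimal sketch using Python's built-in sqlite3 module (the original example used PHP/MySQL, so this is an illustrative stand-in, not the code ChatGPT produced):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

name, email = "O'Brien", "ob@example.com"  # note the embedded quote

# Vulnerable pattern: user input spliced directly into the SQL string.
# With name = "O'Brien" this already breaks; crafted input is exploitable.
# conn.execute(f"INSERT INTO users VALUES ('{name}', '{email}')")

# Safe pattern: placeholders let the driver handle quoting and escaping.
conn.execute("INSERT INTO users VALUES (?, ?)", (name, email))

row = conn.execute("SELECT email FROM users WHERE name = ?",
                   ("O'Brien",)).fetchone()
print(row[0])  # ob@example.com
```

The same parameterized-query discipline applies in PHP (prepared statements) or any other back-end language.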
How much was invested to create the GPT-3?
According to the estimate of Lambda Labs, training the 175-billion-parameter neural network requires 3.114E23 FLOPs (floating-point operations), which would theoretically take 355 years on a V100 GPU server with 28 TFLOPS capacity and would cost $4.6 million at $1.50 per hour.
Training the final deep learning model is just one of several steps in the development of GPT-3. Before that, the AI researchers had to gradually increase layers and parameters, and fiddle with the many hyperparameters of the language model until they reached the right configuration. That trial-and-error gets more and more expensive as the neural network grows.
We can’t know the exact cost of the research without more information from OpenAI, but one expert estimated it to be somewhere between 1.5 and five times the cost of training the final model.
This would put the cost of research and development between $11.5 million and $27.6 million, plus the overhead of parallel GPUs.
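The arithmetic behind these figures can be checked directly from the quoted numbers (3.114e23 FLOPs, 28 TFLOPS, $1.50/hour, and the 1.5x-to-5x research multiplier; the R&D totals come out right if the research cost is added on top of the final training run):

```python
total_flops = 3.114e23  # quoted training compute
gpu_flops = 28e12       # V100 server, 28 TFLOPS
price_hour = 1.5        # dollars per GPU-hour

hours = total_flops / gpu_flops / 3600
years = hours / (24 * 365)
training_cost = hours * price_hour

print(f"{years:.0f} years")          # ~353 years, close to the quoted 355
print(f"${training_cost/1e6:.1f}M")  # ~$4.6M for the final training run

# Research/trial-and-error estimated at 1.5x to 5x the final run,
# so total R&D = training + research:
low = training_cost * (1 + 1.5)
high = training_cost * (1 + 5)
print(f"${low/1e6:.1f}M to ${high/1e6:.1f}M")  # matches the $11.5M-$27.6M range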
In the GPT-3 whitepaper, OpenAI introduced eight different versions of the language model.
GPT-3 is not a general AI but a statistical language model that rapidly generates human-like written text using machine learning, with zero understanding of the context.
Here are 8 ways ChatGPT can save you thousands of hours in 2023
1- Substitute for Google search
While ChatGPT lacks information beyond 2021 and is occasionally incorrect and biased, many users leverage its ability to:
- Answer specific questions
- Simplify complicated topics
All with an added bonus: no ads
2- Study Partner
Type “learn”, then paste a link to your online textbook (or individual chapters).
Ask ChatGPT to provide questions based on your textbook.
Boom.
Now you have a virtual study buddy.
3- Train YOUR OWN Chatbot
I bet you didn’t know it is possible to:
- Integrate ChatGPT into your website
- Train it with customized information
The result:
A virtual customer service bot that can hold a conversation and answer questions (meaningfully).
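Wiring a ChatGPT-style model into a website typically means POSTing a JSON payload of role-tagged messages to a hosted API, with your customized information supplied as system context. A minimal sketch; the model id and field names here are illustrative assumptions, not a documented API:

```python
import json

def build_chat_request(user_message, system_context):
    """Build the JSON body for a hypothetical hosted-LLM chat endpoint."""
    payload = {
        "model": "example-chat-model",  # placeholder model id
        "messages": [
            # the system message carries your custom training/context
            {"role": "system", "content": system_context},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

req = build_chat_request("Where is my order?",
                         "You answer questions using ExampleShop's FAQ.")
```

Your site's back end would POST `req` to the provider's endpoint and relay the model's reply to the visitor.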
4- Counsellor
When it comes to turbulent personal questions, ChatGPT may spit out a disclaimer, but it will also give you straightforward and actionable advice.
5- Coding
ChatGPT is opening the development of:
- Apps
- Games
- Websites
to virtually everyone.
It’s a lengthy and technical process, but all you need is a killer idea and the right prompts.
Bonus: It also de-bugs your existing code for you.
6- Outline your content marketing strategy
7- Craft all your marketing materials
8- Creative Writing
A list for those who write code:
1. Explaining code: Take some code you want to understand and ask ChatGPT to explain it.
2. Improve existing code: Ask ChatGPT to improve existing code by describing what you want to accomplish. It will give you instructions about how to do it, including the modified code.
3. Rewriting code using the correct style: This is great when refactoring code written by non-native Python developers who used a different naming convention. ChatGPT not only gives you the updated code; it also explains the reason for the changes.
4. Rewriting code using idiomatic constructs: Very helpful when reviewing and refactoring code written by non-native Python developers.
5. Simplifying code: Ask ChatGPT to simplify complex code. The result will be a much more compact version of the original code.
6. Writing test cases: Ask it to help you test a function, and it will write test cases for you.
7. Exploring alternatives: ChatGPT told me its Quick Sort implementation wasn’t the most efficient, so I asked for an alternative implementation. This is great when you want to explore different ways to accomplish the same thing.
8. Writing documentation: Ask ChatGPT to write the documentation for a piece of code, and it usually does a great job. It even includes usage examples as part of the documentation!
9. Tracking down bugs: If you are having trouble finding a bug in your code, ask ChatGPT for help.
Something to keep in mind:
I have 2+ decades of programming experience. I like to think I know what I’m doing. I don’t trust people’s code (especially mine), and I surely don’t trust ChatGPT’s output.
This is not about letting ChatGPT do my work. This is about using it to 10x my output.
ChatGPT is flawed. I find it makes mistakes when dealing with code, but that’s why I’m here: to supervise it. Together we form a more perfect Union. (Sorry, couldn’t help it)
Developers who shit on this are missing the point. The story is not about ChatGPT taking programmers’ jobs. It’s not about a missing import here or a subtle mistake there.
The story is how, overnight, AI gives programmers a 100x boost.
Ignore this at your own peril.
ChatGPT is “simply” a fine-tuned GPT-3 model with a surprisingly small amount of data! Moreover, InstructGPT (ChatGPT’s sibling model) seems to use 1.3B parameters where GPT-3 uses 175B! It is first fine-tuned with supervised learning and then further fine-tuned with reinforcement learning. They hired 40 human labelers to generate the training data. Let’s dig into it!
– First, they started with a pre-trained GPT-3 model trained on a broad distribution of Internet data (https://arxiv.org/pdf/2005.14165.pdf). They then sampled typical human prompts used for GPT, collected from the OpenAI website, and asked labelers and customers to write down the correct output. They fine-tuned the model with 12,725 labeled examples.
– Then, they sampled human prompts and generated multiple outputs from the model. A labeler was then asked to rank those outputs. The resulting data was used to train a reward model (https://arxiv.org/pdf/2009.01325.pdf) with 33,207 prompts and ~10 times more training samples using different combinations of the ranked outputs.
– Finally, they sampled more human prompts and used them to fine-tune the supervised model with the Proximal Policy Optimization (PPO) algorithm (https://arxiv.org/pdf/1707.06347.pdf). The prompt is fed to the PPO model, the reward model generates a reward value, and the PPO model is iteratively fine-tuned using the rewards and the prompts, with 31,144 prompts of data.
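The three stages can be sketched structurally. Everything below is a toy stand-in (real SFT, reward modeling, and PPO perform gradient updates on model weights, not dictionary lookups); it only shows how the stages feed into one another:

```python
# Stage 1: supervised fine-tuning on human-written demonstrations.
def supervised_fine_tune(base_model, demonstrations):
    lookup = dict(demonstrations)  # {prompt: human-written answer}
    return lambda prompt: lookup.get(prompt, base_model(prompt))

# Stage 2: a reward model learned from human rankings of sampled outputs.
def train_reward_model(ranking):
    # earlier in the human ranking = better, so map to a descending score
    score = {out: len(ranking) - i for i, out in enumerate(ranking)}
    return lambda output: score.get(output, 0)

# Stage 3: toy stand-in for PPO: steer the policy toward
# whichever candidate the reward model scores highest.
def ppo_fine_tune(policy, sample_candidates, reward_model):
    return lambda prompt: max(sample_candidates(prompt), key=reward_model)

base = lambda prompt: "..."
sft = supervised_fine_tune(base, [("What is 2+2?", "4")])
rm = train_reward_model(["helpful answer", "vague answer", "rude answer"])
final = ppo_fine_tune(sft, lambda p: ["vague answer", "helpful answer"], rm)
print(final("any prompt"))  # helpful answer
```

The key structural point survives the simplification: the reward model, trained from human rankings, is what lets the final stage improve without fresh human labels on every prompt.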
This process is fully described in here: https://arxiv.org/pdf/2203.02155.pdf. The paper actually details a model called InstructGPT which is described by OpenAI as a “sibling model”, so the numbers shown above are likely to be somewhat different.
Follow me for more Machine Learning content!
#machinelearning #datascience #ChatGPT
People have already started building awesome apps on top of #ChatGPT: 10 use cases
1. Connect your ChatGPT with your Whatsapp.
Link: https://github.com/danielgross/whatsapp-gpt
2. ChatGPT Writer: It uses ChatGPT to generate emails or replies based on your prompt!
Link: https://chrome.google.com/webstore/detail/chatgpt-writer-email-writ/pdnenlnelpdomajfejgapbdpmjkfpjkp/related
3. WebChatGPT: WebChatGPT (https://chrome.google.com/webstore/detail/webchatgpt/lpfemeioodjbpieminkklglpmhlngfcn) gives you relevant results from the web!
4. YouTube Summary with ChatGPT: It generates text summaries of any YouTube video!
Link: https://chrome.google.com/webstore/detail/youtube-summary-with-chat/nmmicjeknamkfloonkhhcjmomieiodli/related
5. TweetGPT: It uses ChatGPT to write your tweets, reply, comment, etc.
Link: https://github.com/yaroslav-n/tweetGPT
6. Search GPT: It displays the ChatGPT response alongside Google Search results
Link: https://github.com/wong2/chat-gpt-google-extension
7. ChatGPT for all search engines: You can now view ChatGPT responses on Google and Bing!
Link: https://chrome.google.com/webstore/detail/chatgpt-for-search-engine/feeonheemodpkdckaljcjogdncpiiban?ref=producthunt
8. Save all your Prompts?: The `ChatGPT History` extension has you covered!
Link: https://chrome.google.com/webstore/detail/chatgpt-prompt-genius/jjdnakkfjnnbbckhifcfchagnpofjffo
9. Remake a video: Just pick a video you liked and visit https://lnkd.in/e_GD2reT to get its transcript. Once done, bring that back to ChatGPT and tell it to summarize the transcript. Read the summary and make a video on that yourself.
10. Search what people are Prompting with FlowGPT
Link: https://flowgpt.com
#searchengines #people #building #google #comment #video #writer
What solutions have been proposed to improve the accuracy of AI generated questions and answers?
There are a number of approaches that have been proposed to improve the accuracy of artificial intelligence (AI) generated questions and answers. Here are a few examples:
- Data quality: One important factor in the accuracy of AI generated questions and answers is the quality of the data used to train the AI system. Ensuring that the data is diverse, relevant, and accurately labeled can help to improve the accuracy of the AI system.
- Training methods: Different training methods can also impact the accuracy of AI generated questions and answers. For example, using more advanced techniques such as transfer learning or fine-tuning can help to improve the performance of the AI system.
- Human oversight: Another approach that has been proposed to improve the accuracy of AI generated questions and answers is to include some level of human oversight or review. For example, the AI system could be designed to flag potentially problematic or inaccurate questions and answers for further review by a human expert.
- Explainable AI: Another approach that has been proposed is to develop AI systems that are more transparent and explainable, so that it is easier to understand how the AI system arrived at a particular answer or decision. This can help to improve the trustworthiness and accountability of the AI system.
Overall, there is ongoing research and development in this area, and it is likely that a combination of these and other approaches will be needed to improve the accuracy of AI generated questions and answers.
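The human-oversight approach above is often implemented as a simple confidence gate: answers the system is unsure about get routed to a reviewer instead of being served directly. A minimal sketch (the threshold value and the shape of the model output are assumptions for illustration):

```python
REVIEW_THRESHOLD = 0.75  # assumed cut-off; tune per application

def route_answer(answer, confidence):
    """Serve confident answers; flag the rest for a human expert."""
    if confidence >= REVIEW_THRESHOLD:
        return ("serve", answer)
    return ("flag_for_review", answer)

print(route_answer("Paris is the capital of France.", 0.98)[0])  # serve
print(route_answer("The moon is made of cheese.", 0.40)[0])      # flag_for_review
```

In practice the confidence signal might come from model log-probabilities, an auxiliary verifier model, or agreement across multiple sampled answers.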
ChatGPT for CyberSecurity
The concept behind ChatGPT
ChatGPT is a chatbot designed to understand and generate human-like language through the use of natural language processing (NLP) and machine learning techniques. It is based on the GPT (Generative Pre-training Transformer) language model developed by OpenAI, which has been trained on a large dataset of human language in order to better understand how humans communicate.
One of the key concepts behind ChatGPT is the idea of language generation. This refers to the ability of the chatbot to produce coherent and well-structured responses to user input. To do this, ChatGPT uses a number of different techniques, including natural language generation algorithms, machine learning models, and artificial neural networks. These techniques allow ChatGPT to understand the context and meaning of user input, and generate appropriate responses based on that understanding.
Another important concept behind ChatGPT is the idea of natural language processing (NLP). This refers to the ability of the chatbot to understand and interpret human language, and respond to user input in a way that is natural and easy for humans to understand. NLP is a complex field that involves a number of different techniques and algorithms, including syntactic analysis, semantic analysis, and discourse analysis. By using these techniques, ChatGPT is able to understand the meaning of user input and generate appropriate responses based on that understanding.
Finally, ChatGPT is based on the concept of machine learning, which refers to the ability of computers to learn and adapt to new data and situations. Through the use of machine learning algorithms and models, ChatGPT is able to continually improve its understanding of human language and communication, and generate more human-like responses over time.
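At the core of the language-generation idea is one repeated step: turn the model's scores for each candidate next word into a probability distribution and sample from it. A minimal sketch of that step (the vocabulary and scores are made up; real models do this over tens of thousands of sub-word tokens):

```python
import math
import random

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, scores, temperature=1.0, rng=random):
    """Sample the next token; lower temperature = more deterministic."""
    probs = softmax([s / temperature for s in scores])
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["cat", "dog", "mat"]
probs = softmax([2.0, 1.0, 0.1])   # higher score -> higher probability
word = sample_next(vocab, [2.0, 1.0, 0.1])
```

Generation is just this step in a loop: append the sampled token to the context, re-score, and sample again until the response is complete.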
Google unveils its ChatGPT rival
Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.
Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday February 06th, 2023, with plans to make it available to the public “in the coming weeks.”
Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts.
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” Pichai wrote. “It draws on information from the web to provide fresh, high-quality responses.”
The announcement comes as Google’s core product – online search – is widely thought to be facing its most significant risk in years. In the two months since it launched to the public, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.
The immense attention on ChatGPT has reportedly prompted Google’s management to declare a “code red” situation for its search business. In a tweet last year, Paul Buchheit, one of the creators of Gmail, forewarned that Google “may be only a year or two away from total disruption” due to the rise of AI.
Microsoft, which has confirmed plans to invest billions in OpenAI, has already said it would incorporate the tool into some of its products – and it is rumored to be planning to integrate it into its search engine, Bing. Microsoft on Tuesday is set to hold a news event at its Washington headquarters, the topic of which has yet to be announced. Microsoft publicly announced the event shortly after Google’s AI news dropped on Monday.
The underlying technology that supports Bard has been around for some time, though not widely available to the public. Google unveiled its Language Model for Dialogue Applications (or LaMDA) some two years ago, and said Monday that this technology will power Bard. LaMDA made headlines late last year when a former Google engineer claimed the chatbot was “sentient.” His claims were widely criticized in the AI community.
In the post Monday, Google offered the example of a user asking Bard to explain new discoveries made by NASA’s James Webb Space Telescope in a way that a 9-year-old might find interesting. Bard responds with conversational bullet-points. The first one reads: “In 2023, The JWST spotted a number of galaxies nicknamed ‘green peas.’ They were given this name because they are small, round, and green, like peas.”
Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge, according to the post from Google.
Pichai also said Monday that AI-powered tools will soon begin rolling out on Google’s flagship Search tool.
“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” Pichai wrote, “whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.”
If Google does move more in the direction of incorporating an AI chatbot tool into search, it could come with some risks. Because these tools are trained on data online, experts have noted they have the potential to perpetuate biases and spread misinformation.
“It’s critical,” Pichai wrote in his post, “that we bring experiences rooted in these models to the world in a bold and responsible way.”
Read more at https://www.cnn.com/2023/02/06/tech/google-bard-chatgpt-rival
ChatGPT-4
- OpenAI down Gemini 2.0 is upby /u/curlyssa (Artificial Intelligence Gateway) on December 13, 2024 at 1:26 pm
Open Ai was down n Gemini 2.0 came out the same day. This released agentic AI! It can thing on steps and operate on your behalf. Thoughts? submitted by /u/curlyssa [link] [comments]
- Need Help Optimizing Stable Diffusion Workflow for Faster Frame Generationby /u/Otherwise_Builder235 (Artificial Intelligence Gateway) on December 13, 2024 at 12:14 pm
Hi everyone! I’m working on a project that involves generating a series of AI-generated frames using Stable Diffusion to create smooth and consistent animations. My workflow requires: Consistent art style across frames (using LoRA fine-tuning). Consistent key elements like characters or objects (using DreamBooth). Smooth transitions between frames (using techniques like Flux). Currently, I’m experiencing a major bottleneck—each frame takes ~3 minutes to render on my setup, and creating enough frames for even a short animation is incredibly time-consuming. At this rate, generating a one-minute video could take over 24 hours! I’m already exploring AWS g4 instances (Tesla T4 GPUs) to speed up rendering, but I’d like to know if anyone has tips or experience with: Optimized Stable Diffusion models or alternative lightweight architectures. Model optimization techniques like quantization or pruning. Pipeline optimizations or hardware setups that balance cost and performance. Efficient techniques for temporal consistency or frame interpolation. I’m open to any advice, whether it’s about specific tools, model configurations, or infrastructure setups. Thanks in advance for any help you can offer! submitted by /u/Otherwise_Builder235 [link] [comments]
- Outputs of the generative AI are already starting to infect various publishing channelsby /u/True-Telephone-5070 (Artificial Intelligence Gateway) on December 13, 2024 at 11:59 am
https://preview.redd.it/swnyxsw6ul6e1.png?width=800&format=png&auto=webp&s=dc1c4589761aed7f35ebbec550552fcc8c024302 The ease, speed and affordability of using generative AI means that large masses are able to quickly produce a large amount of low-quality AI material, which pollutes various publishing channels. A skilled, thoughtful and responsible user can produce good material with AI, but the low-quality mass will overshadow it and everything else. submitted by /u/True-Telephone-5070 [link] [comments]
- fun AI mobile gamesby /u/Live-Arrival5610 (Artificial Intelligence Gateway) on December 13, 2024 at 11:30 am
anyone know any cool new mobile games with AI integrated? i know they’ve made those odd chat bots but i’m talking more about proper games that you play. Can they even do that yet or are chat bots like Chat GPT the furthest we can go right now? submitted by /u/Live-Arrival5610 [link] [comments]
- Going from separate AI-assisted tasks to AI solution?by /u/Otterly_wonderful_ (Artificial Intelligence Gateway) on December 13, 2024 at 11:02 am
How can I, a generally tech literate but non-AI specialist, make custom AI solutions that “glue together” pieces of capability I have access to separately? I feel frustrated because I’m certain what I want to do is possible but I don’t know how it’s possible to me. Example: I want to capture details of a conversation between experts into a specification document in a standard layout Was: Manually taking notes into a word template Now: Record the Teams meeting, take the AI transcript, feed it into CoPilot with a prompt on the headings I want it placed into Next: I’d love to invite an AI meeting attendee to the Teams meeting which will create the doc in the correct Teams folder afterwards. Surely this is a thing someone has already done? submitted by /u/Otterly_wonderful_ [link] [comments]
- AI for designing housesby /u/Burntout_designer (Artificial Intelligence Gateway) on December 13, 2024 at 10:58 am
Recently, I got the opportunity to try out an underrated AI tool, which you might not even find in the first few pages of google, myself from a background of design, I'm always interested in trying out new AI tools for design in fields like Graphic, web, interior, architectural, industrial design. This tool allows me to upload a sketch or an Unrendered model into a neat, realistic and pretty renders, in just few seconds of generating. I think about how this tool or AI can be more normalized in the architectural design field, don't get me wrong it can't replace anyone at this moment, but surely it has place in a workflow, can't remember how many times clients want many variations of styles, that would take more than a day to make all of those variations, just to trash most of them later after picking one or two. So I can see how it belongs. The developers of the tool are very friendly people and I'm very glad to be acquainted with them. Here is the no-nonesense direct link to the tool per the rule: https://neolocus.ai submitted by /u/Burntout_designer [link] [comments]
- Top 9 AI Music Generators in 2024by /u/djquimoso (Artificial Intelligence Gateway) on December 13, 2024 at 10:38 am
Top 9 AI Music Generators in 2024 (creator of the podcast) https://www.patreon.com/posts/top-9-ai-music-117881912?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link submitted by /u/djquimoso [link] [comments]
- [D] Training with synthetic data and model collapse. Is there progress?by /u/BubblyOption7980 (Machine Learning) on December 13, 2024 at 10:03 am
About a year ago, research papers talked about model collapse when dealing with synthetic data. Recently I’ve been hearing about some progress in this regard. I am not expert and would welcome your views on what’s going on. Thank you and have a fantastic day. submitted by /u/BubblyOption7980 [link] [comments]
- [D] Agentic AI Design Patternsby /u/Mindless_Copy_7487 (Machine Learning) on December 13, 2024 at 9:49 am
I was looking into design patterns for Agentic AI and I could need some help to grasp the concepts. I read about ReAct and ReWOO. From ReWOO, I really liked the idea of having a planner that creates a blueprint of the work that needs to be done. I can imagine that this works well for a lot of tasks, and it optimizes token usage compared to ReAct. From ReAct, I like that it has a reflection/observation LLM, to decide whether the output is good enough or needs another pass through the agents. What I don't understand: Why does ReWOO not have a reflection component?? Wouldn't it be the best of both worlds to have the planner and the reflection? This was the first draft for my agentic AI prototype, and I think it has pretty obvious advantages. I think I am missing something here. submitted by /u/Mindless_Copy_7487 [link] [comments]
- Claude and Perplexity - going to lift your game?by /u/AppropriateRespect91 (Artificial Intelligence Gateway) on December 13, 2024 at 9:33 am
We’ve seen updates from Open AI, Gemini, Grok over past few days. Are the other two players going to do anything? submitted by /u/AppropriateRespect91 [link] [comments]
- Have you found any GPTs that can analyze a whole site and output information based upon that text?by /u/PuttPutt7 (Artificial Intelligence Gateway) on December 13, 2024 at 8:27 am
For instance, I'm trying to figure out how to setup my own locally run AI featuring ollama using localai.io. However, I'm not a developer and am struggling through EVERY STEP because i've never used 80% of the backend programs and such they require. There's effectively no other documentation but I've been using Gemini and chatgpt to help answer quetsions, the only problem is they really only analyze the individual page you give them. Are there any wrappers that can look at a whole section (i.e. documentation section on this site which is like 20 pages) to then be an expert in assisting me setup everything? submitted by /u/PuttPutt7 [link] [comments]
- [D] Importance of HPO per field / model type / applicationsby /u/Maleficent_Ad5541 (Machine Learning) on December 13, 2024 at 6:58 am
I’ve noticed that the time spent on hyperparameter optimization vary significantly, not just between industry and academia but also across different fields like NLP, computer vision, or reinforcement learning. I’m curious—what’s your experience? Is tuning something you prioritize heavily, or do you often settle for “good enough” configurations to move faster? What field / model type / applications do you think experience most(or least) bottleneck in workflow due to HPO? Are there any industry dependency around choosing HPO tools? For example, everyone in xx industry would pick Optuna as a go-to or everyone running xx experiments would use Sigopt. Would love to hear your experiences! Thanks submitted by /u/Maleficent_Ad5541 [link] [comments]
- One-Minute Daily AI News 12/12/2024by /u/Excellent-Target-847 (Artificial Intelligence Gateway) on December 13, 2024 at 5:43 am
Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits.[1] Meta releases AI model to enhance metaverse experience.[2] Microsoft debuts Phi-4, a new generative AI model, in research preview.[3] Google built an AI tool that can do research for you.[4] Sources included at: https://bushaicave.com/2024/12/12/12-12-2024/ submitted by /u/Excellent-Target-847 [link] [comments]
- [D] help with evaluating modelby /u/Affectionate_Pen6368 (Machine Learning) on December 13, 2024 at 5:40 am
i am having an issue with evaluating my model because model.evaluate() returns an okay overall score in accuracy but the confusion matrix and classification report return 100% for one class and 0% for another, i am using cifar10 but only 2 classes from it. anyone know why this happens? is this overfitting i am not sure because i am getting a similar score as model.evaluate(0 in my training accuracy and same for loss (which is almost as high as the accuracy) submitted by /u/Affectionate_Pen6368 [link] [comments]
- Can HIX AI improve technical writing?by /u/glutenbag (Artificial Intelligence Gateway) on December 13, 2024 at 5:31 am
Hey, I’ve been using AI to write user guides, but the output often ends up too vague or hard to follow. I saw HIX AI, which claims to help make AI writing clearer as well as make it detection proof. Has anyone tried making instructional content with it? Did it really improve readability and clarity? What were the detection rates on certain services? Thanks in advance! submitted by /u/glutenbag [link] [comments]
- Energy Efficiency Gains Through AIby /u/Diligent_Relative_36 (Artificial Intelligence Gateway) on December 13, 2024 at 5:22 am
AI-powered energy management systems offer substantial improvements in energy efficiency across various sectors. Smart grids, leveraging AI algorithms to analyze real-time data from numerous sources (weather patterns, energy consumption levels, grid stability indicators), dynamically adjust energy distribution to minimize waste and optimize resource allocation. This granular control leads to reduced operational costs for utilities, decreased reliance on peak power generation (often from less efficient sources), and ultimately a smaller carbon footprint. https://sustainability-directory.com/question/what-are-the-long-term-implications-of-ai-powered-energy-management-for-the-energy-sector-and-society/ Source: https://remoteupskill.com submitted by /u/Diligent_Relative_36 [link] [comments]
- Recommendations for Faster Models or Techniques for Video Generation with Stable Diffusionby /u/Otherwise_Builder235 (Artificial Intelligence Gateway) on December 13, 2024 at 4:45 am
Hi everyone! I’m building a video generation pipeline that creates videos from AI-generated images and keyframes. My workflow relies on Stable Diffusion fine-tuning techniques to ensure consistent style, elements, and smooth transitions between frames: LoRA for art-style consistency, DreamBooth for element consistency, and Flux for keyframe-transition consistency. I was thinking of fine-tuning these three models and merging them. Currently, each image/keyframe takes about 3 minutes to render on my setup, which causes significant bottlenecks, since the pipeline requires many frames for a single video. Even rendering a one-minute video is estimated to take at least 24 hours. I can't afford a faster GPU at the moment and am currently considering AWS G4 instances. Are there faster or more efficient models/techniques I could use? For example: optimized versions of LoRA/DreamBooth/Flux or alternative approaches? Lightweight architectures or model pruning techniques? Hardware configurations that balance cost and performance? If anyone has experience with generating videos from AI-generated frames or similar pipelines, I’d love to hear your tips and recommendations. Thanks in advance for your help! submitted by /u/Otherwise_Builder235 [link] [comments]
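For the rendering-time estimate above, a quick back-of-the-envelope helper shows how much keyframe spacing matters. The 24 fps figure and the assumption that interpolated in-between frames are effectively free are assumptions for illustration, not details from the post:

```python
def render_hours(duration_s, fps, minutes_per_frame, keyframe_interval=1):
    """Total GPU hours if only every `keyframe_interval`-th frame is
    diffusion-rendered and the rest are interpolated (assumed ~free)."""
    frames = duration_s * fps
    rendered = frames / keyframe_interval
    return rendered * minutes_per_frame / 60

# 60 s at 24 fps, 3 min per frame, every frame diffusion-rendered:
print(render_hours(60, 24, 3))                       # 72.0 hours
# Rendering only every 8th frame as a keyframe:
print(render_hours(60, 24, 3, keyframe_interval=8))  # 9.0 hours
```

Under these assumptions, interpolating between sparse keyframes cuts the budget far more than a modestly faster GPU would.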
- “The Madness of the Race to Build Artificial General Intelligence” Thoughts on this article? I’ll drop some snippets belowby /u/jvstnmh (Artificial Intelligence Gateway) on December 13, 2024 at 4:16 am
https://www.truthdig.com/articles/the-madness-of-the-race-to-build-artificial-general-intelligence/ What exactly are AI companies saying about the potential dangers of AGI? During a 2023 talk, OpenAI CEO Sam Altman was asked about whether AGI could destroy humanity, and he responded, “the bad case — and I think this is important to say — is, like, lights out for all of us.” In some earlier interviews, he declared that “I think AI will…most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning,” and “probably AI will kill us all, but until then we’re going to turn out a lot of great students.” The audience laughed at this. But was he joking? If he was, he was also serious: the OpenAI website itself states in a 2023 article that the risks of AGI may be “existential,” meaning — roughly — that they could wipe out the entire human species. Another article on their website affirms that “a misaligned superintelligent AGI could cause grievous harm to the world.” In a 2015 post on his personal blog, Altman wrote that “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” Whereas “AGI” refers to any artificial system that is at least as competent as humans in every cognitive domain of importance, such as science, mathematics, social manipulation and creativity, a “SMI” is a type of AGI that is superhuman in its capabilities. Many researchers in the field of “AI safety” believe that once we have AGI, we will have superintelligent machines very shortly after. The reason is that designing increasingly capable machines is an intellectual task, so the “smarter” these systems become, the better able they’ll become at designing even “smarter” systems. Hence, the first AGIs will design the next generation of even “smarter” AGIs, until those systems reach “superhuman” levels. 
Again, one doesn’t need to accept this line of reasoning to be alarmed when the CEO of the most powerful AI company that’s trying to build AGI says that superintelligent machines might kill us. Just the other day, an employee at OpenAI who goes by “roon” on Twitter/X, tweeted that “things are accelerating. Pretty much nothing needs to change course to achieve AGI … Worrying about timelines” — that is, worrying about whether AGI will be built later this year or 10 years from now — “is idle anxiety, outside your control. You should be anxious about stupid mortal things instead. Do your parents hate you? Does your wife love you?” In other words, AGI is right around the corner and its development cannot be stopped. Once created, it will bring about the end of the world as we know it, perhaps by killing everyone on the planet. Hence, you should be thinking not so much about when exactly this might happen, but on more mundane things that are meaningful to us humans: Do we have our lives in order? Are we on good terms with our friends, family and partners? When you’re flying on a plane and it begins to nosedive toward the ground, most people turn to their partner and say “I love you” or try to send a few last text messages to loved ones to say goodbye. That is, according to someone at OpenAI, what we should be doing right now. submitted by /u/jvstnmh [link] [comments]
- Why is A.I mostly evil in movies?by /u/Burlingtonfilms (Artificial Intelligence Gateway) on December 13, 2024 at 1:11 am
And if it's a warning, why are we not listening? Is this the "fun" period before a company eventually goes too far? submitted by /u/Burlingtonfilms [link] [comments]
- Sketch to render APIby /u/Responsible_Routine6 (Artificial Intelligence Gateway) on December 12, 2024 at 10:48 pm
Hello I was searching for sketch to render api like the one offered by stabilityai, but cheaper. Do someone know some other alternatives? Thanks submitted by /u/Responsible_Routine6 [link] [comments]
- How can we ensure AI is used for good, not evil?by /u/Imagine-your-success (Artificial Intelligence Gateway) on December 12, 2024 at 9:54 pm
What safeguards and regulations are necessary to prevent AI from being used for malicious purposes, such as cyberattacks or autonomous weapons? submitted by /u/Imagine-your-success [link] [comments]
- [D] LSTM model implementation and approximation questionsby /u/Sea_Onion41 (Machine Learning) on December 12, 2024 at 9:09 pm
For a project, I am trying to integrate an autoencoder (AE) for feature extraction with an LSTM for classification of the reduced feature space. The problem I am encountering is how to train the LSTM network. The AE produces 5 data points per timestep, which are fed into the LSTM. The question comes down to how the LSTM is trained and how it works: I want the LSTM to take into account the 5 parameters from the AE at time t, as well as the parameters at t-1 and t-2. As far as I understand, the LSTM does this automatically, or should it instead take in a total of 15 parameters, with each group of 5 corresponding to one timestep of the AE? Any advice on LSTMs, or on how such training can be done efficiently, would be great. The AE is processing a time-series signal. submitted by /u/Sea_Onion41 [link] [comments]
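On the shape question: the usual convention is that an LSTM consumes a sequence of shape (timesteps, features), here (3, 5), rather than a flat vector of 15. A plain-Python sketch of building those sliding windows from per-timestep AE outputs (values are hypothetical):

```python
def sliding_windows(features, window=3):
    """Group per-timestep feature vectors into overlapping sequences.

    features: list of per-timestep vectors, e.g. the 5 AE outputs at each t.
    Returns sequences of shape (window, n_features): [t-2, t-1, t] per sample.
    """
    return [features[i - window + 1 : i + 1]
            for i in range(window - 1, len(features))]

# Hypothetical AE outputs: 6 timesteps x 5 features each.
ae_out = [[t * 10 + k for k in range(5)] for t in range(6)]
seqs = sliding_windows(ae_out, window=3)
print(len(seqs), len(seqs[0]), len(seqs[0][0]))  # 4 3 5
```

Feeding a flattened 15-vector instead would present the three timesteps to the LSTM as a single step, losing the temporal structure the recurrence is meant to exploit.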
- Why is my experience with Artificial Intelligence so different than "the hype"?by /u/Implement1982 (Artificial Intelligence Gateway) on December 12, 2024 at 8:43 pm
I have spent a fair amount of time using all of the available AI models. If you go on YouTube and listen to the hype, you would think they are on the brink of AGI. However, my experience with AI has been completely different. The AI I use can't pass high school math courses, never mind college-level tests. If I prompt any of the available AI models with questions, they are wrong over 75% of the time. I am sourcing questions from textbooks and only changing the names and variables so that they actually have to solve them rather than rely on memorization. I challenge anyone here to show me ChatGPT, for example, solving complicated problems in math, chemistry, or programming. My experience with AI programming is even worse than my chemistry and math quizzes. They seemingly understand the question, but their implementation does not follow the details and is a "hallucination" more often than not. I'm just going to leave a very, very simple math problem. John deposited $10,500 into a 3-year certificate of deposit that earned 4.75% annual interest, compounded annually. John made no additional deposits to or withdrawals from the certificate of deposit. What was the value of the certificate of deposit at the end of the 3-year period? Can you show me your work? CHATGPT REPLY: To find the value of Jonathan's certificate of deposit at the end of 3 years, we use the formula for compound interest: A = P(1 + r)^n, where A is the future value of the investment, P is the initial deposit (principal), r is the annual interest rate (in decimal form), and n is the number of years. Step 1: Identify the given values: P = $10,500, r = 0.0475, n = 3 years. Step 2: Substitute the values into the formula and simplify. Step 3: Calculate (1 + r)^n. Step 4: Multiply by the principal. Final Answer: The value of the certificate of deposit at the end of 3 years is $12,063.27. No, ChatGPT, that is the wrong answer. Even if you account for rounding errors, it is close but wrong. Close does not equal right.
When you tell ChatGPT to redo the math and give it the correct answer, only then will it give the correct answer of $12,068.45 (give or take about 0.03 in rounding differences). I can repeat this over and over with math and with chemistry. ChatGPT does not even have high school accuracy, never mind college level. It can provide a correct formula but cannot actually solve the formula. Time and time again. What gives? I have not seen anyone actually challenge any of the AI claims. Every post reads like a testimonial without any of the technical details backing up the claims. submitted by /u/Implement1982 [link] [comments]
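For reference, the arithmetic in the post checks out; annual compounding is a one-liner:

```python
def compound(principal, rate, years):
    """Future value with annual compounding: P * (1 + r)^n."""
    return round(principal * (1 + rate) ** years, 2)

print(compound(10500, 0.0475, 3))  # 12068.45
```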
- [D] "Proper" way to upload accepted conference paper to the ArXiv?by /u/baghalipolo (Machine Learning) on December 12, 2024 at 8:38 pm
We recently had a paper accepted to a conference (AAAI). We found out that the conference does not publish appendices so they recommend we upload the full paper (with appendix) to arXiv. This is something we were considering doing anyway since the paper would be available before the conference proceedings come out. My concern is that if someone decides to cite our work, they may either become confused or cite the arXiv rather than AAAI "version". Is there a "correct" or common way to handle this? Do arXiv uploads with the same title get indexed to "one manuscript" on google scholar? Also, are we allowed to use the conference template to upload? (This part might be conference dependent I suppose). I know it is common these days to upload to arXiv before hearing back from a conference (usually with a different title) but I think this is a slightly different situation as the paper is accepted and the uploaded version will be identical to the conference paper (though with an Appendix). Thanks in advance! submitted by /u/baghalipolo [link] [comments]
- Is Gemini Advanced actually "Advanced" thanks to yesterday's updates?by /u/TheLawIsSacred (Artificial Intelligence Gateway) on December 12, 2024 at 8:34 pm
I have not yet toyed around with it significantly, but I was not initially impressed. For instance, the Deep Research feature seems to mimic the user interface of Perplexity, but the ultimate work product is nowhere near the same quality as Perplexity's... though that was just with one or two tests. submitted by /u/TheLawIsSacred [link] [comments]
- [P] Scaling data from aggregated calculationsby /u/Wikar (Machine Learning) on December 12, 2024 at 8:32 pm
Hello, I have a project in which I detect anomalies in transaction data from the Ethereum blockchain. I have performed aggregated calculations for each wallet address (e.g. minimum, maximum, median, sum, and mode of transaction values) and created a separate data file with them, which I then joined onto all the transactions. Now I have to standardize the data (I have chosen robust scaling) before machine learning, but I have the following questions: Should I standardize each feature based on its own median and IQR? Or perform scaling on the column the calculations come from (the value column) and then use its median and IQR to scale the calculated columns? If each feature is scaled based on its own median and IQR, should I do it before joining the calculated data or after? submitted by /u/Wikar [link] [comments]
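Robust scaling is defined per feature as (x - median) / IQR, which is why each calculated column is typically scaled with its own statistics. A minimal plain-Python sketch (hypothetical values, with a linear-interpolation quantile):

```python
def quantile(sorted_vals, q):
    """Linear-interpolation quantile of pre-sorted values."""
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (pos - lo) * (sorted_vals[hi] - sorted_vals[lo])

def robust_scale(column):
    """Scale one feature: (x - median) / IQR, robust to outliers."""
    s = sorted(column)
    med = quantile(s, 0.5)
    iqr = quantile(s, 0.75) - quantile(s, 0.25)
    return [(x - med) / iqr for x in column]

# Hypothetical transaction values with one extreme outlier:
values = [1, 2, 3, 4, 100]
print(robust_scale(values))  # [-1.0, -0.5, 0.0, 0.5, 48.5]
```

Outliers like the 100 here barely move the median and IQR, which is the point of choosing robust scaling for heavy-tailed transaction values.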
- From Viruses and Materials to Galaxies and Beyond: The Role Machine Learning Plays in Scientific Discoveryby /u/SlothSpeedRunning (Machine Learning) on December 12, 2024 at 8:21 pm
submitted by /u/SlothSpeedRunning [link] [comments]
- AI Anxietyby /u/OldManSysAdmin (Artificial Intelligence Gateway) on December 12, 2024 at 8:05 pm
There's an undercurrent of emotion around the world right now about AI. Every day young people post things like, "Should I even bother finishing my data science degree?", because they feel like AI will take care of that before they graduate. I call this AInxiety. What do you call it? It's a true problem. People of all ages are anxious about how they'll earn a living as more things become automated via AI. submitted by /u/OldManSysAdmin [link] [comments]
- [D] The winner of the NeurIPS 2024 Best Paper Award sabotaged the other teamsby /u/LelouchZer12 (Machine Learning) on December 12, 2024 at 7:41 pm
Presumably, the winner of the NeurIPS 2024 Best Paper Award (a guy from ByteDance, the creators of Tiktok) sabotaged the other teams to derail their research and redirect their resources to his own. Plus he was at meetings debugging his colleagues' code, so he was always one step ahead. There's a call to withdraw his paper. https://var-integrity-report.github.io/ I have not checked the facts themselves, so if you can verify what is asserted and if this is true this would be nice to confirm. submitted by /u/LelouchZer12 [link] [comments]
- [D] Do Intel GPUs support ROCm, or do AMD cards support Intel oneAPI?by /u/mrnothing- (Machine Learning) on December 12, 2024 at 7:39 pm
I can't find this information, and since both are open source, a compatibility layer would make sense. Has either already been ported to the other platform? If you can share info about NVIDIA too, that would be cool. submitted by /u/mrnothing- [link] [comments]
In contrast, real experts know when to sound confident and when to let others know they are at the boundaries of their knowledge; they can describe those boundaries.
Building large language models that can accurately decide when to be confident and when not to will reduce their risk of misinformation and build trust.
Go deeper in The Batch: https://www.deeplearning.ai/the-batch/issue-174/