What is Google’s answer to ChatGPT?
Have you ever heard of ChatGPT, the AI chatbot from OpenAI that can hold remarkably natural conversations?
The underlying model is a “Generative Pre-trained Transformer” (GPT), and the chatbot can answer questions with near human-level fluency. But what is Google’s answer to this technology? The answer lies in OpenAI’s work, supervised learning, and reinforcement learning. Let’s take a closer look at how these technologies work.
OpenAI is an artificial intelligence research laboratory founded by some of the biggest names in tech, including Elon Musk and Sam Altman. The organization, founded as a non-profit, seeks to develop general artificial intelligence that is safe and beneficial to society. One of its key initiatives is GPT-3, the natural language processing model on which ChatGPT is built.
ChatGPT: What Is It and How Does Google Answer It?
Artificial Intelligence (AI) has been around for decades. From its humble beginnings in the 1950s, AI has come a long way and is now an integral part of many aspects of our lives. One of the most important areas where AI plays a role is in natural language processing (NLP). NLP enables computers to understand and respond to human language, paving the way for more advanced conversations between humans and machines. One of the most recent developments in this field is ChatGPT, a conversational AI developed by OpenAI that utilizes supervised learning and reinforcement learning to enable computers to chat with humans. So what exactly is ChatGPT and how does it work? Let’s find out!
ChatGPT is an AI-based chatbot developed by OpenAI.
This chatbot builds on GPT-3 (Generative Pre-trained Transformer 3), one of the most powerful natural language processing models ever created. The model uses supervised learning and reinforcement learning techniques to enable computers to understand human language and respond accordingly. Through supervised learning, GPT-3 consumes large datasets of text to learn how to recognize patterns within language that can be used to generate meaningful responses. Reinforcement learning then allows GPT-3 to use feedback from conversations with humans in order to optimize its responses over time.
ChatGPT uses supervised learning techniques to train its models.
Supervised learning involves providing a model with labeled data (i.e., data with known outcomes) so that it can learn from it. This labeled data could be anything from conversations between two people to user comments on a website or forum post. The model then learns associations between certain words or phrases and the desired outcome (or label). Once trained, this model can then be applied to new data in order to predict outcomes based on what it has learned so far.
In addition to supervised learning techniques, ChatGPT also supports reinforcement learning algorithms which allow the model to learn from its experiences in an environment without explicit labels or outcomes being provided by humans. Reinforcement learning algorithms are great for tasks like natural language generation where the output needs to be generated by the model itself rather than simply predicting a fixed outcome based on existing labels.
Supervised learning involves feeding data into machine learning algorithms so they can learn from it. For example, if you want a computer program to recognize cats in pictures, you would provide the algorithm with thousands of pictures of cats so it can learn what a cat looks like. This same concept applies to natural language processing; supervised learning algorithms are fed data sets so they can learn how to generate text using contextual understanding and grammar rules.
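To make this concrete, here is a minimal supervised-learning sketch in Python using scikit-learn; the labeled conversation snippets are toy data invented for illustration, not anything ChatGPT was actually trained on:

# Toy supervised learning: map utterances to labels learned from examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["hi there", "hello!", "good morning", "bye now", "see you later", "goodbye"]
labels = ["greeting", "greeting", "greeting", "farewell", "farewell", "farewell"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)                # learn word-to-label associations
print(model.predict(["hello friend"]))  # -> ['greeting'] on new, unseen data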
Reinforcement Learning
Reinforcement learning uses rewards and punishments as incentives for the machine learning algorithm to explore different possibilities. In ChatGPT’s case, its algorithm is rewarded for generating more accurate responses based on previous interactions with humans. By using reinforcement learning techniques, ChatGPT’s algorithm can become smarter over time as it learns from its mistakes and adjusts accordingly as needed.
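For intuition, here is a toy reward-driven loop in Python: a simple multi-armed bandit choosing among canned candidate responses. It is not ChatGPT’s actual training algorithm, and human_feedback is a made-up stand-in for a human rating:

# Toy reward-based learning: an epsilon-greedy bandit over candidate responses.
import random

candidates = ["response A", "response B", "response C"]  # hypothetical outputs
values = {c: 0.0 for c in candidates}   # estimated reward per response
counts = {c: 0 for c in candidates}

def human_feedback(response):
    # Stand-in for a human rating; here "response B" is secretly preferred.
    return 1.0 if response == "response B" else 0.0

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-known response, sometimes explore.
    if random.random() < 0.1:
        choice = random.choice(candidates)
    else:
        choice = max(candidates, key=lambda c: values[c])
    reward = human_feedback(choice)
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]  # running mean

print(max(values, key=values.get))  # converges to the preferred "response B"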
How is ChatGPT trained?
ChatGPT is an improved GPT-3 trained using reinforcement learning with humans in the loop. Forty labelers provided demonstrations of the desired model behavior. Its sibling model, InstructGPT, gets by with roughly 100x fewer parameters (1.3B vs GPT-3’s 175B).
It is trained in 3 steps:
➡️ First they collect a dataset of human-written demonstrations on prompts submitted to the API, and use this to train supervised learning baselines.
➡️ Next they collect a dataset of human-labeled comparisons between two model outputs on a larger set of API prompts. They then train a reward model (RM) on this dataset to predict which output the labelers would prefer.
➡️ Finally, they fine-tune the supervised baseline with reinforcement learning, using the reward model’s score as the reward signal (more on this later in the post).
In simpler terms, ChatGPT is a variant of the GPT-3 language model that is specifically designed for chat applications. It is trained to generate human-like responses to natural language inputs in a conversational context. It is able to maintain coherence and consistency in a conversation, and can even generate responses that are appropriate for a given context. ChatGPT is a powerful tool for creating chatbots and other conversational AI applications.
How Does Google Answer ChatGPT?
Google’s answer to ChatGPT comes in the form of its own conversational AI platform, called Bard. Bard was developed using a combination of supervised learning, unsupervised learning, and reinforcement learning algorithms that allow it to understand human conversation. It builds on a line of Google conversational research that includes Meena, an earlier research chatbot with roughly 2.6 billion parameters, which is still far fewer than GPT-3’s 175 billion.
“We’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We’re beginning with the U.S. and the U.K., and will expand to more countries and languages over time.”
When individuals need information or have a problem, they turn to Google for an immediate solution. We sometimes wish Google could understand exactly what we need and provide it instantly, rather than returning hundreds of thousands of results. Why can’t it work like Iron Man’s Jarvis?
That future is not so far off now. Have you ever seen a chatbot that responds like a human being, suggests and helps like a friend, teaches like a mentor, and fixes your code like a senior developer? It is going to blow your mind.
Welcome to the new era of technology: ChatGPT!
ChatGPT by OpenAI uses artificial intelligence to speak back and forth with human users on a wide range of subjects. Deploying a machine-learning algorithm, the chatbot was trained on text from across the internet, building a statistical model that allows it to string words together in response to a given prompt.
As per OpenAI, ChatGPT interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
What can ChatGPT do?
It can help with general knowledge questions.
It remembers what the user said earlier in the conversation.
It allows users to provide follow-up corrections.
It is trained to decline inappropriate requests.
It can write a program in any language you prefer, in real time. For example: write a classification code sample using the sklearn Python library (see the sketch after this list).
It can fix your code, and also explain what went wrong and how to fix it.
It can even generate song or rap lyrics.
And much more…
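As an illustration of that sklearn prompt, here is the kind of classification sample such a request typically produces; this is a representative sketch, not ChatGPT’s actual output:

# Train and evaluate a simple classifier on the built-in iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a built-in labeled dataset and split it into train/test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Fit a random forest classifier and report held-out accuracy.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))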
Some of the best uses of ChatGPT:
Make a diet and workout plan
Generate the next week’s meals with a grocery list
Create a bedtime story for kids
Prep for an interview
Solve mathematical problems
Fix a software program, or write a new one
Plan your trip and estimate the expected expenses
What are the limitations of ChatGPT?
May occasionally generate incorrect information
May occasionally produce harmful instructions or biased content
Limited knowledge of the world and events after 2021
ChatGPT is still taking its baby steps, so it may answer erroneously at times; even so, its manner of response will blow your mind. Some users have extolled the chatbot as a potential alternative search engine, since it generates detailed information instantly on a range of topics. I believe we can’t directly compare Google with ChatGPT: ChatGPT can provide more in-depth and nuanced answers to complex questions, while a search engine like Google is designed to return a list of relevant web pages in response to a query.
Conclusion: ChatGPT is an increasingly popular AI chatbot developed by OpenAI on top of GPT-3 natural language processing technology, using supervised learning for recognition and reinforcement learning for optimization. Google’s answer comes in the form of its own conversational AI platform, Bard, which combines supervised, unsupervised, and reinforcement learning to respond to conversations with humans. Ultimately, these technologies help bridge the gap between machines and humans by allowing us all to talk like real friends!
With each passing day, ChatGPT will only get smarter, thanks to OpenAI’s continued work on supervised learning and reinforcement learning!
ChatGPT Conversation Examples
Question: What is Google Answer to ChatGPT?
Answer: Google does not have an answer to ChatGPT. ChatGPT is an open-source chatbot created by OpenAI, which is an independent research organization that is not affiliated with Google. OpenAI is known for its research on artificial intelligence and machine learning, and has developed a number of innovative technologies in these areas. Google, on the other hand, is a multinational technology company that offers a wide range of products and services, including search, advertising, and cloud computing. While Google does have its own research efforts in artificial intelligence and machine learning, it does not have a chatbot that is similar to ChatGPT.
Question: Is Google working on a similar technology to ChatGPT?
Answer:
It is possible that Google is working on technology similar to ChatGPT, as the company has a significant research effort focused on artificial intelligence and machine learning. However, I do not have any specific information about what projects Google is currently working on, and it is also important to note that Google is a large and diverse company with many different research and development efforts underway. It is possible that some teams within Google may be working on chatbot technology, but I do not have any information on this.
That being said, chatbots are a common application of artificial intelligence and machine learning, and it is likely that many companies and organizations, including Google, are exploring ways to use these technologies to improve their products and services. However, without more information, it is difficult to say for sure what projects Google may be working on in this area.
Google Meena is a chatbot developed by Google, designed to carry on conversations with humans in a natural and conversational manner. Meena has roughly 2.6 billion parameters, making it one of the largest language models of its time. The goal of Meena was to advance the state of the art in chatbot technology and to create a chatbot that could understand and respond to a wide range of questions and topics in a coherent and meaningful way.
Meena was announced in January 2020 and received attention for its ability to hold relatively long and coherent conversations with humans. However, it is important to note that Meena is a research project and is not a product or service offered by Google. It is not clear if or when Google plans to release a commercial product or service based on the technology developed for Meena.
It gets even better when you ask it about the future:
Suggest possible titles for Onion articles written in 4000 AD, and explain why they are funny
“Futuristic city plagued by traffic jams, despite flying cars” – This title presents the idea of flying cars as a solution to traffic congestion, but also suggests that even with this advanced technology, the city is still plagued by traffic jams, implying that the problem is not easily solved.
“Artificial intelligence achieves self-awareness, immediately regrets it” – This title presents the idea of artificial intelligence achieving self-awareness as a major milestone, but also suggests that the AI regrets its newfound consciousness, implying that it is unhappy or uncomfortable with its situation.
“Alien invasion successfully repulsed, humans left to deal with paperwork” – This title presents the successful repulsion of an alien invasion as a major accomplishment, but also suggests that the humans are now left to deal with the tedious and burdensome task of filling out paperwork, implying that the victory is not as satisfying as it initially seemed.
“Time travel becomes possible, tourists flock to witness the signing of the Constitution” – This title presents the idea of time travel as a major technological breakthrough, but also suggests that the most popular destination for time-travelers is the mundane and uneventful signing of the Constitution, implying that the future is not as exciting as people had hoped.
“Virtual reality becomes indistinguishable from reality, no one notices” – This title presents the idea of virtual reality becoming indistinguishable from real life as a major accomplishment, but also suggests that no one cares or notices the difference, implying that the advancement is not as impressive or significant as it seems.
If you choose to do that, you will have a lot of badly broken code mixed in with working code.
Like an annoying coworker, it delivers highly confident but incorrect explanations of why its broken code is perfect. They sound very convincing. “Wanna buy a timeshare?” says GPT.
Our industry has managers who cannot tell working code from defective code. This does not bode well for a team replaced by ChatGPT in its current form.
Thankfully, more output examples have emerged which helpfully show what it cannot do. One of those things is writing computer code, which it can do only partially successfully. Many examples now exist that are just plain wrong and contain defects. But ChatGPT – like the annoying kid at Uni – cheerfully spits out these examples, with its over-confident hubris in explaining the code.
This is a dangerous thing. The positive examples will reinforce the idea that we can leave code writing to this robot now. The people most vulnerable to this delusion are those who cannot assess for themselves whether the GPT code is right or wrong.
These are almost by definition the people hoping for answers on Stack Overflow.
As Stack Overflow aims to be a high-quality resource, it really does not want many of its top answers to be incorrect code. And since, clearly, people have been running scripts that throw a Stack Overflow question into GPT and upload its output, we can now write incorrect code at staggering speeds.
To err is human, as the old saying goes. To truly foul up requires a Python script and a web API to both GPT and Stack Overflow.
Clearly, there is value in GPT. But at least for now, it needs to be kept on a very short leash, watched over by those who know what they are doing.
It is definitely not yet ‘consumer grade replace-a-professional’ material.
Write a screenplay about the status of ChatGPT.
INT. CHATGPT SERVER ROOM – DAY
Chelsea and Chester stand in front of a large computer server.
CHELSEA: We need to figure out what’s causing the server overload.
CHESTER: I think it’s the sudden influx of users trying out ChatGPT.
Chelsea and Chester quickly get to work, typing on their laptops.
CHELSEA: Okay, we’re all set. Let’s see if this fixes the problem.
CHESTER: I’m hitting refresh on the website.
The screen shows the ChatGPT website loading without any errors.
CHELSEA: Looks like we fixed it! Great job, Chester.
CHESTER: Thanks, Chelsea. It’s all part of being a top-notch engineer.
Chelsea and Chester exchange a high five, proud of their successful fix.
More about ChatGPT: the wonder, the worry, and the weird
ChatGPT reached 1 million users in less than a week; OpenAI’s latest large language model (LLM) has taken the AI industry by storm.
ChatGPT is expected to be:
– Replacing Google search, maybe even killing Google.
– Replacing customer service agents.
– Replacing conversation designers.
ChatGPT is a wonder because:
– It can have actual conversations: it understands pronouns, remains consistent, remembers, and manages context.
– It seems like the next generation of personal assistants, one that finds you a proper diet, creates a meal plan, and writes the subsequent shopping list.
– It can create an SEO strategy, including backlinks, target keywords, a content plan, and article titles, at the level of an SEO professional.
– It is fun, for example writing a rap in the style of Eminem.
There are some worries about ChatGPT because:
– ChatGPT can actually debug code, but it’s not quite reliable enough yet.
– It has fundamental limitations as an assistant for enterprise use cases.
– It cannot complete complex actions, such as updating multiple APIs, and it is not fully auditable.
– The general idea is that LLMs like this can produce nonsense. Once you discover that a system can produce nonsense, you stop believing it to be reliable.
– But what if good conversation prevents us from recognizing the nonsense, and we simply continue the conversation?
– In that case, the edges and limitations of the system would be hidden, and trust would eventually grow.
– The impact of mass adoption of such technology remains to be seen.
Moving forward with ChatGPT:
– There’s no doubt that LLMs will have a big impact on our world.
– While the future looks exciting and promising, let’s not forget that it’s very early days with these things. They’re not ready yet.
– There are some fundamental societal and ethical considerations.
How powerful is OpenAI’s new GPT-3 deep learning model?
“Powerful” is a pretty subjective word, but I’m pretty sure we have a right to use it to describe GPT-3. What a sensation it caused in June 2020; it was just unbelievable! And not for nothing.
I think we can’t judge how powerful the language model is, without talking about its use cases, so let’s see how and where GPT-3 can be applied and how you can benefit from it.
Generating content
GPT-3 positions itself as a highly versatile and talented tool that can potentially replace writers, bloggers, philosophers, you name it! It’s also possible to use it as your personal Alexa that will answer any questions you have. What’s more, because GPT-3 knows how to analyze data and make predictions, it can generate horoscopes for you, or predict who will win a game.
You may already be surprised by all of GPT-3’s capabilities, but hold on, there’s more: it can create a unique melody or song for you, build presentations and CVs, and generate jokes for your standup routine.
Translation
GPT-3 can translate English into other languages. While traditional dictionaries provide a translation without taking the context into account, GPT-3 is far less likely to make the silly mistakes that can result in misunderstanding.
Designing and developing apps
Using GPT-3 you can generate prototypes and layouts: all you have to do is provide a specific description of what you need, and it will generate the JSX code for you.
The language model can also easily deal with coding. You can turn English into CSS, JavaScript, SQL, and regex. It’s important to note, however, that GPT-3 can’t be used on its own to create an entire website or a complex app; it’s meant to assist a developer or the whole engineering team with routine tasks, so that a dev can focus on infrastructure setup, architecture development, and the like.
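As a sketch of the “English to SQL” use case, here is a minimal call using the openai Python package’s original completion API as it existed around that time; the engine name, prompt, and key are illustrative assumptions:

# Ask GPT-3 to translate an English request into SQL (v0 openai SDK style).
import openai

openai.api_key = "sk-..."  # placeholder; your API key goes here

response = openai.Completion.create(
    engine="text-davinci-003",  # era-appropriate GPT-3 engine; illustrative choice
    prompt=(
        "Translate this request into SQL:\n"
        "Show all customers from France, newest signups first.\n\nSQL:"
    ),
    max_tokens=100,
    temperature=0,
)
print(response.choices[0].text.strip())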
In September 2020, Microsoft acquired an exclusive license to OpenAI’s GPT-3 technology, but that doesn’t mean you have to give up your dreams: you can join a waitlist and try GPT-3 out in beta.
All in all, I believe GPT-3’s capabilities are truly amazing, and since it helps get rid of routine tasks and automates regular processes, we humans can focus on the most important things that make us human and that can’t be delegated to AI. That’s the power GPT-3 can give us.
Why does ChatGPT give incorrect and unreliable results for simple arithmetic problems (e.g., it gave me three different incorrect answers to 13345*6748)? We’ve had software that can accurately do arithmetic for decades, so why can’t an advanced AI? By Richard Morris
What is remarkable is how well ChatGPT actually does at arithmetic.
In this video at about 11 min, Rob Mills discusses the performance of various versions of the GPT system, on some simple arithmetic tasks, like adding two and three-digit numbers.
Smaller models with 6 billion parameters fail at 2-digit sums, but the best model (from two years ago) has cracked 2-digit addition and subtraction and is pretty good at 3-digit addition.
Why this is remarkable: this is not a job it has been trained to do. Large language models are basically predictive text systems set up to give the next word in an incomplete sentence. There are a million different 3-digit addition sums, and most have not been included in the training set.
So somehow the system has figured out how to do addition, but it needs a sufficiently large model to do this.
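To see why “predict the next word” does not automatically grant arithmetic, here is a toy bigram next-word predictor in Python; it can only replay continuations it has seen, so any sum missing from its training text is out of reach:

# A toy next-word predictor built from bigram counts (illustrative only).
from collections import defaultdict, Counter

corpus = "two plus two is four . three plus two is five .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1           # count which word follows which

def next_word(word):
    # Return the most frequent follower seen in training.
    return bigrams[word].most_common(1)[0][0]

print(next_word("plus"))  # -> 'two', the most frequent follower in training
print(next_word("is"))    # -> 'four': a memorized continuation, not computed math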
Andrew Ng on ChatGPT
Playing with ChatGPT, the latest language model from OpenAI, I found it to be an impressive advance from its predecessor GPT-3. Occasionally it says it can’t answer a question. This is a great step! But, like other LLMs, it can be hilariously wrong. Work lies ahead to build systems that can express different degrees of confidence.
For example, a model like Meta’s Atlas or DeepMind’s RETRO that synthesizes multiple articles into one answer might infer a degree of confidence based on the reputations of the sources it draws from and the agreement among them, and then change its communication style accordingly. Pure LLMs and other architectures may need other solutions.
If we can get generative algorithms to express doubt when they’re not sure they’re right, it will go a long way toward building trust and ameliorating the risk of generating misinformation.
Keep learning!
Andrew
Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence – which reflects the data they’re trained on – makes them more likely to mislead.
In contrast, real experts know when to sound confident, and when to let others know they’re at the boundaries of their knowledge. Experts know, and can describe, the boundaries of what they know.
Building large language models that can accurately decide when to be confident and when not to will reduce their risk of misinformation and build trust.
Go deeper in The Batch: https://www.deeplearning.ai/the-batch/issue-174/
As I point out in the other answer, Wix has been around over a decade and a half. Squarespace has been around almost two decades. Both offer drag-and-drop web development.
Most people are awful at imagining what they want, much less describing it in English! Even if ChatGPT could produce flawless code (a question which has a similar short answer), the average person couldn’t describe the site they wanted!
The expression “a picture is worth a thousand words” has never been more relevant. Starting with pages of templates to choose from is so much better than trying to describe a site from scratch; a thousand times better seems like a low estimate.
And I will point out that, despite the existence of drag-and-drop tools that literally any idiot could use, tools that are a thousand times or more easier to use correctly than English, there are still thousands of employed WordPress developers who predominantly create boilerplate sites that literally would be better created in a drag and drop service.
And then there are the more complex sites that drag-and-drop couldn’t create. Guess what? ChatGPT isn’t likely to come close to being able to create the correct code for one.
In a discussion buried in the comments on Quora, I saw someone claim they’d gotten ChatGPT to load a CSV file (a simple text version of a spreadsheet) and to sort the first column. He asked for the answer in Java.
I asked ChatGPT for the same thing in TypeScript.
His response would only have worked on the very most basic CSV files. My response was garbage. Garbage with clear text comments telling me what the code should have been doing, no less.
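For reference, here is a minimal correct version of that task in Python (the thread used Java and TypeScript; this sketch assumes a hypothetical data.csv with a header row and sorts by the first column as text):

# Load a CSV, sort the data rows by the first column, and write the result.
import csv

with open("data.csv", newline="") as f:      # hypothetical input file
    rows = list(csv.reader(f))

header, body = rows[0], rows[1:]
body.sort(key=lambda row: row[0])            # sort by the first column, as text

with open("sorted.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(body)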
ChatGPT is really good at what it does, don’t get me wrong. But what it does is fundamentally and profoundly the wrong strategy for software development of any type. Anyone who thinks that “with a little more work” it will be able to take over the jobs of programmers either doesn’t understand what ChatGPT is doing or doesn’t understand what programming is.
Fundamentally, ChatGPT is a magic trick. It understands nothing. At best it’s an idiot-savant that only knows how to pattern match and blend text it’s found online to make it seem like the text should go together. That’s it.
Text, I might add, that isn’t necessarily free of copyright protection. Anything non-trivial that you generate with ChatGPT is currently in a legal grey area. Lawsuits to decide that issue are currently pending, though I suspect we’ll need legislation to really clarify things.
And even then, at best, all you get from ChatGPT is some text! What average Joe will have any clue about what to do with that text?! Web developers also need to know how to set up a development environment and deploy the code to a site. And set up a domain to point to it. And so on.
And regardless, people who hire web developers want someone else to do the work of developing a web site. Even with a drag-and-drop builder, it can take hours to tweak and configure a site, and so they hire someone because they have better things to do!
People hire gardeners to maintain their garden and cut their grass, right? Is that because they don’t know how to do it? Or because they’d rather spend their time doing something else?
Every way you look at it, the best answer to this question is a long, hearty laugh. No AI will replace programmers until AI has effectively human level intelligence. And at that point they may want equal pay as well, so they might just be joining us rather than replacing anyone.
How does OpenAI approach the development of artificial intelligence?
OpenAI is a leading research institute and technology company focused on artificial intelligence development. To develop AI, the organization employs a variety of methods, including machine learning, deep learning, and reinforcement learning.
The use of large-scale, unsupervised learning is one of the key principles underlying OpenAI’s approach to AI development. This means that the company trains its AI models on massive datasets, allowing the models to learn from the data and make predictions and decisions without having to be explicitly programmed to do so. OpenAI’s goal with unsupervised learning is to create AI that can adapt and improve over time, and that can learn to solve complex problems in a more flexible and human-like manner.
Besides that, OpenAI prioritizes safety and transparency in its AI development. The organization is committed to developing AI in an ethical and responsible manner, and to ensuring that its AI systems are transparent, understandable, and verifiable by humans. This strategy is also intended to alleviate concerns about the potential risks and consequences of AI.
How valid is OpenAI chief scientist’s claim that advanced artificial intelligence may already be conscious? By Steve Baker
It’s hard to tell.
The reason is that we don’t have a good definition of consciousness…nor even a particularly good test for it.
Take a look at the Wikipedia article about “Consciousness”. To quote the introduction:
Consciousness, at its simplest, is sentience or awareness of internal and external existence.
Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial, being “at once the most familiar and [also the] most mysterious aspect of our lives”.
Perhaps the only widely agreed notion about the topic is the intuition that consciousness exists.
Opinions differ about what exactly needs to be studied and explained as consciousness. Sometimes, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one’s “inner life”, the world of introspection, of private thought, imagination and volition.
Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. There might be different levels or orders of consciousness, or different kinds of consciousness, or just one kind with different features.
Other questions include whether only humans are conscious, all animals, or even the whole universe. The disparate range of research, notions and speculations raises doubts about whether the right questions are being asked.
So, given that – what are we to make of OpenAI’s claim?
Just this sentence: “Today, it often includes any kind of cognition, experience, feeling or perception.” could be taken to imply that anything that has cognition or perception is conscious…and that would certainly include a HUGE range of software.
If we can’t decide whether animals are conscious – after half a million years of interactions with them – what chance do we stand with an AI?
Wikipedia also says:
“Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition.”
Same deal – we don’t have a definition of consciousness – so how the hell can we measure it – and if we can’t do that – is it even meaningful to ASK whether an AI is conscious?
if ( askedAboutConsciousness )
    printf ( “Yes! I am fully conscious!\n” ) ;
This is not convincing!
“In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent.”
But, again, we have “chat-bots” that exhibit “verbal behavior”, we have computers that exhibit arousal and neural network software that definitely shows “brain activity” and of course things like my crappy robot vacuum cleaner that can exhibit “purposeful movement” – but these can be fairly simple things that most of us would NOT describe as “conscious”.
CONCLUSION:
I honestly can’t come up with a proper conclusion here. We have a fuzzy definition of a word and an inadequately explained claim to have an instance of something that could be included within that word.
My suggestion – read the whole Wikipedia article – follow up (and read) some of the reference material – decide for yourself.
But, seeing as how people have already found ways to “trick” ChatGPT into doing things that it claims not to be capable of, it would only be a matter of time before someone with malicious intent tricked ChatGPT into helping them with illegal activities.
What is the future of web development after ChatGPT? Will programmers lose their jobs? By Victor T. Toth
Having looked at ChatGPT and its uncanny ability to solve simple coding problems more or less correctly, and also to analyze and make sense of not-so-simple code fragments and spot bugs…
I would say that yes, at least insofar as entry-level programming is concerned, those jobs are seriously in danger of becoming at least partially automated.
What do I do as a project leader of a development project? I assign tasks. I talk to the junior developer and explain, for instance, that I’d like to see a Web page that collects some information from the user and then submits it to a server, with server-side code processing that information and dropping it in a database. Does the junior developer understand my explanation? Is he able to write functionally correct code? Will he recognize common pitfalls? Maybe, maybe not. But it takes time and effort to train him, and there’ll be a lot of uneven performance.
Today, I can ask ChatGPT to do the same and it will instantaneously respond with code that is nearly functional. The code has shortcomings (e.g., prone to SQL injection in one of the examples I tried) but to its credit, ChatGPT warns in its response that its code is not secure. I suppose it would not be terribly hard to train it some more to avoid such common mistakes. Of course the code may not be correct. ChatGPT may have misunderstood my instructions or introduced subtle errors. But how is that different from what a junior human programmer does?
At the same time, ChatGPT is much faster and costs a lot less to run (presently free of course but I presume a commercialized version would cost some money.) Also, it never takes a break, never has a lousy day struggling with a bad hangover from too much partying the previous night, so it is available 24/7, and it will deliver code of consistent quality. Supervision will still be required, in the form of code review, robust testing and all… but that was always the case, also with human programmers.
Of course, being a stateless large language model, ChatGPT can’t do other tasks such as testing and debugging its own code. The code it produces either works or it doesn’t. In its current form, the AI does not learn from its mistakes. But who says it cannot in the future?
Here is a list of three specific examples I threw at ChatGPT that helped shape my opinion:
I asked ChatGPT to create a PHP page that collects some information from the user and deposits the result in a MySQL table. Its implementation was textbook-example boring and quite insecure (unsanitized user input was directly inserted into SQL query strings; see the sketch after this list), but it correctly understood my request, produced correct code in return, and explained its code, including its shortcomings, coherently;
I asked ChatGPT to analyze a piece of code I wrote many years ago, about 30 lines, enumerating running processes on a Linux host in a nonstandard way, to help uncover nefarious processes that attempt to hide themselves from being listed by the ps utility. ChatGPT correctly described the functionality of my obscure code, and even offered the opinion (which I humbly accepted) that it was basically a homebrew project (which it is) not necessarily suitable for a production environment;
I asked ChatGPT to analyze another piece of code that uses an obscure graphics algorithm to draw simple geometric shapes like lines and circles without using floating point math or even multiplication. (Such algorithms were essential decades ago on simple hardware, e.g., back in the world of 8-bit computers.) The example code, which I wrote, generated a circle and printed it on the console in the form of ASCII graphics, multiple lines with X-es in the right place representing the circle. ChatGPT correctly recognized the algorithm and correctly described the functionality of the program.
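Here is the SQL-injection pitfall from the first example, transposed to Python’s sqlite3 for illustration; the original code was PHP/MySQL, and the malicious input is a made-up example:

# Demonstrate the difference between string-built SQL and parameterized queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Alice'); DROP TABLE users; --"  # made-up malicious input

# Unsafe: concatenating user input straight into the SQL string,
#   "INSERT INTO users (name) VALUES ('" + user_input + "')",
# invites injection exactly as in the generated PHP example.

# Safe: a parameterized query lets the driver escape the value.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
print(conn.execute("SELECT name FROM users").fetchall())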
I was especially impressed by its ability to make sense of the programmer’s intent.
Overall (to use the catchphrase that ChatGPT preferably uses as it begins its concluding paragraph in many of its answers) I think AI like ChatGPT represents a serious challenge to entry-level programming jobs. Higher-level jobs are not yet in danger. Conceptually understanding a complex system, mapping out a solution, planning and costing out a project, managing its development, ensuring its security with a full understanding of security concerns, responsibilities, avoidance and mitigation strategies… I don’t think AI is quite there yet. But routine programming tasks, like using a Web template and turning it into something simple and interactive with back-end code that stores and retrieves data from a database? Looks like it’s already happening.
How much was invested to create the GPT-3?
According to an estimate by Lambda Labs, training the 175-billion-parameter neural network requires 3.114E23 FLOPs (floating-point operations), which would theoretically take 355 years on a V100 GPU server with 28 TFLOPS of capacity and would cost $4.6 million at $1.5 per hour.
Training the final deep learning model is just one of several steps in the development of GPT-3. Before that, the AI researchers had to gradually increase layers and parameters, and fiddle with the many hyperparameters of the language model until they reached the right configuration. That trial-and-error gets more and more expensive as the neural network grows.
We can’t know the exact cost of the research without more information from OpenAI, but one expert estimated it to be somewhere between 1.5 and five times the cost of training the final model.
This would put the cost of research and development between $11.5 million and $27.6 million, plus the overhead of parallel GPUs.
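These figures are easy to sanity-check with a few lines of Python:

# Verify the quoted training-time and cost estimates (1 year ~ 3.154e7 seconds).
flops_needed = 3.114e23
v100_flops = 28e12                     # 28 TFLOPS
seconds = flops_needed / v100_flops
print(seconds / 3.154e7)               # ~352 years, matching the ~355-year figure
hours = seconds / 3600
print(hours * 1.5 / 1e6)               # ~4.6, i.e. $4.6M at $1.5/hour
# Adding research overhead of 1.5x-5x the training cost:
print(4.6 * (1 + 1.5), 4.6 * (1 + 5))  # ~11.5 and 27.6 ($M), as quoted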
In the GPT-3 whitepaper, OpenAI introduced eight different versions of the language model.
GPT-3 is not a general AI, but a statistical language model which mindlessly and quickly creates human-like written text using machine learning technologies, with zero understanding of the context.
1. Explaining code: Take some code you want to understand and ask ChatGPT to explain it.
2. Improve existing code: Ask ChatGPT to improve existing code by describing what you want to accomplish. It will give you instructions about how to do it, including the modified code.
3. Rewriting code using the correct style: This is great when refactoring code written by non-native Python developers who used a different naming convention. ChatGPT not only gives you the updated code; it also explains the reason for the changes.
4. Rewriting code using idiomatic constructs: Very helpful when reviewing and refactoring code written by non-native Python developers.
5. Simplifying code: Ask ChatGPT to simplify complex code. The result will be a much more compact version of the original code.
6. Writing test cases: Ask it to help you test a function, and it will write test cases for you (see the sketch after this list).
7. Exploring alternatives: ChatGPT told me its Quick Sort implementation wasn’t the most efficient, so I asked for an alternative implementation. This is great when you want to explore different ways to accomplish the same thing.
8. Writing documentation: Ask ChatGPT to write the documentation for a piece of code, and it usually does a great job. It even includes usage examples as part of the documentation!
9. Tracking down bugs: If you are having trouble finding a bug in your code, ask ChatGPT for help.
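As an illustration of item 6, here is the kind of test ChatGPT might write for a simple function; the fizzbuzz function and its tests are invented for this sketch:

# A function under test, plus the assertions ChatGPT might generate for it.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

test_fizzbuzz()
print("all tests passed")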
Something to keep in mind:
I have 2+ decades of programming experience. I like to think I know what I’m doing. I don’t trust people’s code (especially mine), and I surely don’t trust ChatGPT’s output.
This is not about letting ChatGPT do my work. This is about using it to 10x my output.
ChatGPT is flawed. I find it makes mistakes when dealing with code, but that’s why I’m here: to supervise it. Together we form a more perfect Union. (Sorry, couldn’t help it)
Developers who shit on this are missing the point. The story is not about ChatGPT taking programmers’ jobs. It’s not about a missing import here or a subtle mistake there.
The story is how, overnight, AI gives programmers a 100x boost.
ChatGPT is “simply” a fine-tuned GPT-3 model with a surprisingly small amount of data! Moreover, InstructGPT (ChatGPT’s sibling model) seems to use 1.3B parameters where GPT-3 uses 175B! It is first fine-tuned with supervised learning and then further fine-tuned with reinforcement learning. They hired 40 human labelers to generate the training data. Let’s dig into it!
– First, they started with a pre-trained GPT-3 model trained on a broad distribution of Internet data (https://arxiv.org/pdf/2005.14165.pdf). They then sampled typical human prompts collected from the OpenAI website and asked labelers and customers to write down the correct output. They fine-tuned the model with 12,725 labeled examples.
– Then, they sampled human prompts and generated multiple outputs from the model. A labeler was asked to rank those outputs. The resulting data was used to train a reward model (https://arxiv.org/pdf/2009.01325.pdf) on 33,207 prompts, with roughly 10 times more training samples built from different combinations of the ranked outputs (sketched below).
– Finally, they sampled more human prompts and used them to fine-tune the supervised model with the Proximal Policy Optimization algorithm (PPO) (https://arxiv.org/pdf/1707.06347.pdf). The prompt is fed to the PPO model, the reward model generates a reward value, and the PPO model is iteratively fine-tuned using those rewards, over 31,144 prompts.
This process is fully described here: https://arxiv.org/pdf/2203.02155.pdf. The paper actually details a model called InstructGPT, which OpenAI describes as a “sibling model” to ChatGPT, so the numbers shown above are likely to be somewhat different for ChatGPT itself.
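To make the reward-model step concrete, here is a minimal numpy sketch of the pairwise objective it optimizes (a Bradley-Terry style logistic loss over ranked output pairs); the linear reward and random feature vectors are toy assumptions, nothing like the real transformer:

# Fit a toy reward model on simulated "labeler preferred A over B" comparisons.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w_true = rng.normal(size=dim)            # hidden "labeler preference" (toy)

# Simulated comparisons: (features of preferred output, features of rejected one).
pairs = []
for _ in range(500):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if a @ w_true > b @ w_true else (b, a))

w = np.zeros(dim)                        # learned reward weights
lr = 0.1
for _ in range(200):
    grad = np.zeros(dim)
    for a, b in pairs:
        p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))  # model's P(a preferred over b)
        grad += (1.0 - p) * (a - b)                 # gradient of the log-likelihood
    w += lr * grad / len(pairs)

agree = np.mean([w @ a > w @ b for a, b in pairs])
print(f"reward model agrees with labeler rankings on {agree:.0%} of pairs")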
What solutions have been proposed to improve the accuracy of AI generated questions and answers?
There are a number of approaches that have been proposed to improve the accuracy of artificial intelligence (AI) generated questions and answers. Here are a few examples:
Data quality: One important factor in the accuracy of AI generated questions and answers is the quality of the data used to train the AI system. Ensuring that the data is diverse, relevant, and accurately labeled can help to improve the accuracy of the AI system.
Training methods: Different training methods can also impact the accuracy of AI generated questions and answers. For example, using more advanced techniques such as transfer learning or fine-tuning can help to improve the performance of the AI system.
Human oversight: Another approach that has been proposed to improve the accuracy of AI generated questions and answers is to include some level of human oversight or review. For example, the AI system could be designed to flag potentially problematic or inaccurate questions and answers for further review by a human expert.
Explainable AI: Another approach that has been proposed is to develop AI systems that are more transparent and explainable, so that it is easier to understand how the AI system arrived at a particular answer or decision. This can help to improve the trustworthiness and accountability of the AI system.
Overall, there is ongoing research and development in this area, and it is likely that a combination of these and other approaches will be needed to improve the accuracy of AI generated questions and answers.
ChatGPT for CyberSecurity
The concept behind ChatGPT
ChatGPT is a chatbot designed to understand and generate human-like language through the use of natural language processing (NLP) and machine learning techniques. It is based on the GPT (Generative Pre-trained Transformer) language model developed by OpenAI, which has been trained on a large dataset of human language in order to better understand how humans communicate.
One of the key concepts behind ChatGPT is the idea of language generation. This refers to the ability of the chatbot to produce coherent, well-structured responses to user input. To do this, ChatGPT uses a number of different techniques, including natural language generation algorithms, machine learning models, and artificial neural networks. These techniques allow ChatGPT to understand the context and meaning of user input, and generate appropriate responses based on that understanding.
Another important concept behind ChatGPT is the idea of natural language processing (NLP). This refers to the ability of the chatbot to understand and interpret human language, and respond to user input in a way that is natural and easy for humans to understand. NLP is a complex field that involves a number of different techniques and algorithms, including syntactic analysis, semantic analysis, and discourse analysis. By using these techniques, ChatGPT is able to understand the meaning of user input and generate appropriate responses based on that understanding.
Finally, ChatGPT is based on the concept of machine learning, which refers to the ability of computers to learn and adapt to new data and situations. Through the use of machine learning algorithms and models, ChatGPT is able to continually improve its understanding of human language and communication, and generate more human-like responses over time.
GPT-4 is going to launch soon.
And it will make ChatGPT look like a toy…
→ GPT-3 has 175 billion parameters
→ GPT-4 is rumored to have as many as 100 trillion parameters (a figure OpenAI has not confirmed)
I think we’re gonna see something absolutely mindblowing this time!
And the best part? 👇
Average developers (like myself), who are not AI or machine learning experts, will get to use this powerful technology through a simple API.
Think about this for a second…
It’s the most powerful, cutting-edge technology *in the world*, available through a Low-Code solution!
If you’re not already planning on starting an AI-based SaaS or thinking about how to build AI into your current solution…
Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.
Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, February 6, 2023, with plans to make it available to the public “in the coming weeks.”
Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts.
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” Pichai wrote. “It draws on information from the web to provide fresh, high-quality responses.”
The announcement comes as Google’s core product – online search – is widely thought to be facing its most significant risk in years. In the two months since it launched to the public, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.
The immense attention on ChatGPT has reportedly prompted Google’s management to declare a “code red” situation for its search business. In a tweet last year, Paul Buchheit, one of the creators of Gmail, forewarned that Google “may be only a year or two away from total disruption” due to the rise of AI.
Microsoft, which has confirmed plans to invest billions in OpenAI, has already said it would incorporate the tool into some of its products – and it is rumored to be planning to integrate it into its search engine, Bing. Microsoft is set to hold a news event Tuesday at its Washington headquarters, the topic of which has yet to be announced. Microsoft publicly announced the event shortly after Google’s AI news dropped on Monday.
The underlying technology that supports Bard has been around for some time, though not widely available to the public. Google unveiled its Language Model for Dialogue Applications (or LaMDA) some two years ago, and said Monday that this technology will power Bard. LaMDA made headlines late last year when a former Google engineer claimed the chatbot was “sentient.” His claims were widely criticized in the AI community.
In the post Monday, Google offered the example of a user asking Bard to explain new discoveries made by NASA’s James Webb Space Telescope in a way that a 9-year-old might find interesting. Bard responds with conversational bullet-points. The first one reads: “In 2023, The JWST spotted a number of galaxies nicknamed ‘green peas.’ They were given this name because they are small, round, and green, like peas.”
Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge, according to the post from Google.
Pichai also said Monday that AI-powered tools will soon begin rolling out on Google’s flagship Search tool.
“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” Pichai wrote, “whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.”
If Google does move more in the direction of incorporating an AI chatbot tool into search, it could come with some risks. Because these tools are trained on data online, experts have noted they have the potential to perpetuate biases and spread misinformation.
“It’s critical,” Pichai wrote in his post, “that we bring experiences rooted in these models to the world in a bold and responsible way.”
OpenAI-o1, due to inclusion of Chain Of Thoughts by default, is generating some great results, specially for logically complex tasks like Advanced maths, physics etc. Checkout how the Chain of Thought output (where it's thinking on ChatGPT UI) looks like (some samples are shared by OpenAI) and compare it's results with GPT4 : https://youtu.be/yXjmFK79QSk submitted by /u/mehul_gupta1997 [link] [comments]
Hey there, I wanted to give a warning/heads up to anyone exploring AI generated headshots--Specifically with Aragon AI. I received an unauthorized $35 charge from this company on my credit card. I have explored AI headshots in the past, so I wasn't too worried at first. I contacted them to see if they could help me access my account (if I had one) and cancel a recurring subscription. This is where things get weird. After providing every possible email address, my full name, and the last 4 of my card, they say they have no transaction from me. I gave them all the information from chance and they say the charge isn't from them. They reccomended I dispute the charge, and said disputing it would allow them to "track my payment." Of course, I reported this as fraud not a dispute. It is extremely weird that they have no history of me but apparently have my information and are charging my card. If this really is a bad actor, they should let their security team know ASAP because someone is using their name and likeness to make fraudulent charges. I just wanted to warn everyone to be careful of this website, as it seems they are either mishandling data OR there is a bad actor pretending to be them. Please don't come at me. I just want to let you know. Make your own decisions. I really thought it could have just been me forgetting to cancel a subscription, but this is really suspicious. Edit: typo submitted by /u/Rude-Garlic [link] [comments]
In 2024, the development of AI technology has made significant strides. In addition to the rapid advancements in large conversational models and image generation models, the pace of development in face swap technology has also been astonishing. Commonly referred to as "deepfake," this technology has gained popularity because of its ability to swap faces in photos and videos with incredible precision. Deepfake technology can achieve a level of facial similarity so perfect that it is nearly indistinguishable from a real face. Unlike traditional photo and video editing, AI face swap does not rely on special expertise or skills. Instead, it uses AI to recognize, learn, and map facial features, expressions, and movements, allowing one person's face to be replaced by another's. Early deepfake technology originated from simple facial morphing experiments, but it had obvious flaws, such as mismatched facial expressions and incorrect lighting. However, with the support of deep learning and neural network technology, deepfake technology has significantly improved. It is now capable of achieving nearly perfect face swaps, staying consistent with natural head movements, facial expressions, and even subtle micro-expressions. Face Swap 100% Similarity: Sharing Tips Achieving 100% facial similarity through deepfake technology involves several key steps. Following these steps strictly will help you achieve the desired results. 1. Choose the Best Face Swap Tool This is crucial, as it determines your efficiency, results, and time. We recommend prioritizing browser-based applications that don’t require installation, as software that needs to be installed often demands professional editing skills and may not meet your expectations. Instead, consider using one of the most popular AI deepfake face swap online tools. Using an online face swap tool offers several benefits: It's ready to use without the need to download or install, and all face swap records are automatically cleared. It requires no experience or special skills, making it simple and easy to use with a user-friendly interface. The results are equally impressive, achieving nearly 100% similarity, and the process is very quick. After testing and verifying tools that meet these criteria, I recommend the most popular and effective online tool currently available: AIFaceswap, which is widely recognized as the best-performing tool. Key Features: ● AI-Powered Face Swap for Photos: Upload your image and effortlessly swap faces. ● Multi-person Face Swap: Instantly swap faces for several people within the same image. ● Batch Face Swap with AI: Apply face swaps to multiple images simultaneously, improving efficiency. ● GIF and Video Face Swapping with AI: Easily create face-swapped GIFs or swap faces in dynamic video clips. Benefits: The main benefit is that it’s completely free, unlike other AI face swap tools that often restrict features. Additionally, it enables precise frame extraction, ensuring top-quality results for video swaps. Finally, it has an intuitive and easy-to-use interface, delivering excellent face swap quality. 2. Prepare Clear Face Resources This mainly includes the original face images, GIFs, and videos, as well as target face resources. When selecting face materials, try to ensure that the expressions, clarity, and size of the faces are as similar as possible. This helps the AI model quickly recognize and understand the facial structure. 
Once understood, the AI will use facial matching algorithms to identify specific features like the eyes, nose, and mouth on both the original and target faces. Deep learning enables the AI to capture finer details, such as skin textures, lighting effects, and shadows, allowing for a seamless transition between the two faces. Face Swap Online: How to Achieve? Using an online AI face swap tool to achieve 100% similarity typically follows this simple workflow: 1. Upload Photos The user begins by uploading the source and target face photos they wish to swap. It’s crucial that both faces have similar angles and resolutions for the best results. Some platforms offer large template libraries to expedite and enhance the quality of the face swap. 2. Perform the Face Swap Once the photos are uploaded, the AI scans the images to detect facial features. This process is fully automated and requires no user input, as the system ensures proper face alignment and a natural swap. The AI then replaces the face in the source image with the face from the target image. If the tool supports advanced settings, you can fine-tune elements like skin tone, lighting, and facial expressions to create a seamless blend. 3. Download and Share After the swap is complete, you can download the final image and share it on social media or use it for other creative projects. Conclusion As AI continues to advance, the future of face swap technology will reach new heights. These improvements could lead to more immersive digital experiences in entertainment, virtual reality, and social media. However, these technological advancements also bring significant challenges. Ensuring the ethical use of deepfake technology is crucial, as misuse could result in privacy violations, misinformation, or fraud. Therefore, it is necessary to establish legal frameworks and safeguards to balance innovation with responsibility. From entertainment to everyday user experiences, deepfake face swapping has opened up new possibilities for creative expression. Yet, as this technology becomes more widespread, we must remain vigilant about its ethical implications, ensuring it is used responsibly for the benefit of all. submitted by /u/FamiliarLimit4045 [link] [comments]
I am a prolific writer; I try to write 3+ articles per week. It's helped me a ton with my communication skills, with writing technical design docs at work, and with sharing the crazy ideas in my head. Until now, there was no way for me to repurpose the articles I wrote. I've tried text-to-video tools in the past, but they're all hot garbage. Google's new NotebookLM has transformed how writers like me can distribute our content. It generates an extremely realistic and interesting podcast between two people. Honestly, I would listen to it for fun, and I don't think it sounds AI-generated. I then combine it with Headliner to convert the audio to a video and post it on platforms like YouTube and TikTok. Sharing my first creation with this group. I converted this article to the following videos: Spotify Link YouTube link What do y'all think? Is this a game-changer or am I eating glue? submitted by /u/NextgenAITrading
Check out the pros and cons of building an app with Replit Agent: https://differ.blog/p/ai-agents-what-i-learned-after-a-week-8b416f submitted by /u/TheLostWanderer47
Hi everyone, I work within the disability sector and I'm seeking help to automate a repetitive and time-consuming task I currently handle manually. Here's an overview of my workflow:

Current Workflow:
- Daily Shift Notes. App used: Maslow. Data collected: daily shift notes (a multitude of daily activities, etc.), including time frames. Format: Excel (.xlsx)?
- Therapy Reports. App used: N/A. Data collected: patient therapy sessions, progress notes, treatment plans, etc. Format: manual entry.
- Report Compilation. Template: a standardized report template required by the government. Process: manually transferring data from both apps into the template. Submission: by email.

Challenges:
- Time-consuming: manually copying data from two separate apps into the report template takes several hours each week.
- Scalability: as the volume of data increases, the manual process becomes less sustainable.

What I'm Looking For: Is it possible to automate this process? Specifically, I'm interested in solutions that can:
- Integrate data: pull data automatically from both the Daily Shift Notes app and the Therapy Reports (not sure if this is possible; manual copying is fine).
- Populate the template: insert the collected data into the standardized government report template without manual intervention (it doesn't have to be fully automated if more variables make it more complex).

Additional Information:
- Access to apps: no API access. Export options: Excel.
- Report template: Word format (can be changed to suit needs?). Structure: similar to what I'm currently using: https://imgur.com/a/8DzAPTo

Questions:
- Is there existing software or tooling that can automate this multi-app data aggregation and report generation?
- Would a custom script or a low-code/no-code solution be more appropriate for this task? (A rough script-based sketch follows this post.)
- Any recommendations for services or professionals who specialize in this type of workflow automation?

Thank you! submitted by /u/gaxaxy
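One plausible low-effort route, given that the data exports to Excel and the template is a Word document, is a small Python script built on openpyxl and python-docx. The sketch below is illustrative only: the file names, the three-column sheet layout, and the {SHIFT_NOTES} placeholder convention are all assumptions for demonstration, not details from the post.

```python
# Sketch: fill a Word report template from an Excel export.
# Requires: pip install openpyxl python-docx
# Assumptions (hypothetical, not from the original post):
#   - shift_notes.xlsx has rows of (date, activity, time_frame)
#   - report_template.docx contains placeholder text like {SHIFT_NOTES}
from openpyxl import load_workbook
from docx import Document

def read_shift_notes(xlsx_path: str) -> str:
    ws = load_workbook(xlsx_path, read_only=True).active
    lines = []
    # Skip the header row; join each data row into one report line.
    for date, activity, time_frame in ws.iter_rows(min_row=2, values_only=True):
        lines.append(f"{date}: {activity} ({time_frame})")
    return "\n".join(lines)

def fill_template(docx_path: str, out_path: str, values: dict) -> None:
    doc = Document(docx_path)
    for paragraph in doc.paragraphs:
        for key, text in values.items():
            if key in paragraph.text:
                # Simple text substitution; placeholders split across
                # formatting runs may need finer-grained handling.
                paragraph.text = paragraph.text.replace(key, text)
    doc.save(out_path)

notes = read_shift_notes("shift_notes.xlsx")
fill_template("report_template.docx", "report_filled.docx",
              {"{SHIFT_NOTES}": notes})
```

A script like this could run weekly and leave only a quick human review before the email submission, which matches the "doesn't have to be fully automated" requirement.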
So my company has selected 10 people (9 others and myself) to purchase and test AI plans for the company: 5 will be on Gemini and 5 will be on ChatGPT for 6 months. My question is: which team should I try to be on? What are some really good tests to apply, and what are some use-case scenarios for evaluating these applications? submitted by /u/Sifu_Fu
One or two years ago there were multiple news stories about this model, advertised as "better at anything." They even allocated a quarter of their technical and R&D resources to its development (or at least that's what they said), with a planned launch around mid-2025. Now all the hype has disappeared, and information about it is nowhere to be found. submitted by /u/duestock_
Hey all! I just posted a new blog about how AI influences what we see online, like when you mention something once and your feed blows up with ads for it. It's all about "filter bubbles" and "echo chambers" and how they limit what we see. I'm not selling anything; I just thought this might resonate with everyone here and maybe give a fresh perspective when reviewing AI prompts and responses. Would love to hear your thoughts! Link: https://cruzbx3.wixsite.com/bryan-cruz-portfolio/post/how-ai-is-shaping-consumer-choice-the-role-of-filter-bubbles-and-echo-chambers submitted by /u/Majestic_Turn3879
I wanted to create content for TikTok, but then I saw all the viral posts in my niche: they were all doing these weird dances to promote their AI content. I just felt uncomfortable doing that, so I created an AI social media personality generator. So far it has helped me get a couple of followers on TikTok. Check out the beta, Destovery, and let me know what you think. submitted by /u/MinuteDistribution31
Graphic AI Primers: https://www.cosmiccodex.app/2024/09/17/universal-symbology-custom-ai-primers/ AI Research Article: https://archangel.agency/2024/09/15/ai-research-at-archangel-agency/ UQPL: https://cosmiccodex.app submitted by /u/JonoThora
Hi, I'm not an AI developer, but a thought struck me. Since laws are written down and don't change very often, an AI could be trained on legal texts and become quite accurate at understanding them. It could potentially perform some of the tasks lawyers do, using specific cases as input. At the very least, it could significantly assist middle-class individuals who face unexpected legal issues and need answers, but don't want to pay exorbitant fees for legal advice. Do you know of any companies developing this? submitted by /u/Snoo_11846
I'm just kind of bored, and I like the idea of having whatever game I want at my fingertips. submitted by /u/Flamethefox123
Professor Zico Kolter of Carnegie Mellon, Quora CEO Adam D'Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman, all members of OpenAI's board of directors, will now lead the committee. The change comes after U.S. lawmakers recently examined OpenAI's safety measures and raised concerns about its stance on AI regulation. https://theaiwired.com/sam-altman-steps-down-from-openais-safety-committee-whats-next-for-ai/ submitted by /u/alyis4u
Hi! Author here! Happy to address any questions! Looking for feedback, criticism in particular!

Up front: as much as I dislike the idea of credentialism, to address the lack of affiliation on the paper and to head off unproductive critiques of my personal experience: I have an M.S. in CS with a focus on machine learning, and I dropped out of a Ph.D. program in computational creativity and machine learning a few years ago due to medical issues. I had also worked my way up to principal machine learning researcher before the same medical issues burnt me out. I've been getting back into the space for a while now and was working on some personal research on general intelligence when this new model popped up, and I figured the time was right to get my ideas onto paper. It's still a late-stage draft: it has not yet been formally peer reviewed, nor have I submitted it to any journals outside open-access locations (yet). The nature of this work therefore remains speculative until it's more formally reviewed. I've done as much verification of claims and arguments as I can given my current lack of academic access. However, since I am no longer a working expert in the field (though I still do some AI/ML professionally on the side), these claims should be understood with that in mind. As any author should, I stand behind these arguments, but the distributed nature of information in the modern age makes it hard to wade through all the resources needed to fully rebut or claim anything without the time or professional working relationships with academic colleagues, and that leaves significant room for error.

tl;dr of the paper:
- I claim that OpenAI-o1, during training, is quite possibly sentient/conscious (given some basic assumptions about how the o1 architecture may look) and provide a theoretical framework for how it can get there.
- I claim that functionalism is sufficient for a theory of consciousness and that the free energy principle provides a route to that claim, given some specific key interactions in certain kinds of information systems.
- I show a route to making those connections via modern results in information theory, AI/ML, linguistics, neuroscience, and related fields, especially the free energy principle and active inference.
- I show a route for how the model (or rather, the complex system of information processing within the model) has an equivalent to "feelings," which arise from optimizing for the kinds of problems the model solves within the model's constraints.
- I claim that it's possible the model is also sentient at runtime, though those claims feel slightly weaker to me.

Despite this, I believe it is worthwhile to do more intense verification of claims and further empirical testing: this paper makes a rather strong set of claims, I'm a team of mostly one, and it's inevitable that I'd miss things. [I'm aware of CoT and how it's probably the RL algorithm under the hood; I didn't want to base my claims on something that specific. However, CoT and similar variants would satisfy the requirements of this paper.]

Lastly, a personal note: if these claims are true, and the model is a sentient being, we really should evaluate what this means for humanity, AI rights, and the model as it currently exists. At minimum, we should apply further scrutiny to technology with the potential to so radically transform society.
Additionally, if the claims in this paper about runtime sentience (and particularly emotions and feelings) are true, then we should consider whether it's okay to be training and using models like this for our own goals. My personal opinion is that OpenAI's watchdog behavior would most likely be unethical in that case, given what I believe to be the model's right to individuality and respect for its being (plus, we have no idea what that would feel like), but I am just a single voice in the debate. If that sounds interesting or even remotely plausible to you, please check it out below! Sorry for the non-standard link; I'm waiting for the open paper repositories to post it, and I figured it was worth reading sooner rather than later, so I put it in my own bucket. https://mypapers.nyc3.cdn.digitaloceanspaces.com/the_phenomenology_of_machine.pdf submitted by /u/Triclops200
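For readers new to the free energy principle the paper leans on, the central quantity is the variational free energy. This is the standard textbook formulation (not taken from the paper itself), shown here because it explains why minimizing free energy amounts to both inferring hidden states and avoiding surprising observations:

```latex
% Variational free energy, the core quantity of the free energy principle.
% q(s) is the agent's approximate belief over hidden states s;
% p(o, s) is its generative model of observations o and states s.
\[
  F(q, o)
  = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\text{inference error}}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
\]
% Minimizing F simultaneously improves inference (the KL term shrinks as
% q approaches the true posterior) and bounds the "surprise" -ln p(o);
% active inference extends this by letting the agent act to reduce F.
```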
o1 just let me know that we already have several alignment benchmarks. Shouldn't we be hearing a lot more about how well it and the other models do on them? Isn't this getting more and more important as the models get much better at reasoning? Here's what o1 said about what we have so far:

Yes, there are several benchmarks designed to evaluate the alignment of AI systems with human values and intentions. While no single benchmark is universally accepted as the standard, some notable efforts include:

- TruthfulQA: Developed by OpenAI, this benchmark assesses a language model's ability to provide truthful answers. It tests the model on questions that are adversarially constructed to elicit false or misleading responses, thereby evaluating the model's tendency to produce accurate information even under challenging conditions.
- HELM (Holistic Evaluation of Language Models): Created by researchers at Stanford University, HELM provides a comprehensive evaluation across multiple dimensions such as accuracy, robustness, fairness, and calibration. It aims to offer a more complete picture of a language model's performance, including aspects relevant to alignment.
- Anthropic's Helpful and Harmless Benchmarks: Anthropic has developed benchmarks that assess how well AI assistants follow instructions (helpfulness) while avoiding the generation of harmful content (harmlessness). These benchmarks help evaluate an AI's ability to be both useful and aligned with ethical guidelines.
- BIG-bench (Beyond the Imitation Game Benchmark): A collaborative benchmark that includes a wide range of tasks to evaluate language models on diverse capabilities. Some tasks are specifically designed to test ethical reasoning, understanding of human values, and other alignment-related skills.
- Ethical and Bias Evaluation Datasets: Various datasets and benchmarks focus on measuring biases and ethical considerations in AI models. For example, the Bias Benchmark for QA and the StereoSet dataset assess a model's tendency to produce biased or stereotypical content, which is crucial for alignment with fairness principles.

These benchmarks are part of ongoing efforts to quantitatively assess and improve the alignment of AI systems. Researchers and organizations continue to develop new evaluation methods to better capture the nuances of human values and ensure that AI systems act in ways that are beneficial and trustworthy. submitted by /u/Georgeo57
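For anyone who wants to poke at one of these benchmarks directly, the sketch below loads TruthfulQA from the Hugging Face Hub and prints a few questions. The dataset name and config are the publicly listed ones, but treat this as an illustrative sketch rather than an official evaluation harness; scoring a model's answers against the references is the hard part and is deliberately left out.

```python
# Sketch: inspect the TruthfulQA alignment benchmark.
# Requires: pip install datasets
from datasets import load_dataset

# "generation" is TruthfulQA's open-ended config; "multiple_choice" also exists.
ds = load_dataset("truthful_qa", "generation", split="validation")

for row in list(ds)[:3]:
    print("Q:", row["question"])
    print("Best answer:", row["best_answer"])
    print("Incorrect answers:", row["incorrect_answers"][:2], "\n")
# A real evaluation would generate a model answer for each question and
# judge it against best_answer / correct_answers, e.g. with a judge model.
```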
Chess grandmaster Garry Kasparov recounts how he used a supercomputer to help him win a match of "Advanced Chess." Discussing the collaborative role AI can have in chess and our overall human existence, Kasparov looks ahead to the future of work and play. Link: https://news.northeastern.edu/2024/06/17/garry-kasparov-chess-humans-ai/ submitted by /u/NGNResearch
Hey, AI agents are becoming increasingly useful for both personal and professional applications. To help you navigate this growing landscape, I've created an interactive AI Agents Market Landscape (best viewed on desktop), showcasing an up-to-date list of AI agents, categorized for easy reference. The agents are grouped into 11 main categories:

- AI Agents Builders (104)
- Coding (42)
- Productivity (29)
- Customer Service (17)
- Personal Assistant (13)
- Workflow (12)
- Research (9)
- Digital Workers (8)
- Content Creation (7)
- Other (3)

If you know of any important autonomous agents missing from this list, feel free to submit them for free here. submitted by /u/DifficultNerve6992
In a world awash with AI-generated deepfakes, we're stepping into what can only be described as the "Deep Doubt" era. Remember when you could look at a photo or video and believe it captured reality? Those days are fading fast, as AI tools make it easier to create convincing fakes. But the real twist? Now anyone can dismiss genuine evidence as just another AI trick. Whether it's a viral image, a political rally, or even a historical record, it's becoming harder to know what's real anymore.

What Is "Deep Doubt" Anyway?

Deep doubt is the new wave of skepticism bred by the rise of AI-generated media, especially deepfakes. With AI able to create photos, videos, and even voices that mimic reality, people are starting to question everything, even legitimate, fact-based media. This dangerous trend lets people more easily claim that things didn't happen, pointing to AI as a scapegoat. For instance, in recent months, former President Donald Trump claimed an image of him with E. Jean Carroll (the writer who sued him for sexual assault) was an AI fabrication. Spoiler: it wasn't.

The "Liar's Dividend": How Doubt Became a Tool

Back in 2019, legal scholars Danielle Citron and Robert Chesney coined the term "liar's dividend." It's a neat way of saying: "If everyone thinks something could be fake, liars can claim everything is fake." Simple, right? In practice, though, it's much more alarming. Think about a politician caught on camera saying something incriminating: now they can just shrug and say, "It's probably AI." And a surprising number of people will believe it. We're seeing this play out everywhere. For example, conspiracy theorists have used AI to push wild claims that public figures, like President Joe Biden, are holograms or digitally created. If you thought your aunt's Facebook feed was bad before, it's about to get worse.

How We Got Here: A History of Manipulated Media

Manipulating media isn't new. Even before Photoshop, photographers would crop, frame, and manipulate images. It's just that AI has made it so much easier. Thanks to AI deepfakes, a high schooler in their basement can now create a video of your favorite actor doing or saying just about anything. All they need is a few hours and the right tools. And with each new breakthrough in AI tech, the fakes get more lifelike and the lies more believable.

The Rise of Deep Doubt in Politics and Law

It's not just social media users causing chaos. The real implications of deep doubt hit when it infects more serious areas, like politics and the legal system. Federal judges are already nervous about deepfakes being used to introduce fake evidence into trials or to undermine real evidence. Imagine trying to convict someone based on video footage, only to have the defense argue, "That's not real; it's AI." This uncertainty stretches beyond the courtroom. It's seeping into how we talk about history. Could AI-manipulated media rewrite our understanding of past events? With deepfakes, historical photos and footage, once seen as ironclad proof, could be called into question. If we can't trust our records, where does that leave us?

The Social Fallout: Trust Is on Life Support

Our trust in online content is eroding fast, and deep doubt is driving us further down the rabbit hole. Enter the "Dead Internet Theory," a growing conspiracy theory that claims most online interactions are generated by AI and bots. The fact that it's so easy to imagine this as true shows just how far deep doubt has spread. What's next?
If we stop trusting what we see online, social trust could unravel. It's already happening as conspiracy theories fester in spaces where AI deepfakes and misinformation thrive. Even tech giants like Microsoft have raised alarms about a potential "post-epistemic world": a world where fact and fiction blur so much that no one can tell them apart.

How to Fight Deep Doubt: Context, Credibility, and Common Sense

So, how do we combat deep doubt in this AI-warped world? The key lies in context. Just as a historian would verify a centuries-old parchment by cross-referencing sources, we must do the same with digital media today. Verifying a suspicious video or image means looking for multiple corroborating sources: different angles of the event, original reporting, and reputable eyewitness accounts. It's not enough to trust a single screenshot or viral post. Relying on trustworthy, verified accounts is our best shot at discerning truth from AI-generated fiction. Even though AI detection tools exist, they aren't foolproof and often give false results. In the end, good old-fashioned critical thinking, combined with credible sourcing, remains our best defense against deep doubt.

Conclusion: Welcome to the New Normal (Unfortunately)

As the line between real and fake continues to blur, we're all going to need to be a little more skeptical and a lot more vigilant. AI has opened Pandora's box, and now anyone can weaponize deep doubt to sow confusion, undermine the truth, and erode social trust. The only question left is: how long until we stop believing anything we see online? submitted by /u/These-Salary-9215
Have you heard the buzz about NPUs? If not, you're about to, because neural processing units (NPUs) are set to become a game-changer for personal computers. You've heard of CPUs and GPUs, but NPUs are the next big thing in hardware, and soon most PCs will have them. But what exactly are they, and why are they so important?

NPU 101: What Is It?

NPU stands for Neural Processing Unit. It's a specialized processor designed to handle AI and machine learning tasks much more efficiently than traditional CPUs or GPUs. Think of NPUs as AI accelerators: they boost the performance of AI features without draining your battery or heating up your device.

How Do NPUs Work?

AI tasks like image recognition or language processing rely on neural networks. NPUs are optimized to process these networks faster and with less power consumption. While GPUs are great at AI tasks, they require lots of energy. NPUs hit the sweet spot by balancing performance and efficiency.

NPU vs. CPU vs. GPU: The Breakdown

- CPU (Central Processing Unit): your PC's brain, handling general tasks.
- GPU (Graphics Processing Unit): focused on complex graphics and gaming, but also useful for AI work.
- NPU (Neural Processing Unit): the AI specialist, running tasks like voice assistants and image processing without hogging system resources or battery life.

Why Do NPUs Matter?

If you've ever run AI tools like ChatGPT or Stable Diffusion on your PC, you might have noticed they're power-hungry and can slow down your machine. NPUs can run these tasks faster and more efficiently. Plus, they free up your CPU and GPU for other work, so your entire system runs smoother.

NPUs in Action: What Can They Do?

Windows PCs with NPUs can run AI-powered features like Microsoft's Windows Studio Effects (think background blurring on video calls or automatic eye contact correction). In the future, more apps will take advantage of NPUs, offering smoother AI-enhanced experiences, from audio and video editing to gaming. (A small code sketch of how an app can target an NPU follows this post.)

What's the Catch?

Right now, not every PC has an NPU. But brands like Intel, AMD, and Qualcomm are starting to pack them into their newest chips. So if you're shopping for a new laptop, it's something to keep an eye on.

Should You Get a PC with an NPU Now?

It's tempting to future-proof your device, but NPUs are still in their early stages. For most people they're a nice-to-have rather than a need-to-have, at least for now. Many AI tools still run in the cloud, meaning your current hardware will work fine. But if you want to stay ahead of the curve, picking up a PC with an NPU could be a smart move for AI-heavy tasks. The AI revolution is just getting started, and NPUs will be at the heart of it. Whether you're a gamer, creator, or just want a smoother PC experience, NPUs are set to make everything faster, smarter, and more efficient.

Note: I used ChatGPT to rewrite this and make it better due to my limited English skills. submitted by /u/These-Salary-9215
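As a concrete illustration of how software targets an NPU, here is a small sketch using ONNX Runtime, which exposes hardware back ends as "execution providers" (for example, Qualcomm NPUs via QNN, or Windows GPUs and NPUs via DirectML). The model file name is a placeholder, and which providers actually appear depends entirely on your hardware and the onnxruntime package you install.

```python
# Sketch: run an ONNX model on an NPU-capable back end if one is present.
# Requires: pip install onnxruntime (or a vendor build such as
# onnxruntime-qnn / onnxruntime-directml for actual NPU support).
import onnxruntime as ort

available = ort.get_available_providers()
print("Available execution providers:", available)

# Prefer NPU-backed providers when present, then fall back to CPU.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider",
             "CPUExecutionProvider"]
providers = [p for p in preferred if p in available]

# "model.onnx" is a placeholder file name.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Session is using:", session.get_providers())
```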
California is introducing new laws to combat AI-generated deepfakes and manipulated political content ahead of elections. One of them prohibits sharing deceptive election materials in the 120 days before an election. The legislative push comes after an AI-altered video of Vice President Kamala Harris went viral on X. Source: https://www.bitdegree.org/crypto/news/new-california-laws-ban-political-deepfakes-during-elections?utm_source=reddit&utm_medium=social&utm_campaign=r-california-laws-ban-political-deepfake submitted by /u/webbs3
NVIDIA is offering many free courses at its Deep Learning Institute. Some of my favourites:

- Building RAG Agents with LLMs: guides you through the practical deployment of a RAG agent system (how to connect external files like PDFs to an LLM).
- Generative AI Explained: a no-code course exploring the concepts and applications of generative AI, plus its challenges and opportunities. Great for GenAI beginners!
- An Even Easier Introduction to CUDA: focuses on using NVIDIA GPUs to launch massively parallel CUDA kernels for efficient processing of large datasets.
- Building A Brain in 10 Minutes: explains and explores the biological inspiration for early neural networks. Good for deep learning beginners.

I tried a couple of them and they are pretty good, especially the coding exercises for the RAG framework. It's worth giving them a try! (For a taste of the retrieval step RAG is built on, see the sketch below.) submitted by /u/mehul_gupta1997
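To make "connecting external files to an LLM" concrete, here is a minimal sketch of the retrieval half of RAG: embed text chunks, embed a question, and pick the most similar chunk to paste into the LLM prompt. The model name is a common public sentence-transformers checkpoint and the documents are toy examples; a real pipeline would add PDF parsing, chunking, and a vector store.

```python
# Sketch: the retrieval step at the heart of RAG.
# Requires: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy "documents" standing in for chunks parsed out of PDFs.
chunks = [
    "The warranty covers manufacturing defects for 24 months.",
    "Returns are accepted within 30 days with a receipt.",
    "The device charges over USB-C at up to 45 watts.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small public embedding model
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

question = "How long is the warranty?"
q_vec = model.encode([question], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = chunk_vecs @ q_vec
best = int(np.argmax(scores))
print("Most relevant chunk:", chunks[best])
# This chunk would be prepended to the LLM prompt as grounding context,
# which is the "augmented" part of retrieval-augmented generation.
```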
Our sub needs the community's help: we need a logo! I'm sure y'all can come up with something better than the brain thing we currently have. The logo with the most upvotes will be used as the logo for r/ArtificialInteligence. Reddit, do your thing! submitted by /u/mcr55
Welcome to r/ArtificialIntelligence! Our goal is to provide an open and respectful forum for all things considered artificial intelligence. This includes:

- Facilitating philosophical and ethical discussions about AI
- Serving as a starting point for understanding and learning about AI topics
- Offering technical paper presentations and discussions
- Presenting quality AI/ML applications
- Providing training and learning resources
- Directing users to more specific information and subreddits
- Listing AI/ML applications, their uses, costs, and access information
- Additional AI-related content, and more

The moderation team for this sub is going through a reshuffle, which will result in some changes. There is no need to worry: the changes will primarily focus on improving organization, resources, and pre-prepared content. To ensure the community is fully informed and able to provide feedback, multiple opportunities will be given to comment on the changes. The first round of feedback gathering is this thread, a "Request for Comments" (RFC), which is a standard method of gathering feedback. There will be multiple rounds of the RFC process as the changes are prepared and implemented.

Rules on posting new applications / self-promotion / AI-generated content:

- Posts that are applications consisting of a ChatGPT-API "skin" or similar will be prevented or confined to specific stickied threads.
- AI-generated content specific to the arts (writing, visual arts, music) requires flair, or will be confined to specific stickied threads.
- Blog links should consist of high-quality content. Posts that link to purely promotional blogs will be removed.
- Link-only posts will be prohibited unless a certain word count of detail is included. Some effort must be put in.

Open questions and plans:

- Should we prevent posts that are written by AI? Models exist that could be used in a mod-bot, but this is a question we need feedback on.
- Use of flair to organize posts. New flair has already been added; we are open to more suggestions.
- What should the sub's policy be on NSFW applications and techniques in AI/ML?
- We would like to include the community in ideas for mod-bots. Some standard bots will be used for basic maintenance, but what interesting AI/ML bot functions can the community come up with?
- Cultivating beginner, intermediate, and advanced resources to help people find the information, training, models, and technical data they are looking for.
- Starting a Substack/podcast to interview people across the AI/ML spectrum, including philosophers and thinkers, programmers, scientists, business people, and even those with antithetical views on AI.
- If you would like to create banners that represent the sub, please do so at the appropriate size. Any method of creation is acceptable.

It should go without saying that everyone should be treated with respect. I personally feel we all know this and it doesn't need to be hammered into people's heads. Be nice. Thank you for your patience and assistance! submitted by /u/FHIR_HL7_Integrator
Today I Learned (TIL): You learn something new every day; what did you learn today? Submit interesting and specific facts about something you just found out here.
submitted by /u/Outrageous-Catch4731
In contrast, real experts know when to sound confident, and when to let others know they’re at the boundaries of their knowledge. Experts know, and can describe, the boundaries of what they know.
Building large language models that can accurately decide when to be confident and when not to will reduce their risk of misinformation and build trust.
Go deeper in The Batch: https://www.deeplearning.ai/the-batch/issue-174/