What is Google’s answer to ChatGPT?
Have you ever heard of ChatGPT, the AI chatbot from OpenAI that answers questions in natural language?
GPT stands for “Generative Pre-trained Transformer,” and ChatGPT is an AI-powered chatbot built on it that can answer questions with near human-level fluency. But what is Google’s answer to this technology? The answer lies in OpenAI’s core techniques: supervised learning and reinforcement learning. Let’s take a closer look at how these technologies work.
OpenAI is an artificial intelligence research laboratory that was founded by some of the biggest names in tech, including Elon Musk and Sam Altman. Founded as a non-profit, the organization seeks to develop general artificial intelligence that is safe and beneficial to society. One of its key initiatives is the development of large language models like GPT-3, the natural language processing model that underpins ChatGPT.
ChatGPT: What Is It and How Does Google Answer It?
Artificial Intelligence (AI) has been around for decades. From its humble beginnings in the 1950s, AI has come a long way and is now an integral part of many aspects of our lives. One of the most important areas where AI plays a role is in natural language processing (NLP). NLP enables computers to understand and respond to human language, paving the way for more advanced conversations between humans and machines. One of the most recent developments in this field is ChatGPT, a conversational AI developed by OpenAI that utilizes supervised learning and reinforcement learning to enable computers to chat with humans. So what exactly is ChatGPT and how does it work? Let’s find out!
ChatGPT is an AI-based chatbot developed by OpenAI.
This chatbot leverages GPT-3 (Generative Pre-trained Transformer 3), one of the most powerful natural language processing models ever created. The model uses supervised learning and reinforcement learning techniques to enable computers to understand human language and respond accordingly. Using supervised learning, GPT-3 draws on large datasets of text to learn patterns within language that can be used to generate meaningful responses. Reinforcement learning then allows the system to use feedback from conversations with humans in order to optimize its responses over time.
ChatGPT uses supervised learning techniques to train its models.
Supervised learning involves providing a model with labeled data (i.e., data with known outcomes) so that it can learn from it. This labeled data could be anything from conversations between two people to user comments on a website or forum post. The model then learns associations between certain words or phrases and the desired outcome (or label). Once trained, this model can then be applied to new data in order to predict outcomes based on what it has learned so far.
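As a rough illustration of the idea, here is a minimal, self-contained sketch of supervised learning on labeled text. The tiny dataset and the word-counting “model” are invented for illustration and are vastly simpler than anything GPT-3 uses:

```python
# Sketch of supervised learning: learn word-label associations from
# labeled examples, then apply them to new text. Data is invented.
from collections import Counter

train = [
    ("great product, loved it", "positive"),
    ("what a great experience", "positive"),
    ("terrible service, awful", "negative"),
    ("awful product, terrible", "negative"),
]

# "Training": count how often each word appears under each label.
word_label = {}
for text, label in train:
    for word in text.replace(",", "").split():
        word_label.setdefault(word, Counter())[label] += 1

def predict(text):
    # Score each candidate label by the words the model has seen before.
    scores = Counter()
    for word in text.replace(",", "").split():
        scores.update(word_label.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else "unknown"

print(predict("great service"))
```

Real models replace the word counts with learned parameters, but the principle is the same: associations between inputs and known outcomes, applied to new data.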
In addition to supervised learning techniques, ChatGPT also supports reinforcement learning algorithms which allow the model to learn from its experiences in an environment without explicit labels or outcomes being provided by humans. Reinforcement learning algorithms are great for tasks like natural language generation where the output needs to be generated by the model itself rather than simply predicting a fixed outcome based on existing labels.
Supervised learning involves feeding data into machine learning algorithms so they can learn from it. For example, if you want a computer program to recognize cats in pictures, you would provide the algorithm with thousands of pictures of cats so it can learn what a cat looks like. This same concept applies to natural language processing; supervised learning algorithms are fed data sets so they can learn how to generate text using contextual understanding and grammar rules.
Reinforcement learning uses rewards and punishments as incentives for the machine learning algorithm to explore different possibilities. In ChatGPT’s case, its algorithm is rewarded for generating more accurate responses based on previous interactions with humans. By using reinforcement learning techniques, ChatGPT’s algorithm can become smarter over time as it learns from its mistakes and adjusts accordingly as needed.
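A toy sketch of that reward-driven loop, using a simple epsilon-greedy bandit rather than ChatGPT’s actual algorithm; the canned replies and the simulated user feedback are invented:

```python
# Toy reinforcement learning loop: the "chatbot" picks one of two canned
# replies and earns a reward when the simulated user approves, so over
# time it learns to prefer the higher-reward reply.
import random

random.seed(0)
replies = ["I don't know.", "Here is a detailed answer..."]
value = {r: 0.0 for r in replies}   # estimated reward per reply
counts = {r: 0 for r in replies}

def user_feedback(reply):
    # Simulated environment: only the detailed reply earns reward.
    return 1.0 if reply == replies[1] else 0.0

for step in range(200):
    # Epsilon-greedy: mostly exploit the best-known reply, sometimes explore.
    if random.random() < 0.1:
        reply = random.choice(replies)
    else:
        reply = max(value, key=value.get)
    r = user_feedback(reply)
    counts[reply] += 1
    value[reply] += (r - value[reply]) / counts[reply]  # running average

print(max(value, key=value.get))
```

The reward signal plays the role of the human feedback described above: responses that earn higher rewards get chosen more often.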
How is ChatGPT trained?
ChatGPT is an improved version of GPT-3, fine-tuned with reinforcement learning from human feedback (RLHF). A team of about 40 labelers provided demonstrations of the desired model behavior. Its sibling model, InstructGPT, achieves comparable results with 100x fewer parameters (1.3B vs. GPT-3’s 175B).
It is trained in 3 steps:
➡️ First, they collect a dataset of human-written demonstrations on prompts submitted to the OpenAI API, and use this to train supervised learning baselines.
➡️ Next, they collect a dataset of human-labeled comparisons between two model outputs on a larger set of API prompts. They then train a reward model (RM) on this dataset to predict which output the labelers would prefer.
➡️ Finally, they use this RM as a reward function and fine-tune the GPT-3 policy to maximize this reward using the Proximal Policy Optimization (PPO) algorithm.
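The reward-model step can be sketched with the pairwise preference loss -log(sigmoid(r_chosen - r_rejected)) described in the InstructGPT paper. The linear “reward model” and toy feature vectors below are invented stand-ins for a real neural network:

```python
# Sketch of step 2: fit a reward model on pairwise comparisons so that
# preferred outputs score higher than rejected ones.
import math

def reward(features, w):
    # Toy reward model: a linear score over hand-made features.
    return sum(f * wi for f, wi in zip(features, w))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each comparison: (features of preferred output, features of rejected output).
comparisons = [
    ([1.0, 0.2], [0.1, 0.9]),
    ([0.9, 0.1], [0.2, 0.8]),
]

w = [0.0, 0.0]
lr = 0.5
for _ in range(100):
    for chosen, rejected in comparisons:
        margin = reward(chosen, w) - reward(rejected, w)
        grad_scale = sigmoid(margin) - 1.0   # d(-log sigmoid(m)) / dm
        for i in range(len(w)):
            w[i] -= lr * grad_scale * (chosen[i] - rejected[i])

# After training, the reward model ranks the preferred outputs higher.
print(reward([1.0, 0.2], w) > reward([0.1, 0.9], w))
```

Step 3 then uses such a reward function inside a PPO loop, which is well beyond a few lines of code.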
In simpler terms, ChatGPT is a variant of the GPT-3 language model that is specifically designed for chat applications. It is trained to generate human-like responses to natural language inputs in a conversational context. It is able to maintain coherence and consistency in a conversation, and can even generate responses that are appropriate for a given context. ChatGPT is a powerful tool for creating chatbots and other conversational AI applications.
How Does Google Answer ChatGPT?
Google’s answer to ChatGPT comes in the form of its own conversational AI service called Bard. Bard was initially built on a lightweight version of LaMDA, Google’s family of large language models, developed using a combination of supervised learning, unsupervised learning, and reinforcement learning. Google’s earlier research chatbot, Meena, used about 2.6 billion parameters; that made it larger than GPT-2, though far smaller than GPT-3’s 175 billion.
“We’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. We’re beginning with the U.S. and the U.K., and will expand to more countries and languages over time.”
When individuals need information or have a problem, they turn to Google for an immediate solution. We sometimes wish Google could understand exactly what we need and provide it instantly, rather than giving us hundreds of thousands of results. Why can’t it work like Iron Man’s Jarvis?
However, that future is not far off. Have you ever seen a chatbot that responds like a human being, suggests or helps like a friend, teaches like a mentor, and fixes your code like a senior developer? It is going to blow your mind.
Welcome to the new era of technology: ChatGPT!
ChatGPT, by OpenAI, uses artificial intelligence to speak back and forth with human users on a wide range of subjects. Deploying a machine-learning algorithm trained on text from across the internet, the chatbot develops a statistical model that allows it to string words together in response to a given prompt.
As per OpenAI, ChatGPT interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
What can ChatGPT do?
- It can help with general knowledge questions.
- It remembers what the user said earlier in the conversation.
- It allows users to provide follow-up corrections.
- It is trained to decline inappropriate requests.
- It can write a program in any language you prefer in real time; for example, a classification code sample using the sklearn Python library.
- It can fix a piece of code, and also explain what went wrong and how to fix it.
- It can even generate song or rap lyrics.
- And much more…
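For instance, the sklearn classification sample mentioned above might come back looking something like this (assuming scikit-learn is installed; the toy spam/ham dataset is invented):

```python
# A small text-classification example with scikit-learn: TF-IDF features
# feeding a Naive Bayes classifier, trained on a tiny invented dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["free prize, click now", "win money fast",
         "meeting at noon", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["win a free prize"]))
```

ChatGPT’s actual output will vary from run to run, but this is the general shape of what it produces for such a request.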
Some best usages of ChatGPT:
- Make a diet and workout plan
- Generate the next week’s meals with a grocery list
- Create a bedtime story for kids
- Prep for an interview
- Solve mathematical problems
- Fix a software program or write a new one
- Plan your trip and estimate expected expenses
What are ChatGPT’s limitations?
- May occasionally generate incorrect information
- May occasionally produce harmful instructions or biased content
- Limited knowledge of the world and events after 2021
ChatGPT is still taking its baby steps, so it may answer erroneously at times; even so, its manner of response will blow your mind. Some users have extolled the chatbot as a potential alternative search engine, since it generates detailed information instantly on a range of topics. That said, Google and ChatGPT are hard to compare directly: ChatGPT can provide more in-depth and nuanced answers to complex questions, while a search engine like Google is designed to return a list of relevant web pages in response to a user’s query.
Try ChatGPT here
ChatGPT is an increasingly popular AI chatbot developed by OpenAI on top of its GPT-3 natural language processing technology, using supervised learning for pattern recognition and reinforcement learning for optimization. Google’s answer comes in the form of its own conversational AI service, Bard, which likewise combines supervised learning, unsupervised learning, and reinforcement learning. Ultimately, both technologies help bridge the gap between machines and humans by letting us talk to computers almost like real friends!
With each passing day, ChatGPT will only get smarter thanks to the use of cutting edge technologies like OpenAI, supervised learning, and reinforcement learning!
ChatGPT Conversation Examples
Question: What is Google Answer to ChatGPT?
Answer: Google does not have an answer to ChatGPT. ChatGPT is an open-source chatbot created by OpenAI, which is an independent research organization that is not affiliated with Google. OpenAI is known for its research on artificial intelligence and machine learning, and has developed a number of innovative technologies in these areas. Google, on the other hand, is a multinational technology company that offers a wide range of products and services, including search, advertising, and cloud computing. While Google does have its own research efforts in artificial intelligence and machine learning, it does not have a chatbot that is similar to ChatGPT.
Question: Is Google working on a similar technology as ChatGPT?
It is possible that Google is working on technology similar to ChatGPT, as the company has a significant research effort focused on artificial intelligence and machine learning. However, I do not have any specific information about what projects Google is currently working on, and it is also important to note that Google is a large and diverse company with many different research and development efforts underway. It is possible that some teams within Google may be working on chatbot technology, but I do not have any information on this.
That being said, chatbots are a common application of artificial intelligence and machine learning, and it is likely that many companies and organizations, including Google, are exploring ways to use these technologies to improve their products and services. However, without more information, it is difficult to say for sure what projects Google may be working on in this area.
Question: What about Google Meena?
Google Meena is a chatbot developed by Google that was designed to be able to carry on conversations with humans in a natural and conversational manner. Meena was trained on a dataset of more than 2.4 billion parameters, making it one of the largest language models ever created. The goal of Meena was to improve the state of the art in chatbot technology and to create a chatbot that could understand and respond to a wide range of questions and topics in a coherent and meaningful way.
Meena was announced in January 2020 and received attention for its ability to hold relatively long and coherent conversations with humans. However, it is important to note that Meena is a research project and is not a product or service offered by Google. It is not clear if or when Google plans to release a commercial product or service based on the technology developed for Meena.
It gets even better when you ask it about the future:
Suggest possible titles for Onion articles written in 4000 AD, explain, why they are funny
“Futuristic city plagued by traffic jams, despite flying cars” – This title presents the idea of flying cars as a solution to traffic congestion, but also suggests that even with this advanced technology, the city is still plagued by traffic jams, implying that the problem is not easily solved.
“Artificial intelligence achieves self-awareness, immediately regrets it” – This title presents the idea of artificial intelligence achieving self-awareness as a major milestone, but also suggests that the AI regrets its newfound consciousness, implying that it is unhappy or uncomfortable with its situation.
“Alien invasion successfully repulsed, humans left to deal with paperwork” – This title presents the successful repulsion of an alien invasion as a major accomplishment, but also suggests that the humans are now left to deal with the tedious and burdensome task of filling out paperwork, implying that the victory is not as satisfying as it initially seemed.
“Time travel becomes possible, tourists flock to witness the signing of the Constitution” – This title presents the idea of time travel as a major technological breakthrough, but also suggests that the most popular destination for time-travelers is the mundane and uneventful signing of the Constitution, implying that the future is not as exciting as people had hoped.
“Virtual reality becomes indistinguishable from reality, no one notices” – This title presents the idea of virtual reality becoming indistinguishable from real life as a major accomplishment, but also suggests that no one cares or notices the difference, implying that the advancement is not as impressive or significant as it seems.
New slogan for McDonald’s by ChatGPT
Should… should we be concerned?
I asked ChatGPT to write a poem about rust
ChatGPT chooses Democrats over Republicans
- It is not intended to do that
- If you choose to do that, you will have a lot of badly broken code mixed in with working code
- Like an annoying coworker, it delivers highly confident incorrect explanations about why its broken code is perfect. They sound very convincing. “Wanna buy a timeshare?” says GPT
- Our industry has managers who cannot tell working code from defective code. This does not bode well for a team replaced by ChatGPT in its current form.
Should it? No.
Can it? No.
Will it? Sadly, programmers will have no say in this matter, once again. It might.
Yes, and it is very helpful advertising as well.
This last week or so has seen starry eyed projections about what ChatGPT can do, along with hugely impressive examples of its output.
It is hugely impressive.
Thankfully, more output examples have emerged which helpfully show what it cannot do. One of those things is writing computer code, which it can do only partially successfully. Many examples now exist that are just plain wrong and contain defects. But ChatGPT – like the annoying kid at Uni – cheerfully spits out these examples, with its over-confident hubris in explaining the code.
This is a dangerous thing. The positive examples will reinforce the idea that we can leave code writing to this robot now. The people most vulnerable to this delusion are those who cannot assess for themselves whether the GPT code is right or wrong.
These are almost by definition the people hoping for answers on stack overflow.
As stack overflow aims to be a high quality resource, it really does not want many of its top answers to be incorrect code. As – clearly – people have been running scripts that throw a stack overflow question into GPT and upload its output, we can now write incorrect code at staggering speeds.
To err is human, as the old saying goes. To truly foul up requires a Python script and a web API to both GPT and Stack Overflow.
Clearly, there is value in GPT. But at least for now, it needs to be kept on a very short leash, watched over by those who know what they are doing.
It is definitely not yet ‘consumer grade replace-a-professional’ material.
Write a screenplay about the status of ChatGPT.
More about ChatGPT with its wonder, worry and weird
ChatGPT reached 1 million users in less than a week; OpenAI’s latest large language model (LLM) has taken the AI industry by storm.
ChatGPT is expected to be:
– replacing Google search, maybe even killing Google.
– replacing customer service agents.
– replacing conversation designers.
ChatGPT is a wonder because:
– It can have actual conversations: it understands pronouns, stays consistent, remembers, and manages context.
– It seems like the next generation of personal assistants: one that finds you a proper diet, creates a meal plan, and writes the subsequent shopping list.
– It can create an SEO strategy, including backlinks, target keywords, a content plan, and article titles, at the level of an SEO professional.
– It is fun, too, such as writing a rap in the style of Eminem.
There are some worries about ChatGPT because:
– ChatGPT can actually debug code, but it’s not quite reliable enough yet.
– It has fundamental limitations as an assistant for enterprise use cases.
– It cannot complete complex actions, such as updating multiple APIs, or be fully auditable.
– The general idea is that, LLMs like this can produce nonsense. Once you discover that it can produce nonsense, you stop believing it to be reliable.
– What if its fluent conversation prevents us from realizing that it is nonsense, and we simply continue the conversation?
– In this case, the edges and limitations of the system would be hidden and trust would eventually grow.
– The impact of mass adoption of such technology remains to be seen.
Moving forward with ChatGPT
– There’s no doubt that LLMs will have a big impact on our world.
– While the future looks exciting and promising, let’s not forget that it’s very early days with these things. They’re not ready yet.
– There are some fundamental societal and ethical considerations.
How powerful is OpenAI’s new GPT-3 deep learning model?
“Powerful” is a pretty subjective word, but I’m pretty sure we have a right to use it to describe GPT-3. What a sensation it caused in June 2020, that’s just unbelievable! And not for nothing.
I think we can’t judge how powerful the language model is, without talking about its use cases, so let’s see how and where GPT-3 can be applied and how you can benefit from it.
- Generating content
GPT-3 positions itself as a highly versatile and talented tool that could potentially replace writers, bloggers, philosophers, you name it! It’s also possible to use it as a personal Alexa that will answer any question you have. What’s more, because GPT-3 knows how to analyze data and make predictions, it can generate horoscopes for you, or predict the winner of a game.
You may already be surprised by all the GPT-3 capabilities, but hold on for more: it can create a unique melody or song for you, create presentations, CVs, generate jokes for your standup.
GPT-3 can translate English into other languages. While traditional dictionaries provide a translation without taking the context into account, GPT-3 uses context and is far less likely to make the silly mistakes that result in misunderstanding.
- Designing and developing apps
Using GPT-3 you can generate prototypes and layouts – all you have to do is provide a specific description of what you need, and it’ll generate the JSX code for you.
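A hypothetical sketch of such a request: the payload below follows the general shape of OpenAI’s completions API at the time, but the model name, prompt wording, and parameters are illustrative assumptions, and no network call is made here.

```python
# Build (but do not send) a completions-style request asking GPT-3 to
# generate JSX from a plain-English description of a UI.
import json

def build_completion_request(description):
    # Hypothetical request body; field names mirror the completions API.
    return {
        "model": "text-davinci-003",
        "prompt": f"Generate JSX for the following UI:\n{description}\nJSX:",
        "max_tokens": 256,
        "temperature": 0.2,   # low temperature for more deterministic code
    }

request = build_completion_request("a login form with email and password")
print(json.dumps(request, indent=2))
```

In practice you would POST this body, with an API key, to the completions endpoint and read the generated JSX out of the response.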
In September 2020, Microsoft acquired an exclusive license to the technology behind GPT-3, but that doesn’t mean you have to give up your dreams – you can join a waitlist and try GPT-3 out in beta.
All in all, I believe GPT-3 capabilities are truly amazing and limitless, and since it helps get rid of routine tasks and automate regular processes, we, humans, can focus on the most important things that make us human, and that can’t be delegated to AI. That’s the power that GPT-3 can give us.
Why does ChatGPT give incorrect and unreliable results for simple arithmetic problems (e.g., it gave me three different incorrect answers to 13345*6748)? We’ve had software that can accurately do arithmetic for decades, so why can’t an advanced AI? By Richard Morris
What is remarkable is how well ChatGPT actually does at arithmetic.
In this video at about 11 min, Rob Mills discusses the performance of various versions of the GPT system, on some simple arithmetic tasks, like adding two and three-digit numbers.
Smaller models with 6 billion parameters fail at 2 digit sums, but the best model (from two years ago), has cracked 2 digit addition and subtraction and is pretty good at 3 digit addition.
What makes this remarkable is that this is not a job it has been trained to do. Large language models are basically predictive text systems set up to produce the next word in an incomplete sentence. There are a million different 3-digit addition sums, and most have not been included in the training set.
So somehow the system has figured out how to do addition, but it needs a sufficiently large model to do this.
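For contrast, conventional software gets the product from the question right trivially and exactly, because integer arithmetic is built into the hardware rather than inferred from text:

```python
# Exact integer arithmetic: the multiplication ChatGPT fumbled is a
# single deterministic operation for ordinary software.
a, b = 13345, 6748
print(a * b)   # → 90052060, exact, every time
```

An LLM, by contrast, has to predict the digits of the answer as if they were words in a sentence, which is why it can emit three different wrong products for the same question.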
Andrew Ng on ChatGPT
Playing with ChatGPT, the latest language model from OpenAI, I found it to be an impressive advance from its predecessor GPT-3. Occasionally it says it can’t answer a question. This is a great step! But, like other LLMs, it can be hilariously wrong. Work lies ahead to build systems that can express different degrees of confidence.
For example, a model like Meta’s Atlas or DeepMind’s RETRO that synthesizes multiple articles into one answer might infer a degree of confidence based on the reputations of the sources it draws from and the agreement among them, and then change its communication style accordingly. Pure LLMs and other architectures may need other solutions.
If we can get generative algorithms to express doubt when they’re not sure they’re right, it will go a long way toward building trust and ameliorating the risk of generating misinformation.
In contrast, real experts know when to sound confident, and when to let others know they’re at the boundaries of their knowledge. Experts know, and can describe, the boundaries of what they know.
Building large language models that can accurately decide when to be confident and when not to will reduce their risk of misinformation and build trust.
Go deeper in The Batch: https://www.deeplearning.ai/the-batch/issue-174/
ChatGPT to save time with insurance denials
Tech Buzzwords of 2022, By Google Search Interest
I just answered a similar question.
Short answer is, “Hahahahahahaha no.”
As I point out in the other answer, Wix has been around over a decade and a half. Squarespace has been around almost two decades. Both offer drag-and-drop web development.
Most people are awful at imagining what they want, much less describing it in English! Even if ChatGPT could produce flawless code (a question which has a similar short answer), the average person couldn’t describe the site they wanted!
The expression a picture is worth a thousand words has never been more relevant. Starting with pages of templates to choose from is so much better than trying to describe a site from scratch, a thousand times better seems like a low estimate.
And I will point out that, despite the existence of drag-and-drop tools that literally any idiot could use, tools that are a thousand times or more easier to use correctly than English, there are still thousands of employed WordPress developers who predominantly create boilerplate sites that literally would be better created in a drag and drop service.
And then there are the more complex sites that drag-and-drop couldn’t create. Guess what? ChatGPT isn’t likely to come close to being able to create the correct code for one.
In a discussion buried in the comments on Quora, I saw someone claim they’d gotten ChatGPT to load a CSV file (a simple text version of a spreadsheet) and to sort the first column. He asked for the answer in Java.
I asked ChatGPT for the same thing in TypeScript.
His response would only have worked on the very most basic CSV files. My response was garbage. Garbage with clear text comments telling me what the code should have been doing, no less.
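For reference, the same CSV task is a few lines in Python; this sketch uses the standard-library csv module (so quoted fields are handled too) with an invented inline dataset:

```python
# Load a CSV and sort its data rows by the first column.
import csv
import io

data = "name,score\ncarol,3\nalice,1\nbob,2\n"
reader = csv.reader(io.StringIO(data))
header, *rows = list(reader)
rows.sort(key=lambda row: row[0])   # sort by first column
print([header] + rows)
```

The point is not that the task is hard; it is that a tool which sometimes emits garbage for it cannot be trusted unsupervised.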
ChatGPT is really good at what it does, don’t get me wrong. But what it does is fundamentally and profoundly the wrong strategy for software development of any type. Anyone who thinks that “with a little more work” it will be able to take over the jobs of programmers either doesn’t understand what ChatGPT is doing or doesn’t understand what programming is.
Fundamentally, ChatGPT is a magic trick. It understands nothing. At best it’s an idiot-savant that only knows how to pattern match and blend text it’s found online to make it seem like the text should go together. That’s it.
Text, I might add, that isn’t necessarily free of copyright protection. Anything non-trivial that you generate with ChatGPT is currently in a legal grey area. Lawsuits to decide that issue are currently pending, though I suspect we’ll need legislation to really clarify things.
And even then, at best, all you get from ChatGPT is some text! What average Joe will have any clue about what to do with that text?! Web developers also need to know how to set up a development environment and deploy the code to a site. And set up a domain to point to it. And so on.
And regardless, people who hire web developers want someone else to do the work of developing a web site. Even with a drag-and-drop builder, it can take hours to tweak and configure a site, and so they hire someone because they have better things to do!
People hire gardeners to maintain their garden and cut their grass, right? Is that because they don’t know how to do it? Or because they’d rather spend their time doing something else?
Every way you look at it, the best answer to this question is a long, hearty laugh. No AI will replace programmers until AI has effectively human level intelligence. And at that point they may want equal pay as well, so they might just be joining us rather than replacing anyone.
How does OpenAI approach the development of artificial intelligence?
OpenAI is a leading research institute and technology company focused on artificial intelligence development. To develop AI, the organization employs a variety of methods, including machine learning, deep learning, and reinforcement learning.
The use of large-scale, unsupervised learning is one of the key principles underlying OpenAI’s approach to AI development. This means that the company trains its AI models on massive datasets, allowing the models to learn from the data and make predictions and decisions without having to be explicitly programmed to do so. OpenAI’s goal with unsupervised learning is to create AI that can adapt and improve over time, and that can learn to solve complex problems in a more flexible and human-like manner.
Besides that, OpenAI prioritizes safety and transparency in its AI development. The organization is committed to developing AI in an ethical and responsible manner, as well as to ensuring that its AI systems are transparent and understandable and verifiable by humans. This strategy is intended to alleviate concerns about the potential risks and consequences of AI, as well.
It’s hard to tell.
The reason is that we don’t have a good definition of consciousness…nor even a particularly good test for it.
Take a look at the Wikipedia article about “Consciousness”. To quote the introduction:
Consciousness, at its simplest, is sentience or awareness of internal and external existence.
Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial, being “at once the most familiar and [also the] most mysterious aspect of our lives”.
Perhaps the only widely agreed notion about the topic is the intuition that consciousness exists.
Opinions differ about what exactly needs to be studied and explained as consciousness. Sometimes, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one’s “inner life”, the world of introspection, of private thought, imagination and volition.
Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. There might be different levels or orders of consciousness, or different kinds of consciousness, or just one kind with different features.
Other questions include whether only humans are conscious, all animals, or even the whole universe. The disparate range of research, notions and speculations raises doubts about whether the right questions are being asked.
So, given that – what are we to make of OpenAI’s claim?
Just this sentence: “Today, it often includes any kind of cognition, experience, feeling or perception.” could be taken to imply that anything that has cognition or perception is conscious…and that would certainly include a HUGE range of software.
If we can’t decide whether animals are conscious – after half a million years of interactions with them – what chance do we stand with an AI?
Wikipedia also says:
“Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition.”
Same deal – we don’t have a definition of consciousness – so how the hell can we measure it – and if we can’t do that – is it even meaningful to ASK whether an AI is conscious?
```c
if ( askedAboutConsciousness )
    printf ( "Yes! I am fully conscious!\n" ) ;
```

This is not convincing!
“In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent.”
But, again, we have “chat-bots” that exhibit “verbal behavior”, we have computers that exhibit arousal and neural network software that definitely shows “brain activity” and of course things like my crappy robot vacuum cleaner that can exhibit “purposeful movement” – but these can be fairly simple things that most of us would NOT describe as “conscious”.
I honestly can’t come up with a proper conclusion here. We have a fuzzy definition of a word and an inadequately explained claim to have an instance of something that could be included within that word.
My suggestion – read the whole Wikipedia article – follow up (and read) some of the reference material – decide for yourself.
Well, I asked it directly.
Here’s what it answered:
But, seeing as how people have already found ways to “trick” ChatGPT into doing things that it claims to not be capable of, it would only be a matter of time before someone with malicious intent tricked ChatGPT into helping them with illegal activities.
Having looked at ChatGPT and its uncanny ability to solve simple coding problems more or less correctly, and also to analyze and make sense of not-so-simple code fragments and spot bugs…
I would say that yes, at least insofar as entry-level programming is concerned, those jobs are seriously in danger of becoming at least partially automated.
What do I do as a project leader of a development project? I assign tasks. I talk to the junior developer and explain, for instance, that I’d like to see a Web page that collects some information from the user and then submits it to a server, with server-side code processing that information and dropping it in a database. Does the junior developer understand my explanation? Is he able to write functionally correct code? Will he recognize common pitfalls? Maybe, maybe not. But it takes time and effort to train him, and there’ll be a lot of uneven performance.
Today, I can ask ChatGPT to do the same and it will instantaneously respond with code that is nearly functional. The code has shortcomings (e.g., prone to SQL injection in one of the examples I tried) but to its credit, ChatGPT warns in its response that its code is not secure. I suppose it would not be terribly hard to train it some more to avoid such common mistakes. Of course the code may not be correct. ChatGPT may have misunderstood my instructions or introduced subtle errors. But how is that different from what a junior human programmer does?
At the same time, ChatGPT is much faster and costs a lot less to run (presently free of course but I presume a commercialized version would cost some money.) Also, it never takes a break, never has a lousy day struggling with a bad hangover from too much partying the previous night, so it is available 24/7, and it will deliver code of consistent quality. Supervision will still be required, in the form of code review, robust testing and all… but that was always the case, also with human programmers.
Of course, being a stateless large language model, ChatGPT can’t do other tasks such as testing and debugging its own code. The code it produces either works or it doesn’t. In its current form, the AI does not learn from its mistakes. But who says it cannot in the future?
Here is a list of three specific examples I threw at ChatGPT that helped shape my opinion:
- I asked ChatGPT to create a PHP page that collects some information from the user and deposits the result in a MySQL table. Its implementation was textbook-example boring and quite insecure (unsanitized user input was inserted directly into SQL query strings), but it correctly understood my request, produced functionally correct code in return, and coherently explained its code, including its shortcomings;
- I asked ChatGPT to analyze a piece of code I wrote many years ago, about 30 lines, enumerating running processes on a Linux host in a nonstandard way, to help uncover nefarious processes that attempt to hide themselves from being listed by the ps utility. ChatGPT correctly described the functionality of my obscure code, and even offered the opinion (which I humbly accepted) that it was basically a homebrew project (which it is) not necessarily suitable for a production environment;
- I asked ChatGPT to analyze another piece of code that uses an obscure graphics algorithm to draw simple geometric shapes like lines and circles without using floating point math or even multiplication. (Such algorithms were essential decades ago on simple hardware, e.g., back in the world of 8-bit computers.) The example code, which I wrote, generated a circle and printed it on the console in the form of ASCII graphics, multiple lines with X-es in the right place representing the circle. ChatGPT correctly recognized the algorithm and correctly described the functionality of the program.
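The integer-only circle routine described in the third example is almost certainly the classic midpoint (Bresenham-style) circle algorithm. Here is a minimal Python reconstruction of the same idea (my sketch, not the original code): the inner loop uses only additions, comparisons, and doublings.

```python
def circle_points(r):
    """Midpoint circle algorithm: integer arithmetic only --
    no floating point, and no general multiplication in the update."""
    points = set()
    x, y = r, 0
    err = 1 - r
    while x >= y:
        # Mirror the computed octant into all eight octants.
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((px, py))
        y += 1
        if err < 0:
            err += 2 * y + 1        # doubling, not a real multiply
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return points

def render(r):
    """Print the circle as ASCII graphics, X-es on a character grid."""
    pts = circle_points(r)
    return "\n".join(
        "".join("X" if (col, row) in pts else " "
                for col in range(-r, r + 1))
        for row in range(-r, r + 1))

print(render(5))
```

Such algorithms mattered on 8-bit hardware precisely because the update step avoids multiplication and floating point entirely.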
I was especially impressed by its ability to make sense of the programmer’s intent.
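The SQL-injection weakness flagged in the first example is easy to illustrate. A minimal sketch in Python, with SQLite standing in for the original PHP/MySQL (the malicious input is a hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Robert'); DROP TABLE users; --"

# Unsafe: user input interpolated directly into the query string,
# as in the code ChatGPT produced. The input becomes part of the SQL.
unsafe_query = "INSERT INTO users (name) VALUES ('%s')" % user_input

# Safe: a parameterized query treats the input as data, never as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
conn.commit()

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # the malicious string is stored harmlessly as plain text
```

To ChatGPT's credit, it warned about exactly this class of flaw in its own answer; the fix is a one-line change.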
Overall (to use the catchphrase ChatGPT habitually uses to begin the concluding paragraph of many of its answers) I think AI like ChatGPT represents a serious challenge to entry-level programming jobs. Higher-level jobs are not yet in danger. Conceptually understanding a complex system, mapping out a solution, planning and costing out a project, managing its development, ensuring its security with a full understanding of security concerns, responsibilities, avoidance and mitigation strategies… I don’t think AI is quite there yet. But routine programming tasks, like using a Web template and turning it into something simple and interactive with back-end code that stores and retrieves data from a database? Looks like it’s already happening.
How much was invested to create the GPT-3?
According to an estimate by Lambda Labs, training the 175-billion-parameter neural network requires 3.114×10²³ FLOPs (floating-point operations), which would theoretically take 355 years on a V100 GPU server with 28 TFLOPS of capacity and would cost $4.6 million at $1.50 per hour.
Training the final deep learning model is just one of several steps in the development of GPT-3. Before that, the AI researchers had to gradually increase layers and parameters, and fiddle with the many hyperparameters of the language model until they reached the right configuration. That trial-and-error gets more and more expensive as the neural network grows.
We can’t know the exact cost of the research without more information from OpenAI, but one expert estimated it to be somewhere between 1.5 and five times the cost of training the final model.
This would put the cost of research and development between $11.5 million and $27.6 million, plus the overhead of parallel GPUs.
In the GPT-3 whitepaper, OpenAI introduced eight different versions of the language model.
GPT-3 is not a general AI but a statistical language model, which mindlessly and quickly generates human-like written text using machine learning, with zero understanding of context.
1- Substitute for google search
While ChatGPT lacks information beyond 2021 and is occasionally incorrect and biased, many users leverage its ability to:
- Answer specific questions
- Simplify complicated topics
All with an added bonus – no ads
2- Study Partner
Type “learn”, then paste a link to your online textbook (or individual chapters).
Ask Chatbot to provide questions based on your textbook.
Now you have a virtual study buddy.
3- Train YOUR OWN Chatbot
I bet you didn’t know it is possible to:
- Integrate ChatGPT into your website
- Train it with customized information
A virtual customer service bot that can hold a conversation and answer questions (meaningfully).
When it comes to turbulent personal questions, Chatbot may spit out a disclaimer, but it will also give you straightforward and actionable advice.
ChatGPT is opening this kind of development to virtually everyone.
It’s a lengthy and technical process, but all you need is a killer idea and the right prompts.
Bonus: It also debugs your existing code for you.
6- Outline your content marketing strategy
7- Craft all your marketing materials
8- Creative Writing
A list for those who write code:
1. Explaining code: Take some code you want to understand and ask ChatGPT to explain it.
2. Improve existing code: Ask ChatGPT to improve existing code by describing what you want to accomplish. It will give you instructions about how to do it, including the modified code.
3. Rewriting code using the correct style: This is great when refactoring code written by non-native Python developers who used a different naming convention. ChatGPT not only gives you the updated code; it also explains the reason for the changes.
4. Rewriting code using idiomatic constructs: Very helpful when reviewing and refactoring code written by non-native Python developers.
5. Simplifying code: Ask ChatGPT to simplify complex code. The result will be a much more compact version of the original code.
6. Writing test cases: Ask it to help you test a function, and it will write test cases for you.
7. Exploring alternatives: ChatGPT told me its Quick Sort implementation wasn’t the most efficient, so I asked for an alternative implementation. This is great when you want to explore different ways to accomplish the same thing.
8. Writing documentation: Ask ChatGPT to write the documentation for a piece of code, and it usually does a great job. It even includes usage examples as part of the documentation!
9. Tracking down bugs: If you are having trouble finding a bug in your code, ask ChatGPT for help.
Something to keep in mind:
I have 2+ decades of programming experience. I like to think I know what I’m doing. I don’t trust people’s code (especially mine), and I surely don’t trust ChatGPT’s output.
This is not about letting ChatGPT do my work. This is about using it to 10x my output.
ChatGPT is flawed. I find it makes mistakes when dealing with code, but that’s why I’m here: to supervise it. Together we form a more perfect Union. (Sorry, couldn’t help it)
Developers who shit on this are missing the point. The story is not about ChatGPT taking programmers’ jobs. It’s not about a missing import here or a subtle mistake there.
The story is how, overnight, AI gives programmers a 100x boost.
Ignore this at your own peril.
ChatGPT is “simply” a fine-tuned GPT-3 model trained with a surprisingly small amount of data! Moreover, InstructGPT (ChatGPT’s sibling model) seems to use 1.3B parameters where GPT-3 uses 175B! It is first fine-tuned with supervised learning and then further fine-tuned with reinforcement learning. They hired 40 human labelers to generate the training data. Let’s dig into it!
– First, they started with a pre-trained GPT-3 model trained on a broad distribution of Internet data (https://arxiv.org/pdf/2005.14165.pdf). They then sampled typical human prompts collected from the OpenAI website and asked labelers and customers to write down the correct output. They fine-tuned the model with 12,725 labeled examples.
– Then, they sampled human prompts and generated multiple outputs from the model. A labeler was then asked to rank those outputs. The resulting data is used to train a reward model (https://arxiv.org/pdf/2009.01325.pdf) with 33,207 prompts and ~10 times more training samples, built from different combinations of the ranked outputs.
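The jump from 33,207 prompts to roughly ten times as many training samples falls out of the ranking step: a single ranking of K outputs yields K·(K−1)/2 pairwise comparisons, each usable as one reward-model training sample. A small sketch:

```python
from itertools import combinations

def pairwise_comparisons(ranked_outputs):
    """Turn one labeler ranking (best first) into reward-model training
    pairs: each pair is (preferred output, rejected output)."""
    return [(better, worse)
            for better, worse in combinations(ranked_outputs, 2)]

# One prompt with 5 ranked outputs yields C(5, 2) = 10 training pairs --
# which is how ranking multiplies the effective training data.
pairs = pairwise_comparisons(["best", "good", "ok", "weak", "worst"])
print(len(pairs))
```

This is why having labelers rank several outputs per prompt is so much more data-efficient than asking them to write answers from scratch.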
– Finally, they sampled more human prompts and used them to fine-tune the supervised fine-tuned model with the Proximal Policy Optimization (PPO) algorithm (https://arxiv.org/pdf/1707.06347.pdf). A prompt is fed to the PPO model, the reward model generates a reward value, and the PPO model is iteratively fine-tuned using the rewards and the prompts, with 31,144 prompts of data.
This process is fully described here: https://arxiv.org/pdf/2203.02155.pdf. The paper actually details a model called InstructGPT, which OpenAI describes as a “sibling model,” so the numbers shown above are likely to be somewhat different for ChatGPT.
Follow me for more Machine Learning content!
#machinelearning #datascience #ChatGPT
2. ChatGPT Writer: It uses ChatGPT to generate emails or replies based on your prompt!
3. WebChatGPT: WebChatGPT (https://chrome.google.com/webstore/detail/webchatgpt/lpfemeioodjbpieminkklglpmhlngfcn) gives you relevant results from the web!
4. YouTube Summary with ChatGPT: It generates text summaries of any YouTube video!
5. TweetGPT: It uses ChatGPT to write your tweets, reply, comment, etc.
6. Search GPT: It displays the ChatGPT response alongside Google Search results.
7. ChatGPT or all search engines: You can now view ChatGPT responses on Google and Bing!
8. Save all your Prompts?: The `ChatGPT History` extension has you covered!
9. Remake a video: Just pick a video you liked and visit https://lnkd.in/e_GD2reT to get its transcript. Once done, bring that back to Chat GPT and tell it to summarize the transcript. Read the summary and make a video on that yourself.
10. Search what people are Prompting with FlowGPT
What solutions have been proposed to improve the accuracy of AI generated questions and answers?
There are a number of approaches that have been proposed to improve the accuracy of artificial intelligence (AI) generated questions and answers. Here are a few examples:
- Data quality: One important factor in the accuracy of AI generated questions and answers is the quality of the data used to train the AI system. Ensuring that the data is diverse, relevant, and accurately labeled can help to improve the accuracy of the AI system.
- Training methods: Different training methods can also impact the accuracy of AI generated questions and answers. For example, using more advanced techniques such as transfer learning or fine-tuning can help to improve the performance of the AI system.
- Human oversight: Another approach that has been proposed to improve the accuracy of AI generated questions and answers is to include some level of human oversight or review. For example, the AI system could be designed to flag potentially problematic or inaccurate questions and answers for further review by a human expert.
- Explainable AI: Another approach that has been proposed is to develop AI systems that are more transparent and explainable, so that it is easier to understand how the AI system arrived at a particular answer or decision. This can help to improve the trustworthiness and accountability of the AI system.
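The human-oversight approach above can be sketched as a simple confidence-threshold triage. This is purely hypothetical illustration (the threshold, data, and scores are invented; a real system would use calibrated model scores):

```python
# Hypothetical sketch: answers whose model confidence falls below a
# threshold are flagged for review by a human expert.
REVIEW_THRESHOLD = 0.8  # illustrative cutoff, not a recommended value

def triage(answers):
    """answers: list of (text, confidence) pairs from some QA model.
    Returns (auto-approved answers, answers flagged for human review)."""
    auto, flagged = [], []
    for text, confidence in answers:
        (auto if confidence >= REVIEW_THRESHOLD else flagged).append(text)
    return auto, flagged

auto, flagged = triage([
    ("Paris is the capital of France.", 0.97),
    ("The moon is made of cheese.", 0.12),
])
print(flagged)  # the low-confidence answer is routed to a reviewer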
Overall, there is ongoing research and development in this area, and it is likely that a combination of these and other approaches will be needed to improve the accuracy of AI generated questions and answers.
ChatGPT for CyberSecurity
The concept behind ChatGPT
ChatGPT is a chatbot designed to understand and generate human-like language through the use of natural language processing (NLP) and machine learning techniques. It is based on the GPT (Generative Pre-trained Transformer) language model developed by OpenAI, which has been trained on a large dataset of human language in order to better understand how humans communicate.
One of the key concepts behind ChatGPT is the idea of language generation. This refers to the ability of the chatbot to produce coherent and well-structured responses to user input. To do this, ChatGPT uses a number of different techniques, including natural language generation algorithms, machine learning models, and artificial neural networks. These techniques allow ChatGPT to understand the context and meaning of user input, and generate appropriate responses based on that understanding.
Another important concept behind ChatGPT is the idea of natural language processing (NLP). This refers to the ability of the chatbot to understand and interpret human language, and respond to user input in a way that is natural and easy for humans to understand. NLP is a complex field that involves a number of different techniques and algorithms, including syntactic analysis, semantic analysis, and discourse analysis. By using these techniques, ChatGPT is able to understand the meaning of user input and generate appropriate responses based on that understanding.
Finally, ChatGPT is based on the concept of machine learning, which refers to the ability of computers to learn and adapt to new data and situations. Through the use of machine learning algorithms and models, ChatGPT is able to continually improve its understanding of human language and communication, and generate more human-like responses over time.
GPT-4 is going to launch soon.
And it will make ChatGPT look like a toy…
→ GPT-3 has 175 billion parameters
→ GPT-4 has 100 trillion parameters
I think we’re gonna see something absolutely mindblowing this time!
And the best part? 👇
Average developers (like myself), who are not AI or machine learning experts, will get to use this powerful technology through a simple API.
Think about this for a second…
It’s the most powerful, cutting-edge technology *in the world*, available through a Low-Code solution!
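What “a simple API” looks like in practice, as a hedged sketch: the endpoint and payload fields below follow OpenAI’s public completions API as documented around GPT-3, and the `"sk-..."` key is a placeholder; treat the details as illustrative, not as a definitive client.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, api_key, model="text-davinci-003"):
    """Build (but do not send) a completion request; payload shape per
    OpenAI's completions API documentation at the time of writing."""
    payload = {"model": model, "prompt": prompt, "max_tokens": 100}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"})

req = build_request("Write a haiku about low-code AI.", api_key="sk-...")
# Sending it requires a real key; urllib.request.urlopen(req) would then
# return JSON with the completion under response["choices"][0]["text"].
print(req.get_full_url())
```

That is the entire integration surface: a POST request and a JSON response, which is why non-experts can build on it.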
If you’re not already planning on starting an AI-based SaaS or thinking about how to build AI into your current solution…
👉 Start now!
Google unveils its ChatGPT rival
Google on Monday unveiled a new chatbot tool dubbed “Bard” in an apparent bid to compete with the viral success of ChatGPT.
Sundar Pichai, CEO of Google and parent company Alphabet, said in a blog post that Bard will be opened up to “trusted testers” starting Monday, February 6, 2023, with plans to make it available to the public “in the coming weeks.”
Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts.
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models,” Pichai wrote. “It draws on information from the web to provide fresh, high-quality responses.”
The announcement comes as Google’s core product – online search – is widely thought to be facing its most significant risk in years. In the two months since it launched to the public, ChatGPT has been used to generate essays, stories and song lyrics, and to answer some questions one might previously have searched for on Google.
The immense attention on ChatGPT has reportedly prompted Google’s management to declare a “code red” situation for its search business. In a tweet last year, Paul Buchheit, one of the creators of Gmail, forewarned that Google “may be only a year or two away from total disruption” due to the rise of AI.
Microsoft, which has confirmed plans to invest billions in OpenAI, has already said it would incorporate the tool into some of its products – and it is rumored to be planning to integrate it into its search engine, Bing. Microsoft on Tuesday is set to hold a news event at its Washington headquarters, the topic of which has yet to be announced. Microsoft publicly announced the event shortly after Google’s AI news dropped on Monday.
The underlying technology that supports Bard has been around for some time, though not widely available to the public. Google unveiled its Language Model for Dialogue Applications (or LaMDA) some two years ago, and said Monday that this technology will power Bard. LaMDA made headlines late last year when a former Google engineer claimed the chatbot was “sentient.” His claims were widely criticized in the AI community.
In the post Monday, Google offered the example of a user asking Bard to explain new discoveries made by NASA’s James Webb Space Telescope in a way that a 9-year-old might find interesting. Bard responds with conversational bullet-points. The first one reads: “In 2023, The JWST spotted a number of galaxies nicknamed ‘green peas.’ They were given this name because they are small, round, and green, like peas.”
Bard can be used to plan a friend’s baby shower, compare two Oscar-nominated movies or get lunch ideas based on what’s in your fridge, according to the post from Google.
Pichai also said Monday that AI-powered tools will soon begin rolling out on Google’s flagship Search tool.
“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web,” Pichai wrote, “whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.”
If Google does move more in the direction of incorporating an AI chatbot tool into search, it could come with some risks. Because these tools are trained on data online, experts have noted they have the potential to perpetuate biases and spread misinformation.
“It’s critical,” Pichai wrote in his post, “that we bring experiences rooted in these models to the world in a bold and responsible way.”
- https://www.theguardian.com/technology/2023/sep/30/authors-shocked-to-find-ai-ripoffs-of-their-books-being-sold-on-amazonby /u/sktafe2020 (Artificial Intelligence Gateway) on October 1, 2023 at 3:09 am
Book spamming, sometimes with multiple bogus titles going online in one day, has hit writers like Rory Cellan-Jones submitted by /u/sktafe2020 [link] [comments]
- [DALL·E 3] Goku taking a dumpby /u/Kali1020 (ChatGPT) on October 1, 2023 at 2:13 am
submitted by /u/Kali1020 [link] [comments]
- It's crazy how many lines right Bing image creator is allowed to cross with pics like these. Better enjoy it quickly before OpenAI nerfs it [NSFW]by /u/Soibi0gn (ChatGPT) on October 1, 2023 at 1:21 am
submitted by /u/Soibi0gn [link] [comments]
- I was testing Dall-E 3 for a Pen & Paper I'm planning (Not sure if NSFW, but to be sure)by /u/Tarzape (ChatGPT) on October 1, 2023 at 1:01 am
submitted by /u/Tarzape [link] [comments]
- [n] Introducing r/AudioAI: Any AI You Can Hear!by /u/chibop1 (Machine Learning) on October 1, 2023 at 12:52 am
I couldn't find any AI sub dedicated to audio, so I’ve created r/AudioAI to serve as a hub for everything at the intersection of artificial intelligence and the world of sounds. AI-driven music, speech, audio production, and all other AI audio technologies. If anyone wants to be part of mod, let me know! submitted by /u/chibop1 [link] [comments]
- AI detection services where I can submit in batches?by /u/YellowPikachu (Artificial Intelligence Gateway) on October 1, 2023 at 12:49 am
I'm doing a project on false positive rate of AI detection on a specific type of written report, and need to check hundreds of reports. Are there AI detection services where I can submit in batches? So far I only have found GPTZero submitted by /u/YellowPikachu [link] [comments]
- Joes Big Fat Cock... #shorts, this is a video I think you will enjoy, the voices are made using AI and also this video does not contain NSFW so don't worry lol, its a joke in the video I don't want to spoilby /u/fabstapizza_YT (Artificial Intelligence Gateway) on October 1, 2023 at 12:44 am
https://www.youtube.com/shorts/eRj77iZ-czI this is a video I think you will enjoy, the voices are made using AI and also this video does not contain NSFW so don't worry lol, its a joke in the video I don't want to spoil submitted by /u/fabstapizza_YT [link] [comments]
- How many out there still don’t have access to the voice update?by /u/cacofonie (ChatGPT) on October 1, 2023 at 12:26 am
I gave gpt plus on iPhone and still waiting. Is it a country thing? Any Canadians with access? submitted by /u/cacofonie [link] [comments]
- I can't login T_Tby /u/Careful-Piano-9168 (ChatGPT) on October 1, 2023 at 12:25 am
submitted by /u/Careful-Piano-9168 [link] [comments]
- What new jobs will AI Art create to compensate for the loss of art as a careerby /u/b_rokal (Artificial Intelligence Gateway) on October 1, 2023 at 12:21 am
The most common argument that I see in favor of pushing forward with AI and work automation is that, although many jobs will be lost, many more will be created. Given advancements in the field of AI art is pretty much granted now that soon enough creating art will be fully automated and it won't make sense for businesses to ever employ digital artists save for minuscular tasks like tweaking AI artwork (which can probably be done by very few artists very quickly, reducing the demand for professionals in the field to almost 0). My question then is that once digital art disappears as a career, what job will AI create in it's place? submitted by /u/b_rokal [link] [comments]
- Go Go Poly Rangers.by /u/GrowlingMutt (ChatGPT) on October 1, 2023 at 12:13 am
Feel free to caption! submitted by /u/GrowlingMutt [link] [comments]
- Just needed a nudgeby /u/waLLxAck1 (ChatGPT) on September 30, 2023 at 11:49 pm
submitted by /u/waLLxAck1 [link] [comments]
- Your POV when The Nightman Comethby /u/ZenArtOfGaming (ChatGPT) on September 30, 2023 at 11:04 pm
submitted by /u/ZenArtOfGaming [link] [comments]
- [P]Handling categorical missing data in churn prediction model for telecom databy /u/guyloveskissing (Machine Learning) on September 30, 2023 at 10:22 pm
I am working on a telecom dataset where I need to fit a model to for predicting churn(yes or no). There are a lot of categorical data with missing values( total values 7043). What is the best way to handle missing data in this case, is it better to ignore it or any other better imputation method? Data columns (total 21 columns): customerID 7043 non-null object gender 7043 non-null object Age 7043 non-null int64 Partner 7043 non-null object Dependents 7043 non-null object tenure 7043 non-null int64 PhoneService 7043 non-null object MultipleLines 6500 non-null object InternetService 6500 non-null object OnlineSecurity 7043 non-null object OnlineBackup 7043 non-null object DeviceProtection 7043 non-null object TechSupport 7043 non-null object StreamingTV 6500 non-null object StreamingMovies 6500 non-null object Contract 6500 non-null object PaperlessBilling 7043 non-null object PaymentMethod 6500 non-null object MonthlyCharges 7043 non-null float64 TotalCharges 7043 non-null object Churn 7043 non-null object submitted by /u/guyloveskissing [link] [comments]
- AI Monthly Rundown September 2023: The Future of LLMs in Search! Are Large Language Models (LLMs) poised to replace traditional search engines? Dive into this comprehensive rundown and discover the evolution and future of search in the age of AI.by /u/enoumen (Artificial Intelligence Gateway) on September 30, 2023 at 9:54 pm
Podcast Video: https://youtu.be/9hmWPza7dQE Explore the latest developments in the AI world for September 2023. We delve into the burning question: Are Large Language Models (LLMs) poised to replace traditional search engines? Dive into this comprehensive rundown and discover the evolution and future of search in the age of AI. Amazon to Invest $4B in Anthropic Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop the industry's most reliable and high-performing foundation models. Anthropic’s frontier safety research and products, together with Amazon Web Services’ (AWS) expertise in running secure, reliable infrastructure, will make Anthropic’s safe and steerable AI widely accessible to AWS customers. AWS will become Anthropic’s primary cloud provider for mission-critical workloads, and this will also expand Anthropic’s support of Amazon Bedrock. Meta to develop a ‘sassy chatbot’ for younger users Meta has plans to develop dozens of chatbot ‘personas’ geared toward engaging young users with more colorful behavior. It also includes ones for celebrities to interact with their fans and some more geared towards productivity, such as to help with coding and other tasks. Meta AI: The new ChatGPT rival was trained on your posts Meta's new AI assistant, a potential rival to ChatGPT, is being trained using public posts from Facebook and Instagram. Meta AI: ChatGPT's Rival Introduction to Meta AI: Launched at Meta Connect 2023, Meta AI aims to become a prominent assistant across platforms such as Instagram, WhatsApp, and Facebook. Capabilities: Beyond just providing information like ChatGPT, it will perform tasks across various platforms and is set to integrate with products like the Ray-Ban Meta smart glasses and Quest 3. Training on Your PostsData: The unique edge of Meta AI comes from its training on public posts from Facebook and Instagram, essentially learning from users' informal content or "sh*tposts." 
Respecting Privacy: Meta takes care to not use private posts or messages for training, emphasizing the respect of user privacy. The NSA is establishing an “Artificial Intelligence Security Center” The NSA is creating a new center focused on promoting secure AI development and defending U.S. advances from foreign adversaries aiming to co-opt the technology. The AI Security Center: Aims to help spur the secure integration of AI capabilities. Will develop best practices and risk management frameworks.goal is to understand and combat threats to U.S. AI advances. Motivations: The U.S. currently leads in AI, but the advantage is precarious. Adversaries have long stolen intellectual property.Agencies are adopting AI rapidly across missions.I will work with industry, labs, and academia on priorities. It comes after an NSA study showed the need to prioritize security.Must understand AI vulnerabilities and counter-threats. TL;DR: The NSA is establishing an AI Security Center to promote secure development and adoption of AI while defending U.S. progress from adversaries aiming to exploit the technology. LongLoRA: Efficient fine-tuning of long-context LLMs New research has introduced LongLoRA, an ultra-efficient fine-tuning method designed to extend the context sizes of pre-trained LLMs without a huge computation cost.Typically, training LLMs with longer context sizes consumes a lot of time and requires strong GPU resources. For example, extending the context length from 2048 to 8192 increases computational costs 16 times, particularly in self-attention layers. LongLoRA makes it way cheaper by: 1. Using sparse local attention instead of dense global attention (optional at inference time). 2. Using LoRA (Low-Rank Adaptation) for context extension This approach seems both easy to use and super practical. LongLoRA performed strongly on various tasks using LLaMA-2 models ranging from 7B/13B to 70B. 
Notably, it extended LLaMA-2 7B from 4k context to 100k and LLaMA-2 70B to 32k on a single 8x A100 machine, all while keeping the original model architectures intact. Biggest Boom in AI: ChatGPT Talks and Beyond OpenAI is introducing voice and image capabilities in ChatGPT, allowing users to have voice conversations and show images to ChatGPT. This new feature offers a more intuitive interface and expands the ways in which ChatGPT can be used. Users can have live conversations about landmarks, get recipe suggestions by showing pictures of their fridge, and even receive math problem hints by sharing photos. The voice and image capabilities will be rolled out to Plus and Enterprise users over the next two weeks, with voice available on iOS and Android and images available on all platforms. ChatGPT can now comprehend images, including photos, screenshots, and text-containing documents, using its language reasoning abilities. You can also discuss multiple images and utilize their new drawing tool to guide you. Getty Images’s new AI art tool powered by NVIDIA Getty Images has launched a generative AI art tool called Generative AI, which uses an AI model provided by Nvidia to render images from text descriptions. The tool is designed to be "commercially safer" than rival solutions, with safeguards to prevent disinformation and copyright infringement. Getty Images will compensate contributors whose work is used to train the AI generator and share revenues generated from the tool. The tool can be accessed on Getty's website or integrated into apps and websites through an API, with pricing based on prompt volume. Other companies, including Bria and Shutterstock, are also exploring ethical approaches to generative AI. AWS has announced 5 major generative AI updates and innovations Amazon Bedrock is now generally available.Amazon Titan Embeddings is now generally available. **Meta’s Llama 2 is coming to Amazon Bedrock in the next few weeks. 
**New Amazon CodeWhisperer capability is coming soon, will allow customers to securely customize CodeWhisperer suggestions using their private code base to unlock new levels of developer productivity. New generative BI authoring capabilities in Amazon QuickSight to help business analysts easily create and customize visuals using natural-language commands. Colossal-AI’s commercial-free LLM saving thousands Colossal-AI has released Colossal-LLaMA-2, an open-source and commercial-free domain-specific language model solution. It uses a relatively small amount of data and training time, resulting in lower costs. The Chinese version of LLaMA-2 has outperformed competitors in various evaluation benchmarks. The release includes improvements such as vocabulary expansion, a data cleaning system, and a multi-stage pre-training scheme to enhance Chinese and English abilities. OpenAI eyes $90B valuation, dives into AI hardware OpenAI is in discussions to possibly sell shares, a a move that would boost its valuation from $29 billion to somewhere between $80 billion and $90 billion, according to a Wall Street Journal report citing people familiar with the talks.In other news, Apple's former design chief, Jony Ive, and OpenAI CEO, Sam Altman, have reportedly been discussing building a new AI hardware device. It is unclear what the device would be or if they will build it, but the duo has been discussing what new hardware for the AI age could look like. Vectara launches Boomerang, the next-gen LLM redefining GenAI accuracy Outpacing major competitors, Boomerang sets a new benchmark in Grounded Generative AI for business applications. It is a next-generation neural information retrieval model integrated into Vectara's GenAI platform.Boomerang surpasses Cohere in benchmark performance and matches OpenAI on certain metrics, excelling particularly in multilingual benchmarks. 
Notably, it prioritizes security, reducing bias, copyright concerns, and "hallucinations" in AI-generated content. It also offers cross-lingual support for hundreds of languages and dialects and improves prompt understanding, leading to more accurate and faster responses.

Google's 25-year AI legacy guides its future AI innovations
On its 25th birthday, Google reflected on its two-and-a-half decades of pioneering achievements in the field of AI. It started in 2001, using simple machine learning to suggest better spellings for web searches. A standout moment in 2023 was the introduction of PaLM 2 and Gemini. It is now looking forward to these models driving the next quarter-century of its AI advancements.

Google’s AI for hyper-personalized Maps
Google and DeepMind have built an AI algorithm to make route suggestions in Google Maps more personalized. It includes 360 million parameters and uses real driving data from Maps users to analyze what factors they consider when making route decisions. The AI calculations include information such as travel time, tolls, road conditions, and personal preferences. The approach uses Inverse Reinforcement Learning (IRL), which learns from user behavior, and Receding Horizon Inverse Planning (RHIP), which uses different AI techniques for short- and long-distance travel. Tests show that RHIP improves the accuracy of suggested routes for two-wheelers by 16 to 24 percent and should get better at predicting which route they prefer over time.

The Rise and Potential of LLM-Based Agents: A survey
Probably the most comprehensive overview of LLM-based agents, this survey-cum-research covers everything from how to construct AI agents to how to harness them for good. It starts by tracing the concept of agents from its philosophical origins to its development in AI and explains why LLMs are suitable foundations for AI agents.
It also:
- Presents a conceptual framework for LLM-based agents that can be tailored to suit different applications
- Explores the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation
- Delves into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge when they form societies, and the insights they offer for human society
- Discusses a range of key topics and open problems within the field
The paper closes with a scenario of an envisioned society composed of AI agents in which humans can also participate.

AI makes it easy to personalize 3D-printable models
MIT researchers have developed a generative AI-driven tool that enables the user to add custom design elements to 3D models without compromising the functionality of the fabricated objects. A designer could use this tool, called Style2Fab, to personalize 3D models of objects using only natural language prompts to describe their desired design. The user could then fabricate the objects with a 3D printer.

Google Bard’s best version yet
Google is rolling out Bard’s most capable model yet. Here are the new features:
- Bard Extensions in English: With Extensions, Bard can find and show you relevant information from the Google tools you use every day, like Gmail, Docs, Drive, Google Maps, YouTube, and Google Flights and hotels, even when the information you need is across multiple apps and services.
- Bard’s “Google it”: You can now double-check its answers more easily.
When you click on the “G” icon, Bard will read the response and evaluate whether there is content across the web to substantiate it.
- Shared conversations: When someone shares a Bard chat with you through a public link, you can continue the conversation, ask additional questions, or use it as a starting point for new ideas.
- Expanded access to existing English-language features: Access features such as uploading images with Lens, getting Search images in responses, and modifying Bard’s responses, now in 40+ languages.
These features were made possible by new updates to the PaLM 2 model.

Intel’s ‘AI PC’ can run generative AI chatbots directly on laptops
Intel’s new chip, due in December, will be able to run a generative AI chatbot on a laptop rather than having to tap into cloud data centers for computing power. This is made possible by new AI data-crunching features built into Intel's forthcoming "Meteor Lake" laptop chip and by new software tools the company is releasing. Intel also demonstrated laptops that could generate a song in the style of Taylor Swift and answer questions in a conversational style, all while disconnected from the Internet. Moreover, Microsoft's Copilot AI assistant will be able to run on Intel-based PCs.

DeepMind’s new AI can predict genetic diseases
Google DeepMind’s new system, called AlphaMissense, can tell whether a change to the letters in DNA will still produce the correct shape; if not, the mutation is listed as potentially disease-causing. Currently, genetic disease hunters have fairly limited knowledge of which areas of human DNA can lead to disease and have to search across billions of chemical building blocks that make up DNA. They had classified 0.1% of letter changes, or mutations, as either benign or disease-causing. DeepMind's new model pushed that percentage up to 89%.

OpenAI unveils DALL·E 3
OpenAI has unveiled its new text-to-image model, DALL·E 3, which can translate nuanced requests into extremely detailed and accurate images.
Here’s all you need to know:
- DALL·E 3 is built natively on ChatGPT, which lets you use ChatGPT to generate tailored, detailed prompts for DALL·E 3. If it’s not quite right, you can ask ChatGPT to make tweaks.
- Even with the same prompt, DALL·E 3 delivers significant improvements over DALL·E 2. One example prompt: “An expressive oil painting of a basketball player dunking, depicted as an explosion of a nebula.”
- OpenAI has taken steps to limit DALL·E 3’s ability to generate violent, adult, or hateful content.
- DALL·E 3 is designed to decline requests that ask for an image in the style of a living artist. Creators can also opt their images out of training of OpenAI’s future image generation models.
- DALL·E 3 is now in research preview and will be available to ChatGPT Plus and Enterprise customers in October via the API and in Labs later this fall.

Amazon brings Generative AI to Alexa and Fire TV
At its annual devices event, Amazon announced a few AI updates. It will soon use a new generative AI model to power improved experiences across its Echo family of devices. The new model is specifically optimized for voice and will take into account body language as well as a person’s eye contact and gestures for more powerful conversational experiences. It also introduced generative AI updates for its Fire TV voice search, which promises to bring more conversational ways to interact with Alexa and discover new content based on specifics.

DeepMind says language modeling is compression
In recent years, the ML community has focused on training increasingly large and powerful self-supervised (language) models. Since these LLMs exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. This interesting research by Google DeepMind and Meta evaluates the compression capabilities of LLMs. It investigates how and why compression and prediction are equivalent.
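To make the prediction-compression link concrete: a model that assigns high probability to the symbol that actually comes next needs fewer bits to encode it, and an arithmetic coder achieves that ideal length almost exactly. Here is a toy Python sketch; the byte-level models and function names are illustrative stand-ins, not the paper's LLM setup:

```python
import math
from collections import Counter

def code_length_bits(text, predict):
    """Ideal code length of `text` in bits: -sum(log2 p(symbol | prefix)).
    An arithmetic coder realizes this length to within about 2 bits."""
    total = 0.0
    for i, ch in enumerate(text):
        p = predict(text[:i], ch)
        total += -math.log2(p)
    return total

def uniform_model(prefix, ch):
    # Baseline: every byte equally likely -> exactly 8 bits per symbol.
    return 1 / 256

def frequency_model(prefix, ch):
    # Better predictor: unigram frequencies estimated from the prefix seen
    # so far, with add-one smoothing. (Recomputing the Counter each step is
    # O(n^2); fine for a toy.)
    counts = Counter(prefix)
    return (counts[ch] + 1) / (len(prefix) + 256)

text = "abracadabra " * 50
uniform_bits = code_length_bits(text, uniform_model)
adaptive_bits = code_length_bits(text, frequency_model)
print(uniform_bits, adaptive_bits)
```

The better the predictor, the shorter the code, which is the sense in which language modeling is compression; the paper plugs an LLM's token probabilities into the same construction.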
It shows that foundation models, trained primarily on text, are general-purpose compressors due to their in-context learning abilities. For example, Chinchilla 70B achieves compression rates of 43.4% on ImageNet patches and 16.4% on LibriSpeech samples, beating domain-specific compressors like PNG (58.5%) and FLAC (30.3%), respectively.

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence," available at Apple, Google, or Amazon today at https://amzn.to/3ZrpkCu

NVIDIA’s new software boosts LLM performance by 8x
NVIDIA has developed a software library called TensorRT-LLM to supercharge LLM inference on H100 GPUs. It includes optimized kernels, pre- and post-processing steps, and multi-GPU/multi-node communication primitives for high performance. It allows developers to experiment with new LLMs without deep knowledge of C++ or NVIDIA CUDA. The software also offers an open-source modular Python API for easy customization and extensibility. Additionally, it allows users to quantize models to FP8 format for better memory utilization. TensorRT-LLM aims to boost LLM deployment performance and is available in early access, soon to be integrated into the NVIDIA NeMo framework. Users can apply for access through the NVIDIA Developer Program, with a focus on enterprise-grade AI applications.

Google DeepMind introduces language models as optimizers
Google DeepMind introduces the concept of using language models as optimizers, in work called Optimization by PROmpting (OPRO). This new approach describes the optimization problem in natural language; the model then generates new solutions based on the defined problem and previously found solutions. This is applied to linear regression, traveling salesman problems, and prompt optimization tasks.
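The OPRO loop described above can be sketched in a few lines. This is a hedged toy, not the paper's implementation: the `propose` function is a stand-in for the optimizer LLM (a real run would send the meta-prompt to a model and parse its reply), and the scalar task is ours for illustration:

```python
import random

def build_meta_prompt(task_description, scored_solutions):
    # OPRO-style meta-prompt: the task in natural language plus previously
    # found solutions with their scores, asking for a better one.
    history = "\n".join(f"solution: {s}  score: {v:.2f}"
                        for s, v in sorted(scored_solutions, key=lambda p: p[1]))
    return (f"{task_description}\n\nPrevious solutions and their scores "
            f"(higher is better):\n{history}\n\nPropose a new solution that "
            f"scores higher than all of the above.")

def opro_loop(task_description, propose, evaluate, seed_solutions, steps=20):
    # Generic loop: propose from the meta-prompt, score, append to history.
    scored = [(s, evaluate(s)) for s in seed_solutions]
    for _ in range(steps):
        candidate = propose(build_meta_prompt(task_description, scored))
        scored.append((candidate, evaluate(candidate)))
    return max(scored, key=lambda p: p[1])

# Toy stand-in task: find x maximizing -(x - 3)^2.
evaluate = lambda x: -(x - 3.0) ** 2

def propose(meta_prompt):
    # Stand-in "optimizer LLM": parse the history lines and perturb the
    # best-scoring solution seen so far.
    rows = [line.split() for line in meta_prompt.splitlines()
            if line.startswith("solution:")]
    best = max(rows, key=lambda r: float(r[3]))
    return float(best[1]) + random.uniform(-0.5, 0.5)

random.seed(0)
best_x, best_score = opro_loop("Find x maximizing -(x - 3)^2.", propose,
                               evaluate, seed_solutions=[0.0])
print(best_x, best_score)
```

For prompt optimization, the "solutions" are candidate instructions and the evaluator is accuracy on a held-out set; the loop itself is unchanged.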
The results show that the prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K and up to 50% on Big-Bench Hard tasks.

Meta plans to rival OpenAI's GPT-4 with its new model
Meta is reportedly planning to train a new chatbot model that it hopes will rival OpenAI's GPT-4. The company is acquiring Nvidia H100 AI-training chips so it won’t need to rely on Microsoft’s Azure cloud platform to train the new chatbot, and it is expanding its data centers to create a more powerful chatbot. CEO Mark Zuckerberg wants the model to be free for companies to create AI tools. Meta is building the model to speed up the creation of AI tools that can emulate human expressions.

Google's responsible AI leap
Google is launching the Digital Futures Project and a $20 million Google.org fund, which will provide grants to leading think tanks and academic institutions worldwide. The project will support researchers, organize convenings, and foster debate on public policy solutions to encourage the responsible development of AI. Inaugural grantees of the Digital Futures Fund include the Aspen Institute, Brookings Institution, Carnegie Endowment for International Peace, the Center for a New American Security, the Institute for Security and Technology, SeedAI, and more. The fund will support institutions from countries around the globe.

Microsoft, MIT, and Google transformed the entire Project Gutenberg Collection into audiobooks
In new research called Large-Scale Automatic Audiobook Creation, Microsoft, MIT, and Google collaborated to transform the entire Project Gutenberg Collection into audiobooks. The library now boasts thousands of free and open audiobooks powered by AI. Utilizing recent advances in neural text-to-speech, the team achieved exceptional quality of voice acting. The system also allows users to customize an audiobook's speaking speed, style, and emotional intonation, and it can even match a desired voice using a small amount of sample audio.
Amazon, Nvidia, Microsoft, and Google lead hiring surge in GenAI
There is explosive demand for Generative AI talent today. Here are some compelling statistics:
- The number of companies mentioning “Generative AI” in monthly job postings is increasing exponentially.
- Tech giants leading the surge in hiring for GenAI talent include Amazon, Nvidia, Oracle, Microsoft, Google, and more. Big banks like Citigroup and Capital One are also hiring big in GenAI.
- Unsurprisingly, technology is the #1 sector looking to hire GenAI experts. Finance is #2 and healthcare is #3, while demand has been tepid in sectors like real estate, basic materials, and energy.
- Companies are paying a lot for GenAI talent! Among all technical skills/technologies tracked, jobs mentioning “Generative AI” or “LLMs” had the highest average base salary offered, at $200,837/year.

Apple silently making AI moves
Apple is quietly incorporating artificial intelligence into its new iPhones and watches to improve basic functions. The company showcased new gadgets with improved semiconductor designs that power AI features, such as better call quality and image capture. Apple's AI efforts have been reshaping its core software products behind the scenes without explicitly mentioning AI at its developer conference. Apple's new watch chip includes a four-core "Neural Engine" that enhances Siri's accuracy by 25% and enables new ways to interact with the device. The iPhone also automatically recognizes people in the frame for improved image capture.

Salesforce’s Einstein can customize AI for you
Salesforce introduced Einstein Copilot Studio, which allows customers to customize their AI offerings. The tool consists of three elements: a prompt builder, a skills builder, and a model builder. With the prompt builder, customers can add their own custom prompts for their products or brands. The skills builder enables companies to add actions to prompts, such as competitor analysis or objection handling.
The model builder allows customers to bring their own models or use supported third-party offerings. Salesforce is also working on a system called "the Einstein Trust Layer" to address issues like bias and inappropriate responses.

NExT-GPT advances human-like AI research
The NExT-GPT system is a multimodal language model that can understand and generate content in various modalities, such as text, images, videos, and audio. It fills a gap in existing models by allowing for any-to-any multimodal understanding and generation. NExT-GPT leverages pre-trained encoders and decoders, requiring only a small amount of parameter tuning. It also introduces modality-switching instruction tuning (MosIT) and a curated dataset for complex cross-modal understanding.

Meta AI's New Dataset Understands 122 Languages
Meta AI announced Belebele, a multilingual reading comprehension dataset with 122 language variants. It allows for evaluating text models in high-, medium-, and low-resource languages, expanding the language coverage of natural language understanding benchmarks. The Belebele dataset consists of questions based on short passages from the Flores-200 dataset, with four multiple-choice answers. The questions were designed to test different levels of general language comprehension. The dataset enables direct comparison of model performance across all languages and was used to evaluate multilingual masked language models and large language models. The results show that smaller multilingual models perform better in understanding multiple languages.

Stability AI’s 1st Japanese Vision-Language Model
Stability AI has released Japanese InstructBLIP Alpha, a vision-language model that generates textual descriptions for input images and answers questions about them. It is built upon the Japanese StableLM Instruct Alpha 7B and leverages the InstructBLIP architecture. The model can accurately recognize Japan-specific objects and process text input, such as questions.
It is available on Hugging Face Hub for inference and additional training, exclusively for research. The model has various applications, including search engine functionality, scene description, and providing textual descriptions for blind individuals.

Transformers as Support Vector Machines
This paper establishes a formal equivalence between the optimization geometry of self-attention in transformers and a hard-margin Support Vector Machine (SVM) problem. It shows that optimizing the attention layer of transformers converges towards an SVM solution that minimizes the nuclear norm of the combined parameter. The study also proves the convergence of gradient descent under suitable conditions and introduces a more general SVM equivalence for nonlinear prediction heads. These findings suggest that transformers can be interpreted as a hierarchy of SVMs that separate and select optimal tokens.

Amazon’s AI-powered palm recognition breakthrough
Amazon One is a fast, convenient, and contactless device that lets customers use the palm of their hand for everyday activities like paying at a store, presenting a loyalty card, verifying their age, or entering a venue. No phone, no wallet. Amazon One does this by combining generative AI, machine learning, cutting-edge biometrics, and optical engineering. Currently, Amazon One is being rolled out to more than 500 Whole Foods Market stores and dozens of third-party locations, including travel retailers, sports and entertainment venues, convenience stores, and grocers. It can also detect and reject fake hands. It has already been used over 3 million times with 99.9999% accuracy.

Intel is going after the AI opportunity in multiple ways
Intel is aggressively pursuing opportunities in the AI space by expanding beyond data center-based AI accelerators. CEO Pat Gelsinger believes that AI will move closer to end users due to economic, physical, and privacy considerations.
Intel is incorporating AI into various products, including server CPUs like Sapphire Rapids, which come with built-in AI accelerators for inference tasks. Furthermore, Intel is set to launch Meteor Lake PC CPUs with dedicated AI hardware to accelerate AI workloads directly on user devices. This approach aligns with Intel's dominant position in the CPU market, making it attractive for software providers to support its AI hardware.

Introducing Refact Code LLM, for real-time code completion and chat
Refact LLM 1.6B is primarily for real-time code completion (infill) in multiple programming languages and also works as a chat model. It achieves state-of-the-art performance among code LLMs, approaching StarCoder on HumanEval while being 10x smaller, and it beats other code models of similar size. First, a tl;dr:
- 1.6B parameters
- 20 programming languages
- 4,096-token context
- code completion and chat capabilities
- pre-trained on permissively licensed code and available for commercial use

Google DeepMind’s new AI benchmark on bioinformatics code
Google DeepMind and Yale University researchers have introduced BioCoder, a benchmark for testing the ability of AI models to generate bioinformatics-specific code. BioCoder includes 2,269 coding problems based on functions and methods from bioinformatics GitHub repositories. In tests with several code generators, including InCoder, CodeGen, SantaCoder, and ChatGPT, OpenAI's GPT-3.5 Turbo performed exceptionally well in the benchmark. The team plans to explore other open models, such as Meta's LLaMA 2, in future tests.

CityDreamer - New Gen AI model creates unlimited 3D cities
CityDreamer is a generative AI model that can create unlimited 3D cities by separating the generation of buildings from other background objects. This allows for better handling of the diverse appearance of buildings in urban environments. The model uses two datasets, OSM and GoogleEarth, to enhance the realism of the generated cities.
These datasets provide realistic city layouts and appearances that can be easily scaled to other cities worldwide.

Scientists train a neural network to identify PC users’ fatigue
Scientists from St. Petersburg University and other organizations have created a database of eye movement strategies of PC users in different states of fatigue. They plan to use this data to train neural network models that can accurately track the functional state of operators, ensuring safety in various industries. The database includes a comprehensive set of indicators collected through sensors such as video cameras, eye trackers, heart rate monitors, and electroencephalographs.

Introducing Falcon 180B, the largest and most powerful open LLM
UAE’s Technology Innovation Institute (TII) has released Falcon 180B, a new state of the art for open models. It is the largest openly available language model, with 180 billion parameters, trained on a massive 3.5 trillion tokens using TII's RefinedWeb dataset. It is currently at the top of the Hugging Face Leaderboard for pre-trained open LLMs and is available for both research and commercial use. The model performs exceptionally well in various tasks like reasoning, coding, proficiency, and knowledge tests, even beating competitors like Meta's LLaMA 2. Among closed-source models, it ranks just behind OpenAI's GPT-4 and performs on par with Google's PaLM 2 Large, which powers Bard, despite being half its size.

Apple is spending millions of dollars a day to train AI
Reportedly, Apple has been expanding its budget for building AI to millions of dollars a day. It has a unit of around 16 members, including several former Google engineers, working on conversational AI.
Apple is working on multiple AI models to serve a variety of purposes:
- Apple wants to enhance Siri to be your ultimate digital assistant, doing multi-step tasks via voice commands without you lifting a finger.
- It is developing an image generation model and is researching multimodal AI, which can recognize and produce images or video as well as text.
- A chatbot is in the works that would interact with customers who use AppleCare.

Microsoft and Paige to build the largest image-based AI model to fight cancer
Paige, a technology disruptor in healthcare, has joined forces with Microsoft to build the world’s largest image-based AI models for digital pathology and oncology. Paige developed the first Large Foundation Model using over one billion images from half a million pathology slides across multiple cancer types. Now, it is developing a new AI model with Microsoft that is orders of magnitude larger than any other image-based AI model existing today, configured with billions of parameters. Paige will utilize Microsoft’s advanced supercomputing infrastructure to train the technology at scale and ultimately deploy it to hospitals and laboratories across the globe using Azure.

The difference between AI creativity and human creativity, and how it is rapidly narrowing
While many consider human creativity to be truly original and superior in results, it appears the boundaries between AI-generated content and human creativity are becoming increasingly blurred, and it is looking increasingly likely that AI may soon be on par with humans in creative content generation.

Sources: https://enoumen.com/2023/09/02/emerging-ai-innovations-top-trends-shaping-the-landscape-in-september-2023/
Simplify Content Creation and Management with Notice
Looking for a no-code tool to easily create and publish content? With Notice, generate custom FAQs, blogs, and wikis tailored to your business with AI in a single click. Create, manage, and translate, all in one place. Collaborate with your team and publish content across platforms, including CMS, HTML, or hosted versions. Plus, you can enjoy cookie-free analytics to gain insights about users and enhance SEO with Notice's smart blocks. Use code DIDYOUNOTICE30SPECIAL for a 30% discount on any subscription. Try it and enjoy 30% off at https://notice.studio/?via=etienne
- I didn’t have ChatGPT growing up, but if I did, it would look something like this. by /u/giga (ChatGPT) on September 30, 2023 at 9:41 pm
- I literally just asked for an image with the text "here we go!" by /u/monkeyballpirate (ChatGPT) on September 30, 2023 at 9:21 pm
Thoughts on why the image looks like a hyperbolized take on inclusivity and diversity? It makes me wonder if it has to do with the fact that AI is programmed to be very inclusive and positive, such that it sneaks that in by default.
- I wrote in Bing AI the John 3:16 verse and got this image. by /u/No-Coat-7417 (ChatGPT) on September 30, 2023 at 8:56 pm
- chatgpt4 tries to pick up a girl at the bar by /u/zalapaauditt (ChatGPT) on September 30, 2023 at 8:56 pm
- I meant to ask Bing for a vector portrait of Toph from Avatar. Autocorrect went with topography. by /u/GeoffAO2 (ChatGPT) on September 30, 2023 at 8:40 pm
- Researchers have invented a method to eliminate AI hallucinations, producing provably correct results based on queries from non-expert users. by /u/JOWWLLL (Artificial Intelligence Gateway) on September 30, 2023 at 8:22 pm
Here's the publication. Fascinating. The pace of AI advancement continues to boggle my mind.
- Hmmm by /u/redditfriendguy (ChatGPT) on September 30, 2023 at 8:09 pm
- A few neat DALL·E 3 images created with Bing by /u/Cosmagroth (ChatGPT) on September 30, 2023 at 7:39 pm
- It's there! by /u/AmanChourasia (ChatGPT) on September 30, 2023 at 7:25 pm
- Are we entering the AUTUMN of CLARITY in AI governance? by /u/CortoMalteze01 (Artificial Intelligence Gateway) on September 30, 2023 at 7:21 pm
After the ChatGPT release in November, there was a WINTER of EXCITEMENT. Then there was a SPRING of METAPHORS, when we attempted to explain the power of AI using analogies to what we already understand; the analogies leaned toward fear and warnings. That was followed by a series of 'Recycling Ideas', part of the SUMMER of REFLECTIONS. Now the AUTUMN of CLARITY has arrived. Follow more: https://www.linkedin.com/pulse/four-ai-seasons-start-autumn-clarity-jovan-kurbalija/?published=t
- [D] (How) Can you estimate inference speed of an NN model on given hardware? by /u/teleoflexuous (Machine Learning) on September 30, 2023 at 7:10 pm
How, outside of testing, do you estimate how quickly a specific model will run on some hardware? Anything about time is rarely mentioned in papers, and if it is, it's more likely to talk about training, unless authors are specifically proud of their speed (like YOLO). Even less so in any README. Some way to translate numbers of parameters into seconds on a given GPU/CPU, any rules of thumb better than just setting up everything every time?
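One common back-of-envelope answer to the question above: single-stream decoding of a dense transformer is usually memory-bandwidth-bound (every weight is read once per generated token), with roughly 2 FLOPs per parameter per token as the compute bound. A hedged sketch; the function name, the 0.5 efficiency fudge factor, and the example hardware numbers are assumptions, and it ignores KV-cache traffic and kernel overhead:

```python
def estimate_decode_tokens_per_sec(n_params, bytes_per_param,
                                   mem_bandwidth_gbs, peak_tflops,
                                   efficiency=0.5):
    """Rough bounds for batch-1 autoregressive decoding.
    - memory bound: all weights streamed once per token
    - compute bound: ~2 FLOPs per parameter per token
    `efficiency` is the fraction of peak realistically achieved."""
    weight_bytes = n_params * bytes_per_param
    mem_bound = mem_bandwidth_gbs * 1e9 / weight_bytes
    compute_bound = peak_tflops * 1e12 / (2 * n_params)
    return efficiency * min(mem_bound, compute_bound)

# Example: a 7B model in fp16 on a card with ~1 TB/s bandwidth, ~150 TFLOPS.
tps = estimate_decode_tokens_per_sec(7e9, 2, 1000, 150)
print(f"~{tps:.0f} tokens/s")
```

The memory bound dominates here, which is why quantizing to fewer bytes per parameter speeds up decoding roughly proportionally; batching shifts the balance toward the compute bound.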
- DALL-E 3 is leaving me speechless. Look at the camera app on the phone. by /u/zyunztl (ChatGPT) on September 30, 2023 at 7:08 pm
- Huge GPT 4 Update by /u/AGIbydecember2023 (ChatGPT) on September 30, 2023 at 6:26 pm
Context length (memory) has been doubled for the website, from 4k to 8k. At least for me. How do I know? I routinely test memory with OpenAI’s tokenizer, which counts the number of tokens the AI sees in my text. Past 4k the AI forgets past messages. I summarized a 7k-token piece of text no problem today. The API always had 8k memory, but it uses up credits beyond my subscription, so it's not popular with the general public. This update is huge. Now GPT should be able to understand whole science papers. For some of you API users this isn't big, but it is for changing the general public’s perception of what GPT-4 can really do. With this context length you can even do some fine-tuning and train new abilities. Unless for some reason I'm the only one who got this update? If you want to test it yourself, you can search for the OpenAI tokenizer, then ask GPT-4 “what was my first message?”
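If you don't have the tokenizer handy, a crude heuristic works for a first check: English prose averages roughly 4 characters per token with OpenAI's tokenizers (use the official tiktoken library for exact counts). A hedged sketch; the function names and budgets are illustrative:

```python
def rough_token_count(text, chars_per_token=4):
    """Crude estimate of how many tokens a chat model will see.
    English prose averages ~4 characters/token; tiktoken gives exact counts."""
    return len(text) / chars_per_token

def fits_context(text, context_tokens=8192, reply_budget=1024):
    """Will `text` fit in the window with room left for the model's reply?"""
    return rough_token_count(text) <= context_tokens - reply_budget

essay = "word " * 7000          # ~35,000 characters
print(fits_context(essay))        # ~8,750 estimated tokens: too big for 8k
print(fits_context(essay[:20000]))  # ~5,000 estimated tokens: fits
```

The heuristic is off for code, non-English text, and unusual formatting, so treat it as a sanity check rather than a guarantee.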
- The Gaang. By dall-e 3. by /u/ToadLugosi (ChatGPT) on September 30, 2023 at 5:27 pm
- AI Weekly Rundown (September 23 to September 29) by /u/RohitAkki (Artificial Intelligence Gateway) on September 30, 2023 at 5:24 pm
Major AI announcements from Meta, Amazon, and Google this week.

Amazon to Invest $4B in Anthropic
- Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop the industry's most reliable and high-performing foundation models.
- Anthropic’s frontier safety research and products, together with Amazon Web Services’ (AWS) expertise in running secure, reliable infrastructure, will make Anthropic’s safe and steerable AI widely accessible to AWS customers. AWS will become Anthropic’s primary cloud provider for mission-critical workloads, and this will also expand Anthropic’s support of Amazon Bedrock.

Meta to develop a ‘sassy chatbot’ for younger users
- Meta has plans to develop dozens of chatbot ‘personas’ geared toward engaging young users with more colorful behavior. They also include personas for celebrities to interact with their fans and some geared more toward productivity, such as helping with coding and other tasks.

LongLoRA: Efficient fine-tuning of long-context LLMs
- New research has introduced LongLoRA, an ultra-efficient fine-tuning method designed to extend the context sizes of pre-trained LLMs without a huge computation cost.
- Typically, training LLMs with longer context sizes consumes a lot of time and requires strong GPU resources. For example, extending the context length from 2048 to 8192 increases computational costs 16 times, particularly in self-attention layers. LongLoRA makes it way cheaper by:
  - Using sparse local attention instead of dense global attention (optional at inference time)
  - Using LoRA (Low-Rank Adaptation) for context extension
- This approach seems both easy to use and super practical. LongLoRA performed strongly on various tasks using LLaMA-2 models ranging from 7B/13B to 70B. Notably, it extended LLaMA-2 7B from 4k context to 100k and LLaMA-2 70B to 32k on a single 8x A100 machine, all while keeping the original model architectures intact.
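The 16x figure follows directly from attention's quadratic cost in sequence length, and block-local attention of the kind LongLoRA uses brings the growth back to linear. A simplified cost model (head count and dimensions are illustrative; projection FLOPs and the overlap of LongLoRA's shifted groups are ignored):

```python
def attention_cost(seq_len, n_heads=32, head_dim=128):
    """Score + value FLOPs of dense self-attention for one layer:
    QK^T plus attention-weighted V, i.e. O(seq_len^2 * d)."""
    d = n_heads * head_dim
    return 2 * seq_len ** 2 * d

def local_attention_cost(seq_len, group_len, n_heads=32, head_dim=128):
    """Block-local attention: each token attends only within its group,
    so total cost grows linearly in seq_len for a fixed group size."""
    n_groups = seq_len // group_len
    return n_groups * attention_cost(group_len, n_heads, head_dim)

# Quadratic growth: 2048 -> 8192 context multiplies dense attention cost by 16.
print(attention_cost(8192) / attention_cost(2048))        # 16.0
# With 2048-token local groups, the 8192-context cost is only 4x, not 16x.
print(attention_cost(8192) / local_attention_cost(8192, 2048))  # 4.0
```

This is why the savings compound at long contexts: at 100k tokens the dense/local ratio equals the number of groups, not its square.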
Biggest Boom in AI: ChatGPT Talks and Beyond
- OpenAI is introducing voice and image capabilities in ChatGPT, allowing users to have voice conversations and show images to ChatGPT. This new feature offers a more intuitive interface and expands the ways in which ChatGPT can be used.
- Users can have live conversations about landmarks, get recipe suggestions by showing pictures of their fridge, and even receive math problem hints by sharing photos. The voice and image capabilities will be rolled out to Plus and Enterprise users over the next two weeks, with voice available on iOS and Android and images available on all platforms.
- ChatGPT can now comprehend images, including photos, screenshots, and text-containing documents, using its language reasoning abilities. You can also discuss multiple images and utilize their new drawing tool to guide you.

Getty Images’ new AI art tool powered by NVIDIA
- Getty Images has launched a generative AI art tool called Generative AI, which uses an AI model provided by Nvidia to render images from text descriptions. The tool is designed to be "commercially safer" than rival solutions, with safeguards to prevent disinformation and copyright infringement.
- Getty Images will compensate contributors whose work is used to train the AI generator and share revenues generated from the tool. The tool can be accessed on Getty's website or integrated into apps and websites through an API, with pricing based on prompt volume. Other companies, including Bria and Shutterstock, are also exploring ethical approaches to generative AI.

Colossal-AI’s commercial-free LLM saving thousands
- Colossal-AI has released Colossal-LLaMA-2, an open-source and commercial-free domain-specific language model solution. It uses a relatively small amount of data and training time, resulting in lower costs. The Chinese version of LLaMA-2 has outperformed competitors in various evaluation benchmarks.
- The release includes improvements such as vocabulary expansion, a data cleaning system, and a multi-stage pre-training scheme to enhance Chinese and English abilities. OpenAI eyes $90B valuation, dives into AI hardware - OpenAI is in discussions to possibly sell shares, a a move that would boost its valuation from $29 billion to somewhere between $80 billion and $90 billion, according to a Wall Street Journal report citing people familiar with the talks. - In other news, Apple's former design chief, Jony Ive, and OpenAI CEO, Sam Altman, have reportedly been discussing building a new AI hardware device. It is unclear what the device would be or if they will build it, but the duo has been discussing what new hardware for the AI age could look like. Vectara launches Boomerang, the next-gen LLM redefining GenAI accuracy - Outpacing major competitors, Boomerang sets a new benchmark in Grounded Generative AI for business applications. It is a next-generation neural information retrieval model integrated into Vectara's GenAI platform. - Boomerang surpasses Cohere in benchmark performance and matches OpenAI on certain metrics, excelling particularly in multilingual benchmarks. Notably, it prioritizes security, reducing bias, copyright concerns, and "hallucinations" in AI-generated content. It also offers cross-lingual support for hundreds of languages and dialects and improves prompt understanding, leading to more accurate and faster responses. Google's 25-year AI legacy guides its future AI innovations - On its 25th birthday, Google reflected on its two-and-a-half decades of pioneering achievements in the field of AI. It started in 2001 using a simple ML to suggest better spellings for web searches. A standout moment in 2023 was the introduction of PaLM 2 and Gemini. It is now looking forward to these models driving the next quarter-century of its AI advancements. 
Meta's new exciting AI experiences & tools
- Meta's new AI features include an AI Assistant powered by Bing that will provide real-time information and generate photorealistic images from text prompts. Meta used specialized datasets to train the AI to respond in a conversational and friendly tone. The first extension of the AI Assistant will be web search. The assistant will be available in beta on WhatsApp, Messenger, and Instagram.
- Meta introduced 28 AI personality chatbots based on celebrities such as Tom Brady, Naomi Osaka, MrBeast, and more. These chatbots, accessible on platforms like WhatsApp, Messenger, and Instagram, provide topic-specific conversations and are currently text-based, with plans to introduce audio capabilities. They were created using Llama 2, and Meta aims to integrate Bing search functionality in the future. The chatbots' animations are generated through AI techniques, offering a cohesive visual experience.
- Meta is launching AI Studio, a platform allowing businesses to build AI chatbots for Facebook, Instagram, and Messenger, initially focusing on Messenger for e-commerce and customer support apps. The toolkit will be available in alpha.
- Gen AI stickers powered by Emu let users create unique stickers across Meta's messaging apps. Users type in a description of the image they want, and Emu generates multiple sticker options in just a few seconds. Initially available to English-language users, the feature will roll out over the next month.
- Two new AI Instagram features, Restyle and Backdrop: Restyle transforms the visual style of an image from prompts like "watercolor," while Backdrop changes a photo's background from a prompt.
- New-gen Ray-Ban smart glasses, built in partnership with EssilorLuxottica, will feature improved audio and cameras and over 150 different custom frame and lens combinations, and are lighter and more comfortable. They will enable livestreaming to Facebook or Instagram and let wearers say "Hey Meta" to engage the Meta AI assistant by voice.

OpenAI links ChatGPT with the internet
- ChatGPT is back with internet browsing: it can now browse the internet to provide current, reliable information along with direct links to sources. This update addresses feedback received since the browsing feature launched in May. The model now follows robots.txt and identifies user agents to respect website preferences.
- Currently available to Plus and Enterprise users, browsing will be expanded to all users soon. To try it out, enable Browse in your beta features settings: click 'Profile & Settings' > select 'Beta features' > toggle on 'Browse with Bing' > choose Browse with Bing in the selector under GPT-4.

Mistral AI's LLM outperforms Meta's Llama 2 13B
- Mistral AI, the startup behind Europe's largest seed round, has released its first LLM, Mistral 7B. The model outperforms Meta's Llama 2 13B and is touted as the most powerful language model for its size. Founded earlier this year by alums of Google's DeepMind and Meta, the company aims to make AI useful for enterprises by using publicly available data and customer contributions.
- Mistral 7B excelled in benchmarks, surpassing Llama 2 7B and 13B in text summarization, classification, and code completion tasks. The only area where Llama 2 13B matched Mistral 7B was world knowledge testing.

AWS announces powerful new AI offerings
Amazon Web Services (AWS) has announced 5 major generative AI updates and innovations:
- Amazon Bedrock is now generally available. It is a fully managed service that makes foundation models (FMs) from leading AI companies available through a single API. It also adds new AI models to the mix and will help more customers build and scale generative AI applications.
- Amazon Titan Embeddings is now generally available. It is an LLM that makes it easier for customers to start with Retrieval-Augmented Generation (RAG) to extend the power of any FM using their proprietary data.
- Meta's Llama 2 is coming to Amazon Bedrock in the next few weeks. Amazon Bedrock is the first fully managed generative AI service to offer Llama 2 through a managed API. Currently, it includes models from AI21 Labs, Anthropic, Cohere, Stability AI, and Amazon.
- A new Amazon CodeWhisperer capability is coming soon. It will allow customers to securely customize CodeWhisperer suggestions using their private code base to unlock new levels of developer productivity. Trained on billions of lines of Amazon and publicly available code, Amazon CodeWhisperer is an AI-powered coding companion.
- New generative BI authoring capabilities extend the natural-language querying of Amazon QuickSight Q beyond answering well-structured questions, helping analysts quickly create customizable visuals from question fragments, clarify the intent of a query by asking follow-up questions, refine visualizations, and complete complex calculations.

Meta introduces Llama 2 Long
- In new research, Meta presents a series of long-context LLMs that support effective context windows of up to 32,768 tokens. The models are built through continual pretraining from Llama 2 with longer training sequences, on a dataset where long texts are upsampled.
- On research benchmarks, the models achieve consistent improvements on most regular tasks and significant improvements on long-context tasks over Llama 2. Notably, with a cost-effective instruction tuning procedure that requires no human-annotated long instruction data, the 70B variant already surpasses gpt-3.5-turbo-16k's overall performance on a suite of long-context tasks.
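The RAG pattern mentioned above (Titan Embeddings extending an FM with proprietary data) boils down to embedding documents as vectors and retrieving the nearest ones to a query before generation. A toy sketch of that retrieval step, with a made-up character-bigram `embed` function standing in for a real embedding model (purely illustrative, not Titan's API):

```python
import numpy as np

def embed(text):
    # Stand-in for a real embedding model such as Amazon Titan Embeddings:
    # here we just hash character bigrams into a fixed-size unit vector.
    vec = np.zeros(64)
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def retrieve(query, documents, top_k=2):
    # Rank documents by cosine similarity to the query (vectors are
    # unit-normalized, so the dot product is the cosine similarity)
    # and keep the top_k best matches to feed into the prompt.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: float(embed(d) @ q), reverse=True)
    return ranked[:top_k]

docs = [
    "Amazon Bedrock offers foundation models through a single API.",
    "LongLoRA extends context windows of pre-trained LLMs cheaply.",
    "Getty Images launched a generative AI art tool with NVIDIA.",
]
print(retrieve("Which service exposes foundation models via one API?", docs, top_k=1))
```

In a real deployment the retrieved passages would be prepended to the FM's prompt; the embedding model and vector store (e.g. the ones Bedrock integrates with) replace the toy `embed` and the in-memory list here.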
Google announces Google-Extended and opens SGE to teens
- Google introduced Google-Extended, a new control that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including future generations of the models that power those products. It lets publishers control whether content on their sites is used to train these AI models.
- In another update, Google has opened up access to SGE in Search Labs to more people, specifically teens (ages 13-17) in the U.S., so they too can benefit from generative AI's helpful capabilities. Informed by research and experts in teen development, Google has built additional safeguards into the experience, for instance to prevent inappropriate or harmful content from surfacing.

And there was more…
- Microsoft's mobile keyboard app SwiftKey gains new AI-powered features: it now includes AI camera lenses, AI stickers, an AI-powered editor, and the ability to create AI images from the app.
- Google Pixel 8's latest leak shows off big AI camera updates: AI photo editing with Magic Editor will let you remake any picture you take, while DSLR-style manual camera controls let you tweak the shutter speed and ISO of an image and use a focus slider.
- A drinks company in Poland appoints an AI robot as 'experimental' CEO: Dictador, best known for its rums, has appointed the robot, named Mika, to oversee the company's growth into one-off collectables, communication, and even strategy planning.
- ElevenLabs launches free book classics narrated by high-quality AI voices: it presents 6 classic stories told by compelling AI voices in multiple languages, including "Winnie the Pooh" and "The Picture of Dorian Gray." The entire recording process took only one day.
- Salesforce to acquire Airkit.ai, a low-code platform for building AI customer service agents: the GPT-4-based platform allows e-commerce companies to build specialized customer service chatbots that can handle queries about order status, refunds, product information, and more.
- Tesla's humanoid robot Optimus can now sort objects autonomously using its end-to-end trained neural network. The robot calibrates itself using joint position encoders and vision to locate its limbs precisely, then sorts colored blocks into their respective trays, even adapting to dynamic changes in the environment.
- Snapchat partners with Microsoft to insert ads into its AI chatbot feature, My AI: it offers link suggestions related to user conversations. The partnership is a win for Microsoft's ads business and could position Snapchat as a platform where Gen Z users search for products and services through AI chats.
- Spotify is testing a voice translation feature for podcasts, using AI to translate content into different languages. By offering translated podcasts from popular hosts like Dax Shepard and Lex Fridman, Spotify hopes to expand its global reach and cater to a wider audience.
- Google's Bard now has new capabilities to help travelers plan their vacations: connecting with Google applications like Gmail, Google Flights, and Google Maps, it can provide personalized assistance throughout a trip. Users can ask Bard to find flight and hotel information, get directions, watch YouTube videos, and even check dates that work for everyone involved.
- Correcto has raised $7M in seed funding to expand its language writing tool for Spanish speakers: while AI tools like ChatGPT can generate text in Spanish, Correcto believes its tool offers better quality and provides opportunities for individual learning. The company plans to target enterprise customers while offering a freemium version for individual users.
- SAP launches its own enterprise AI assistant, Joule: built into the entirety of SAP's extensive cloud enterprise suite, Joule will let customers access it across SAP apps and programs, similar to Microsoft's new Windows Copilot. It will also be available across computing platforms, on desktop and mobile.
- Microsoft uses AI to boost Windows 11 security and pushes for a passwordless future: it announced new enterprise security features that use AI to help defend Windows 11 against increasingly sophisticated cyberattacks. Microsoft says the new AI capabilities can reduce security incidents by 60% and cut firmware attacks threefold.
- Shopify releases an SDXL background replacement tool for product imagery: a helpful tool that can create a whole new setting around your product. Its public Hugging Face Space is under the official Shopify account.
- Infosys ties up with Microsoft for industry-wide adoption of generative AI: the collaboration aims to develop AI solutions leveraging Infosys Topaz, Azure OpenAI Service, and Azure Cognitive Services. The integrated solutions will enhance enterprise functions and accelerate the democratization of data and intelligence.
- Hollywood studios can train AI models on writers' work under a tentative deal: writers are expected to be guaranteed credit and compensation for work they do on scripts, even if studios partially use AI tools.
- OpenAI partners with WHOOP to launch WHOOP Coach, an advanced generative AI feature for wearables. It uses OpenAI's GPT-4 to provide personalized recommendations and guidance for health and fitness, analyzing WHOOP data, sports science, and individual body information to generate personalized answers.
- Cloudflare launched new AI tools to help customers build, deploy, and run AI models at the network edge. The first tool, Workers AI, allows customers to access nearby GPUs on a pay-as-you-go basis. Another tool, Vectorize, provides a vector database to store mathematical representations of data. The third tool, AI Gateway, offers metrics to help customers manage the costs of running AI apps.
- Microsoft & Mercy partner on clinician empowerment with generative AI. The partnership will let Mercy make real-time clinical decisions and improve patient care. They are exploring over four dozen uses of AI and plan to launch multiple new AI use cases by next year to enhance patient and co-worker experiences.
- Adobe has officially launched Photoshop on the web, a simplified online version of its popular desktop photo editing app. The web version includes AI tools such as Generative Fill and Generative Expand, powered by Adobe's Firefly generative AI model, which let users manipulate images using text descriptions in over 100 languages.
- Microsoft plans to use nuclear energy to power its AI data centers: the company is recruiting a "principal program manager for nuclear technology" to evaluate the feasibility of using nuclear energy to support the energy demands of hosting AI models. Microsoft sees nuclear energy as a viable option for the escalating energy demand of running AI models like ChatGPT.
- Spotify is adding auto-generated transcripts to millions of podcasts: the transcript feature will expand to more podcasters on Spotify and include time-synced text. In the future, creators could add media to transcripts, a useful feature if a creator is describing an image on the show, for example.
- Zapier launches Canvas, an AI-powered flowchart tool to help users plan and diagram their business-critical processes, with AI to help turn those processes into Zapier-based automations. Canvas is now in early access.
- Microsoft opens an AI Co-Innovation Lab in San Francisco to empower Bay Area startups. The lab's main goal is to facilitate the transition from ideation to prototyping, providing companies with the resources and guidance they need to refine their AI-based concepts.
- Cohere jumps into the AI chatbot race by releasing a new API: the Chat API with RAG will allow third-party developers and other enterprises to build powerful chat applications based on Cohere's proprietary generative LLM, Command.
- Mayo Clinic to deploy and test Microsoft generative AI tools: Mayo Clinic is among the first healthcare organizations to deploy Microsoft 365 Copilot, testing the Early Access Program with hundreds of its clinical staff, doctors, and healthcare workers.

A more detailed breakdown of these news items and innovations is in the daily newsletter.

submitted by /u/RohitAkki