A Daily Chronicle of AI Innovations in December 2023

Master AI Machine Learning PRO
Elevate Your Career with AI & Machine Learning For Dummies PRO
Ready to accelerate your career in the fast-growing fields of AI and machine learning? Our app offers user-friendly tutorials and interactive exercises designed to boost your skills and make you stand out to employers. Whether you're aiming for a promotion or searching for a better job, AI & Machine Learning For Dummies PRO is your gateway to success. Start mastering the technologies shaping the future—download now and take the next step in your professional journey!

Download the AI & Machine Learning For Dummies PRO App:
iOS - Android
Our AI and Machine Learning For Dummies PRO App can help you ace top AI and Machine Learning certifications.

Navigating the Future: A Daily Chronicle of AI Innovations in December 2023.

Join us at ‘Navigating the Future,’ your premier destination for unparalleled perspectives on the swift progress and transformative changes in the Artificial Intelligence landscape throughout December 2023. In an era where technology is advancing faster than ever, we immerse ourselves in the AI universe to provide you with daily insights into groundbreaking developments, significant industry shifts, and the visionary thinkers forging our future. Embark with us on this exciting adventure as we uncover the wonders and significant achievements of AI, each and every day.

Ace the AWS Cloud Practitioner Certification CCP CLF-C02 Exam with GPT
Prepare and Ace the AWS Cloud Practitioner Certification CCP CLF-C02: FREE AWS CCP EXAM PREP GPT

AI – 2023, a year in review

Well, we are nearly at the end of one of my all-time favourite years on this planet. Here’s what’s happened in AI over the last 12 months.

January:

  • Microsoft’s staggering $10 Billion investment in OpenAI makes waves. (Link)

  • MIT researchers develop AI that predicts future lung cancer risk. (Link)

February:

  • ChatGPT reached 100 million unique users. (Link)
  • Google announced Bard, a conversational Gen AI chatbot powered by LaMDA. (Link)
  • Microsoft launched a new Bing Search Engine integrated with ChatGPT. (Link)
  • AWS joined forces with Hugging Face to empower AI developers. (Link)
  • Meta announced LLaMA, A 65B parameter LLM. (Link)
  • Spotify introduced their AI feature called “DJ.” (Link)
  • Snapchat announces their AI chatbot ‘My AI’. (Link)
  • OpenAI introduces ChatGPT Plus, a premium chatbot service.

March:

  • Adobe gets into the generative AI game with Firefly. (Link)
  • Canva introduced AI design tools focused on helping workplaces. (Link)
  • OpenAI announces GPT-4, accepting text + image inputs. (Link)
  • OpenAI has made available APIs for ChatGPT & launched Whisper. (Link)
  • HubSpot Introduced new AI tools to boost productivity and save time. (Link)
  • Google integrated AI into Google Workspace. (Link)
  • Microsoft combines the power of LLMs with your data. (Link)
  • GitHub launched its AI coding assistant, Copilot X. (Link)
  • Replit and Google Cloud partner to Advance Gen AI for Software Development. (Link)
  • Midjourney’s Version 5 is out! (Link)
  • Zoom released an AI-powered assistant, Zoom IQ. (Link)

  • Microsoft rolls out Copilot for Microsoft 365.

  • Google launches Bard, a ChatGPT competitor.

April:

  • AutoGPT is unveiled: a next-gen AI designed to perform tasks without human intervention. (Link)
  • Elon Musk was working on ‘TruthGPT.’ (Link)
  • Apple was building a paid AI health coach, which might arrive in 2024. (Link)
  • Meta released a new image recognition model, DINOv2. (Link)
  • Alibaba announces its LLM, ChatGPT Rival “Tongyi Qianwen”. (Link)
  • Amazon releases AI Code Generator – Amazon CodeWhisperer. (Link)
  • Google’s Project Magi: A team of 160 working on adding new features to the search engine. (Link)
  • Meta introduced: Segment Anything Model – SAM (Link)
  • NVIDIA Announces NeMo Guardrails to boost the safety of AI chatbots like ChatGPT. (Link)
  • Elon Musk and Steve Wozniak lead a petition against AI models surpassing GPT-4.

May:

  • Microsoft’s Windows 11 AI Copilot. (Link)
  • Sanctuary AI unveiled Phoenix™, its sixth-generation general-purpose robot. (Link)
  • Inflection AI Introduces Pi, the personal intelligence. (Link)
  • Stability AI released StableStudio, a new open-source variant of its DreamStudio. (Link)
  • OpenAI introduced the ChatGPT app for iOS. (Link)
  • Meta introduces ImageBind, a new AI research model. (Link)
  • Google unveils PaLM 2 AI language model. (Link)
  • Geoffrey Hinton, The Godfather of A.I., leaves Google and warns of danger ahead. (Link)
  • Samsung leads a corporate ban on Gen AI tools over security concerns.

  • OpenAI adds plugins and web browsing to ChatGPT.

  • Nvidia’s stock soars, nearing $1 Trillion market cap.

June:

  • Apple introduces Apple Vision Pro. (Link)
  • McKinsey’s study finds that AI could add up to $4.4 trillion a year to the global economy. (Link)
  • Runway’s Gen-2 officially released. (Link)

  • Accenture announces a colossal $3 billion AI investment.

July:

  • Apple trials a ChatGPT-like AI Chatbot, ‘Apple GPT’. (Link)
  • Meta introduces Llama2, the next-gen of open-source LLM. (Link)
  • Stack Overflow announced OverflowAI. (Link)
  • Anthropic released Claude 2, with 100K context capability. (Link)
  • Google is building an AI tool for journalists. (Link)
  • ChatGPT adds code interpretation and data analysis.

  • Stack Overflow sees traffic halved by Gen AI coding tools.

August:

  • OpenAI expands ChatGPT ‘Custom Instructions’ to free users. (Link)
  • YouTube runs a test with AI auto-generated video summaries. (Link)
  • MidJourney Introduces Vary Region Inpainting feature. (Link)
  • Meta’s SeamlessM4T can transcribe and translate close to 100 languages. (Link)
  • Tesla’s new powerful $300 million AI supercomputer is in town! (Link)
  • Salesforce leads a funding round valuing OpenAI rival Hugging Face at over $4 billion.

  • ChatGPT Enterprise launches for business use.

September:

  • OpenAI upgrades ChatGPT with web browsing capabilities. (Link)
  • Stability AI’s first product for music + sound effect generation, Stable Audio. (Link)
  • YouTube launched YouTube Create, a new app for mobile creators. (Link)
  • Coca-Cola launched a New AI-created flavor. (Link)
  • Mistral AI launches open-source LLM, Mistral 7B. (Link)
  • Amazon supercharged Alexa with generative AI. (Link)
  • Microsoft open sources EvoDiff, a novel protein-generating AI. (Link)
  • OpenAI upgraded ChatGPT with voice and image capabilities. (Link)
  • OpenAI releases DALL·E 3 and multimodal ChatGPT features.

  • Meta brings AI chatbots to its platforms and more.

October:

  • DALL·E 3 made available to all ChatGPT Plus and Enterprise users. (Link)
  • Amazon began testing Agility Robotics’ humanoid robot, ‘Digit’, in its warehouses. (Link)
  • ElevenLabs launches Voice Translation Tool to help overcome language barriers. (Link)
  • Google tested new ways to get more done right from Search. (Link)
  • Rewind Pendant: New AI wearable captures real-world conversations. (Link)
  • LinkedIn introduces new AI products & tools. (Link)
  • Google’s new Pixel phones feature Gen AI.

  • Epik app’s AI tech reignites 90s nostalgia.

  • Baidu enters the AI race with its ChatGPT alternative.

November:

  • The first-ever AI Safety Summit was hosted by the UK. (Link)
  • OpenAI’s New models and products were announced at DevDay. (Link)
  • Humane officially launches the AI Pin. (Link)
  • Elon Musk launches Grok, a new xAI chatbot to rival ChatGPT. (Link)
  • Pika Labs Launches ‘Pika 1.0’. (Link)
  • Google DeepMind and YouTube revealed a new AI model called ‘Lyria’. (Link)
  • OpenAI delays the launch of the custom GPT store to early 2024. (Link)
  • Stable video diffusion is available on the Stability AI platform API. (Link)
  • Amazon announced Amazon Q, the AI-powered assistant from AWS. (Link)
  • Samsung unveils its own AI, ‘Gauss,’ that can generate text, code, and images. (Link)
  • Sam Altman was fired and rehired by OpenAI. (Know What Happened the Night Before Altman’s Firing?)
  • OpenAI presents Custom GPTs and GPT-4 Turbo.


  • Nvidia’s H200 chips to power future AI.


December:

  • Google launched Gemini, an AI model that rivals GPT-4. (Link)
  • AMD releases Instinct MI300X GPU and MI300A APU chips. (Link)
  • Midjourney V6 is out! (Link)
  • Mistral launches Mixtral 8x7B, a leading open sparse mixture-of-experts (SMoE) model. (Link)
  • Microsoft released Phi-2, an SLM that beats Llama 2. (Link)
  • OpenAI is reportedly about to raise additional funding at a $100B+ valuation. (Link)

Djamgatech GPT Store

A Daily Chronicle of AI Innovations in December 2023 – Day 30: AI Daily News – December 30th, 2023

🤖 LG unveils a two-legged AI robot

📝 Former Trump lawyer cited fake court cases generated by AI

📱 Microsoft’s Copilot AI chatbot now available on iOS

🤖 LG unveils a two-legged AI robot  Source

  • LG unveils a new AI agent, an autonomous robot designed to assist with household chores using advanced technologies like voice and image recognition, natural language processing, and autonomous mobility.
  • The AI agent is equipped with the Qualcomm Robotics RB5 Platform, features a built-in camera, speaker system, and sensors, and can control smart home devices, monitor pets, and enhance security by patrolling the home and sending alerts.
  • LG aims to enhance the smart home experience by having the AI agent greet users, interpret their emotions, and provide personalized assistance, with plans to showcase this technology at the CES.

📱 Microsoft’s Copilot AI chatbot now available on iOS Source

  • Microsoft launched its Copilot app, the iOS counterpart to its Android app, providing access to advanced AI features on Apple devices.
  • The Copilot app allows users to ask questions, compose emails, summarize text, and generate images with DALL·E 3 integration.
  • Copilot offers users the more advanced GPT-4 technology for free, unlike ChatGPT, which requires a subscription for its latest model.

Silicon Valley eyes reboot of Google Glass-style headsets. (Link)

SpaceX launches two rockets—three hours apart—to close out a record year. (Link)

Soon, every employee will be both AI builder and AI consumer. (Link)

Yes, we’re already talkin’ Apple Vision Pro 2 — how it’s reportedly ‘better’ than the first. (Link)

Looking for an AI-safe job? Try writing about wine. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 29: AI Daily News – December 29th, 2023

💻 Microsoft’s first true ‘AI PCs’

💸 Google settles $5 billion consumer privacy lawsuit

🇨🇳 Nvidia to launch slower version of its gaming chip in China



🔋 Amazon plans to make its own hydrogen to power vehicles

🤖 How AI-created “virtual influencers” are stealing business from humans

💻 Microsoft’s first true ‘AI PCs’  Source

  • Microsoft’s upcoming Surface Pro 10 and Surface Laptop 6 are reported to be the company’s first ‘AI PCs’, featuring new neural processing units and support for advanced AI functionalities in the next Windows update.
  • The devices will offer options between Qualcomm’s Snapdragon X chips for ARM-based models and Intel’s 14th-gen chips for Intel versions, aiming to boost AI performance, battery life, and security.
  • Designed with AI integration in mind, the Surface Pro 10 and Surface Laptop 6 are anticipated to include enhancements like brighter, higher-resolution displays and interfaces like a Windows Copilot button for AI-assisted tasks.

🇨🇳 Nvidia to launch slower version of its gaming chip in China  Source

  • Nvidia launched the GeForce RTX 4090 D, a gaming chip for China that adheres to U.S. export controls.
  • The new chip is 5% slower than the banned RTX 4090 but still aims to provide top performance for Chinese consumers.
  • Nvidia holds about 90% of China’s AI chip market, but the export restrictions may open opportunities for domestic competitors like Huawei.

🔋 Amazon plans to make its own hydrogen to power vehicles  Source

  • Amazon is collaborating with Plug Power to produce hydrogen fuel on-site at its fulfillment center in Aurora, Colorado to power around 225 forklifts.
  • The environmental benefits of using hydrogen are under scrutiny as most hydrogen is currently produced from fossil fuels, but Amazon aims for cleaner processes by 2040.
  • While aiming for greener hydrogen, Amazon’s current on-site production still involves greenhouse gas emissions due to the use of grid-tied, fossil-fuel-based electricity.

🤖 How AI-created “virtual influencers” are stealing business from humans  Source

  • Aitana Lopez, a pink-haired virtual influencer with over 200,000 social media followers, is AI-generated and gets paid by brands for promotion.
  • Human influencers fear income loss due to competition from these digital avatars in the $21 billion content creation economy.
  • Virtual influencers have fostered high-profile brand partnerships and are seen as a cost-effective alternative to human influencers.

In this video, the author discusses multimodal LLMs, Vector-Quantized Variational Autoencoders (VQ-VAEs), and how modern models like Google’s Gemini, Parti, and OpenAI’s DALL·E generate images together with text. He covers a lot of ground, starting from the very basics (latent space, autoencoders) all the way to more complex topics (VQ-VAEs, codebooks, etc.).

A Daily Chronicle of AI Innovations in December 2023 – Day 28: AI Daily News – December 28th, 2023

🕵️‍♂️ LLM Lie Detector catches AI lies
🌐 StreamingLLM can handle unlimited input tokens
📝 DeepMind’s Promptbreeder automates prompt engineering
🧠 Meta AI decodes brain speech ~ 73% accuracy
🚗 Wayve’s GAIA-1 9B enhances autonomous vehicle training
👁️‍🗨️ OpenAI’s GPT-4 Vision has a new competitor, LLaVA-1.5
🚀 Perplexity.ai and GPT-4 can outperform Google Search
🔍 Anthropic’s latest research makes AI understandable
📚 MemGPT boosts LLMs by extending context window
🔥 GPT-4V got even better with Set-of-Mark (SoM)

The LLM Scientist Roadmap


Just came across one of the most comprehensive LLM courses on GitHub.

It covers various articles, roadmaps, Colab notebooks, and other learning resources that help you to become an expert in the field:

➡ The LLM architecture
➡ Building an instruction dataset
➡ Pre-training models
➡ Supervised fine-tuning
➡ Reinforcement Learning from Human Feedback
➡ Evaluation
➡ Quantization
➡ Inference optimization

Repo (3.2k stars): https://github.com/mlabonne/llm-course

LLM Lie Detector catching AI lies

This paper discusses how LLMs can “lie” by outputting false statements even when they know the truth. The authors propose a simple lie detector that does not require access to the LLM’s internal workings or knowledge of the truth. The detector works by asking unrelated follow-up questions after a suspected lie and using the LLM’s yes/no answers to train a logistic regression classifier.

The lie detector is highly accurate and can generalize to different LLM architectures, fine-tuned LLMs, sycophantic lies, and real-life scenarios.
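The pipeline the paper describes, yes/no follow-up answers fed into a logistic-regression classifier, can be sketched in a few lines. This is a hedged illustration, not the authors' code: the "LLM" below is a toy stand-in whose follow-up answers merely shift distribution after a lie, and the question count is an arbitrary choice.

```python
import math
import random

random.seed(0)
N_QUESTIONS = 10  # size of the unrelated follow-up battery (illustrative)

def toy_llm_followup_answers(lied: bool) -> list:
    """Simulate yes/no (1/0) answers; lying shifts the answer distribution."""
    p_yes = 0.7 if lied else 0.3
    return [1 if random.random() < p_yes else 0 for _ in range(N_QUESTIONS)]

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal logistic regression via stochastic gradient descent (stdlib only)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = max(-30.0, min(30.0, b + sum(wj * xj for wj, xj in zip(w, xi))))
            g = 1.0 / (1.0 + math.exp(-z)) - yi  # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def is_lie(w, b, answers) -> bool:
    """Classify the preceding statement from the follow-up answer vector."""
    z = max(-30.0, min(30.0, b + sum(wj * xj for wj, xj in zip(w, answers))))
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Labelled training set: y = 1 means the preceding statement was a lie.
X = [toy_llm_followup_answers(lied=bool(i % 2)) for i in range(200)]
y = [i % 2 for i in range(200)]
w, b = train_logreg(X, y)
accuracy = sum(is_lie(w, b, xi) == bool(yi) for xi, yi in zip(X, y)) / len(X)
```

The key property the paper exploits is exactly what the toy reproduces: the detector never inspects the model's internals, only its answers to unrelated questions.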

Why does this matter?

The proposed lie detector offers a practical means to address trust-related concerns, enhancing transparency, responsible use, and ethical considerations in deploying LLMs across various domains, ultimately safeguarding the integrity of information and societal well-being.

Source

StreamingLLM for efficient deployment of LLMs in streaming applications

Deploying LLMs in streaming applications, where long interactions are expected, is urgently needed but comes with challenges due to efficiency limitations and reduced performance with longer texts. Window attention provides a partial solution, but its performance plummets when initial tokens are excluded.

Recognizing the role of these tokens as “attention sinks”, new research by Meta AI (and others) has introduced StreamingLLM, a simple and efficient framework that enables LLMs to handle unlimited text lengths without fine-tuning. By keeping the attention-sink tokens in the cache alongside a sliding window of recent tokens, it can efficiently model texts of up to 4M tokens. The work further shows that pre-training models with a dedicated sink token improves streaming performance.

Here’s an illustration of StreamingLLM vs. existing methods. It decouples the LLM’s pre-training window size from its actual text generation length, paving the way for the streaming deployment of LLMs.
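The cache policy at the heart of the method can be sketched as follows. This is an assumption-level illustration, not the authors' implementation: keep a handful of initial "sink" tokens plus a sliding window of recent tokens and evict everything in between (the real method also re-assigns positions inside the rolling cache, which is omitted here).

```python
# Sketch of a StreamingLLM-style KV-cache retention policy.
def streaming_kv_positions(seq_len, n_sinks=4, window=1020):
    """Return the token positions retained in the KV cache after seq_len tokens."""
    if seq_len <= n_sinks + window:
        return list(range(seq_len))  # cache not full yet: keep everything
    sinks = list(range(n_sinks))                     # initial attention sinks
    recent = list(range(seq_len - window, seq_len))  # sliding window
    return sinks + recent

# After 1M generated tokens, the cache still holds only n_sinks + window
# entries, which is why the stream can grow without bound.
kept = streaming_kv_positions(1_000_000)
```

The cache size stays constant regardless of stream length, while the sink tokens preserve the attention pattern that plain window attention destroys.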

Why does this matter?

The ability to deploy LLMs for infinite-length inputs without sacrificing efficiency and performance opens up new possibilities and efficiencies in various AI applications.

Source

Samsung unveils a new AI fridge that scans food inside to recommend recipes, featuring a 32-inch screen with app integrations. Source

Researchers developed an “electronic tongue” with sensors and deep-learning to accurately measure and analyze complex tastes, with successful wine taste profiling. Source

Resources:

6 unexpected lessons from using ChatGPT for 1 year that 95% ignore

ChatGPT has taken the world by storm, and millions have rushed to use it. I jumped on the bandwagon from the start and, as an ML specialist, learned the ins and outs of it that 95% of users ignore. Here are 6 lessons learned over the last year to supercharge your productivity, career, and life with ChatGPT.

1. ChatGPT has changed a lot, making most prompt engineering techniques useless: The models behind ChatGPT have been updated, improved, and fine-tuned to be increasingly better. The OpenAI team worked hard to identify weaknesses in these models published across the web and in research papers, and addressed them.

A few examples: one year ago, ChatGPT (a) was bad at reasoning (it made many mistakes), (b) was unable to do maths, and (c) required lots of prompt engineering to follow a specific style.

All of these things are solved now: (a) ChatGPT breaks down reasoning steps without the need for Chain-of-Thought prompting, (b) it can recognize maths and use tools to do it (much as we reach for calculators), and (c) it has become much better at following instructions.

This is good news – it means you can focus on the instructions and tasks at hand instead of spending your energy learning techniques that are not useful or necessary.

2. Simple, straightforward prompts are always superior: Most people think that prompts need to be complex, cryptic, heavy instructions that will unlock some magical behavior. I consistently find prompt engineering resources that generate paragraphs of complex sentences and market those as good prompts. That couldn’t be further from the truth.


People need to understand that ChatGPT and most large language models like Bard/Gemini are mathematical models that learn language by looking at many examples and are then fine-tuned on human-generated instructions.

This means they will average out their understanding of language based on expressions and sentences that most people use. The simpler, more straightforward your instructions and prompts are, the higher the chances of ChatGPT understanding what you mean.

Drop the complex prompts that try to make it look like prompt engineering is a secret craft. Embrace simple, straightforward instructions. Rather, spend your time focusing on the right instructions and the right way to break down the steps that ChatGPT has to deliver (see next point!)

3. Always break down your tasks into smaller chunks: Every time I use ChatGPT for large, complex tasks or to build complex code, it makes mistakes. If I ask ChatGPT to write a complex blog post in one go, it is a perfect recipe for a dull, generic result. This is explained by a few things:

a) ChatGPT is limited by its token limit, meaning it can only take in a certain amount of input and produce a certain amount of output.

b) ChatGPT is limited by its reasoning capabilities; the more complex and multidimensional a task becomes, the more likely ChatGPT is to forget parts of it or simply make mistakes.

Instead, you should break down your tasks as much as possible, making it easier for ChatGPT to follow instructions, deliver high quality work, and be guided by your unique spin.

Example: instead of asking ChatGPT to write a blog about productivity at work, break it down as follows – Ask ChatGPT to:

  • Provide ideas about the most common ways to boost productivity at work

  • Provide ideas about unique ways to boost productivity at work

  • Combine these ideas to generate an outline for a blogpost directed at your audience

  • Expand each section of the outline with the style of writing that represents you the best

  • Change parts of the blog based on your feedback (editorial review)

  • Add a call to action at the end of the blog based on the content of the blog it has just generated

This unlocks a much more powerful experience than trying to achieve the same result in one or two steps, while allowing you to add your spin, edit ideas and writing style, and make the piece truly yours.
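The decomposition above amounts to a simple prompt chain, where each step's output feeds the next prompt. In this hedged sketch, `ask` is a placeholder for whatever chat-completion call you use (it is stubbed out here, so the structure of the pipeline, not any particular API, is the point), and the prompts themselves are illustrative.

```python
def ask(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a chat-completion request)."""
    return f"[model output for: {prompt.splitlines()[0][:48]}]"

def write_blog_post(topic: str, audience: str, style: str) -> str:
    # Each step is small enough for the model to do well, and each output
    # becomes context for the next prompt.
    common = ask(f"List the most common ways to boost {topic}.")
    unique = ask(f"List unique, rarely mentioned ways to boost {topic}.")
    outline = ask(f"Combine these ideas into a blog outline for {audience}:\n"
                  f"{common}\n{unique}")
    draft = ask(f"Expand each section of this outline in a {style} style:\n{outline}")
    edited = ask(f"Revise the draft using this feedback: tighten the intro.\n{draft}")
    return ask(f"Append a call to action that fits this post:\n{edited}")

post = write_blog_post("productivity at work", "busy managers", "personal")
```

Swapping the stub for a real API call turns this directly into the six-step workflow described above, with a natural place to inject your own edits between steps.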

4. Bard is superior when it comes to facts: While ChatGPT has consistently outperformed Bard on aspects such as creativity, writing style, and even reasoning, if you are looking for facts (and for the ability to verify facts), Bard is unbeatable. With its access to Google Search and its fact-verification tool, Bard can check and surface sources, making it easier than ever to audit its answers (and avoid taking hallucinations as truths!).

If you’re doing market research, or need facts, get those from Bard.


5. ChatGPT cannot replace you; it’s a tool for you, and the quicker you get this, the more efficient you’ll become: I have tried numerous times to make ChatGPT do everything on my behalf when creating a blog, when coding, or when building an email chain for my e-commerce businesses. This is the number one error most ChatGPT users make, and it will only render your work hollow, devoid of any soul, and, let’s be frank, easy to spot.

Instead, you must use ChatGPT as an assistant, or an intern. Teach it things. Give it ideas. Show it examples of unique work you want it to reproduce. Do the work of thinking about the unique spin, the heart of the content, the message. It’s okay to use ChatGPT to get a few ideas for your content or for how to build specific code, but make sure you do the heavy lifting in terms of ideation and creativity – then use ChatGPT to help execute.

This will allow you to maintain your thinking/creative muscle, will make your work unique and soulful (in a world where too much content is now soulless and bland), while allowing you to benefit from the scale and productivity that ChatGPT offers.

6. GPT-4 is not always better than GPT-3.5: It’s normal to think that GPT-4, being a newer OpenAI model, will always outperform GPT-3.5, but this is not what my experience shows. When using GPT models, you have to keep in mind what you’re trying to achieve: there is a trade-off between speed, cost, and quality. GPT-3.5 is around 10 times faster and around 10 times cheaper than GPT-4, with quality on par for 95% of tasks.

In the past, I used to jump to GPT-4 for everything, but now I run most intermediary steps in my content generation flows on GPT-3.5 and reserve GPT-4 for tasks that are more complex and demand more reasoning.

Example: if I am creating a blog post, I use GPT-3.5 to get ideas, build an outline, extract ideas from different sources, and expand different sections of the outline. I only use GPT-4 for the final generation and for making sure the whole text is coherent and unique.
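One way to encode this speed/cost/quality trade-off is a tiny model router: cheap, fast model for intermediary steps, the stronger model only for reasoning-heavy final passes. The task labels and the routing heuristic below are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch of routing pipeline steps to models by task complexity.
CHEAP_MODEL = "gpt-3.5-turbo"   # ~10x faster and cheaper in the author's experience
STRONG_MODEL = "gpt-4"          # reserved for complex, final-pass work

REASONING_HEAVY = {"final_generation", "coherence_check"}

def pick_model(step: str) -> str:
    """Route a pipeline step to a model based on its task label."""
    return STRONG_MODEL if step in REASONING_HEAVY else CHEAP_MODEL

pipeline = ["get_ideas", "build_outline", "expand_sections",
            "final_generation", "coherence_check"]
routing = {step: pick_model(step) for step in pipeline}
```

With this split, only two of the five steps pay GPT-4 prices, while the bulk of the tokens flow through the cheaper model.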

Enjoyed these updates? I’ve got a lot more for you to discover. As a Data Engineer who has been using ChatGPT and LLMs for the past year, and who has built software and mobile apps using LLMs, I am offering an exclusive, time-limited 10% discount on my eBook “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence”, which will help you pass AI certifications and master prompt engineering. Use these links at Apple, Google, or Amazon to access it. I would truly appreciate you leaving a positive review in return.
Enjoy 🙂

Trick to Adding Text in DALL-E 3!

Three text effects to inspire creativity:
  • Clear Overlay: Incorporates text as a translucent overlay within the image, harmoniously blending with the theme. Example: a cyberpunk cityscape with the word ‘Future’ as a translucent overlay.
  • Decal Design: Features text within a decal-like design that stands out yet complements the image’s theme. Example: a cartoon of a bear family picnic with the word ‘picnic’ in a sticker-like design.
  • Speech Bubble: Displays text within a speech or thought bubble, distinct but matching the image’s aesthetic. Example: imaginative realms with the word ‘fantasy’ in a bubble, or an enchanting scene with ‘OMG’ in a speech bubble.

The most remarkable AI releases of 2023

A Daily Chronicle of AI Innovations in December 2023 – Day 27: AI Daily News – December 27th, 2023

🎥 Apple quietly released an open-source multimodal LLM in October
🎵 Microsoft introduces WaveCoder, a fine-tuned Code LLM
💡 Alibaba announces TF-T2V for text-to-video generation

AI-Powered breakthrough in Antibiotics Discovery

👩‍⚕️ Scientists from MIT and Harvard have achieved a groundbreaking discovery in the fight against drug-resistant bacteria, potentially saving millions of lives annually.

➰ Utilizing AI, they have identified a new class of antibiotics through the screening of millions of chemical compounds.

⭕ These newly discovered non-toxic compounds have shown promise in killing drug-resistant bacteria, with their effectiveness further validated in mouse experiments.

🌐 This development is crucial as antibiotic resistance poses a severe threat to global health.

〰 According to the WHO, antimicrobial resistance (AMR) was directly responsible for over 1.27 million deaths worldwide in 2019 and was associated with nearly 5 million deaths in total.

↗ The economic implications are equally staggering, with the World Bank predicting that antibiotic resistance could lead to over $1 trillion in healthcare costs by 2050 and cause annual GDP losses exceeding $1 trillion by 2030.

🙌This scientific breakthrough not only offers hope for saving lives but also holds the potential to significantly mitigate the looming economic impact of AMR.

Source: https://lnkd.in/dSbG6qcj

Apple quietly released an open-source multimodal LLM in October

Researchers from Apple and Columbia University released an open-source multimodal LLM called Ferret in October 2023. At the time, the release, which included the code and weights for research use only rather than under a commercial license, did not receive much attention.

The chatter increased recently because Apple announced it had made a key breakthrough in deploying LLMs on iPhones: it released two new research papers introducing techniques for 3D avatars and efficient language model inference. The advancements were hailed as potentially enabling more immersive visual experiences and allowing complex AI systems to run on consumer devices such as the iPhone and iPad.

Why does this matter?

Ferret is Apple’s unexpected entry into the open-source LLM landscape. Also, with open-source models from Mistral making recent headlines and Google’s Gemini model coming to the Pixel Pro and eventually to Android, there has been increased chatter about the potential for local LLMs to power small devices.

Source

Microsoft introduces WaveCoder, a fine-tuned Code LLM

New Microsoft research studies the effect of multi-task instruction data on enhancing the generalization ability of Code LLMs. It introduces CodeOcean, a dataset with 20K instruction instances covering four universal code-related tasks.

This method and dataset enable WaveCoder, which significantly improves the generalization ability of the foundation model on diverse downstream tasks. WaveCoder has shown the best generalization ability among open-source models on code repair and code summarization tasks, and it maintains high efficiency on previous code generation benchmarks.

Why does this matter?

This research offers a significant contribution to the field of instruction data generation and fine-tuning models, providing new insights and tools for enhancing performance in code-related tasks.

Source

Alibaba announces TF-T2V for text-to-video generation

Diffusion-based text-to-video generation has witnessed impressive progress in the past year yet still falls behind text-to-image generation. One of the key reasons is the limited scale of publicly available data, considering the high cost of video captioning. Instead, collecting unlabeled clips from video platforms like YouTube could be far easier.

Motivated by this, Alibaba Group’s research has come up with a novel text-to-video generation framework, termed TF-T2V, which can directly learn with text-free videos. It also explores its scaling trend. Experimental results demonstrate the effectiveness and potential of TF-T2V in terms of fidelity, controllability, and scalability.

Why does this matter?

Different from most prior works that rely heavily on video-text data and train models on the widely-used watermarked and low-resolution datasets, TF-T2V opens up new possibilities for optimizing with text-free videos or partially paired video-text data, making it more scalable and versatile in widespread scenarios, such as high-definition video generation.

Source

What Else Is Happening in AI on December 27th, 2023

📱Apple’s iPhone design chief enlisted by Jony Ive & Sam Altman to work on AI devices.

Sam Altman and legendary designer Jony Ive are enlisting Apple Inc. veteran Tang Tan to work on a new AI hardware project to create devices with the latest capabilities. Tan will join Ive’s design firm, LoveFrom, which will shape the look and capabilities of the new products. Altman plans to provide the software underpinnings. (Link)

🤖Microsoft Copilot AI gets a dedicated app on Android; no sign-in required.

Microsoft released a new dedicated app for Copilot on Android devices. The free app is available for download today, and an iOS version will launch soon. Unlike Bing, the app focuses solely on delivering access to Microsoft’s AI chat assistant. There’s no clutter from Bing’s search experience or rewards, but you will still find ads. (Link)

🌐Salesforce posts a new AI-enabled commercial promoting “Ask More of AI”.

It is part of its “Ask More of AI” campaign featuring Salesforce pitchman and ambassador Matthew McConaughey. (Link)

📚AI is telling bedtime stories to your kids now.

AI can now tell tales featuring your kids’ favorite characters. However, it’s copyright chaos– and a major headache for parents and guardians. One such story generator called Bluey-GPT begins each session by asking kids their name, age, and a bit about their day, then churns out personalized tales starring Bluey and her sister Bingo. (Link)

🧙‍♂️Researchers have a magic tool to understand AI: Harry Potter.

J.K. Rowling’s Harry Potter is finding renewed relevance in a very different body of literature: AI research. A growing number of researchers are using the best-selling series to test how generative AI systems learn and unlearn certain pieces of information. A notable recent example is a paper titled “Who’s Harry Potter?”. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 26: AI Daily News – December 26th, 2023

🎥 Meta’s 3D AI for everyday devices
💻 ByteDance presents DiffPortrait3D for zero-shot portrait view
🚀 Can a SoTA LLM run on a phone without internet?

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Master GPT-4, Gemini, Generative AI & LLMs – Simplified Guide for Everyday Users: Demystifying Artificial Intelligence – OpenAI, ChatGPT, Google Bard, AI ML Quiz, AI Certifications Prep, Prompt Engineering Guide,” available at Etsy, Shopify, Apple, Google, or Amazon.

Meta’s 3D AI for everyday devices

Meta Research and the Codec Avatars Lab (with MIT) have proposed PlatoNeRF, a method to recover scene geometry from a single view using two-bounce signals captured by a single-photon lidar. It reconstructs lidar measurements with NeRF, which enables physically accurate 3D geometry to be learned from a single view.

The method outperforms related work in single-view 3D reconstruction, reconstructs scenes with fully occluded objects, and learns metric depth from any view. Lastly, the research demonstrates generalization to varying sensor parameters and scene properties.

Why does this matter?

The research is a promising direction as single-photon lidars become more common and widely available in everyday consumer devices like phones, tablets, and headsets.

Source

ByteDance presents DiffPortrait3D for zero-shot portrait view

ByteDance research presents DiffPortrait3D, a novel conditional diffusion model capable of generating consistent novel portraits from sparse input views.

Given a single portrait as reference (left), DiffPortrait3D is adept at producing high-fidelity and 3D-consistent novel view synthesis (right). Notably, without any finetuning, DiffPortrait3D is universally effective across a diverse range of facial portraits, encompassing, but not limited to, faces with exaggerated expressions, wide camera views, and artistic depictions.

Why does this matter?

The framework opens up possibilities for accessible 3D reconstruction and visualization from a single picture.

Source

Can a SoTA LLM run on a phone without internet?

Amidst the rapid evolution of generative AI, on-device LLMs offer solutions to privacy, security, and connectivity challenges inherent in cloud-based models.

New research at Haltia, Inc. explores the feasibility and performance of on-device large language model (LLM) inference on various Apple iPhone models. Leveraging existing literature on running multi-billion parameter LLMs on resource-limited devices, the study examines the thermal effects and interaction speeds of a high-performing LLM across different smartphone generations. It presents real-world performance results, providing insights into on-device inference capabilities.

It finds that newer iPhones can handle LLMs, but achieving sustained performance requires further advancements in power management and system integration.
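Memory is the binding constraint for on-device inference. Here is a back-of-envelope sketch (not from the Haltia study; the layer count, model width, and context length below are illustrative defaults roughly matching a 7B-parameter model) of why weight quantization is what makes phone-scale LLMs plausible at all:

```python
def llm_memory_gb(n_params: float, bits_per_weight: int,
                  n_layers: int = 32, d_model: int = 4096,
                  context_len: int = 2048) -> float:
    """Rough RAM estimate: quantized weights plus an fp16 KV cache."""
    weights_bytes = n_params * bits_per_weight / 8
    # KV cache: 2 tensors (K and V) per layer, 2 bytes per fp16 value
    kv_cache_bytes = 2 * n_layers * context_len * d_model * 2
    return (weights_bytes + kv_cache_bytes) / 1e9

# A 7B model at 4-bit quantization fits in a modern phone's RAM budget;
# the same model at fp16 does not.
print(round(llm_memory_gb(7e9, 4), 1))   # ≈ 4.6 GB
print(round(llm_memory_gb(7e9, 16), 1))  # ≈ 15.1 GB
```

This also hints at why the study observes thermal limits: even a model that fits in memory keeps the memory bus and compute units saturated during generation.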

Why does this matter?

Running LLMs on smartphones or even other edge devices has significant advantages. This research is pivotal for enhancing AI processing on mobile devices and opens avenues for privacy-centric and offline AI applications.

Source

What Else Is Happening in AI on December 26th, 2023

📰Apple reportedly wants to use the news to help train its AI models.

Apple is talking with some big news publishers about licensing their news archives and using that information to help train its generative AI systems in “multiyear deals worth at least $50 million.” It has been in touch with publications like Condé Nast, NBC News, and IAC. (Link)

🤖Sam Altman-backed Humane to ship ChatGPT-powered AI Pin starting March 2024.

Humane plans to ship to customers with priority orders first, dispatching orders chronologically by when they were placed. The Ai Pin, with the battery booster, will cost $699. A $24 monthly Humane subscription provides cellular connectivity, a dedicated number, and data coverage. (Link)

💰OpenAI seeks fresh funding round at a valuation at or above $100 billion.

Preliminary discussions with potentially involved investors have begun. Details like the terms, valuation, and timing of the funding round are yet to be finalized and could still change. If the round happens, OpenAI would become the second-most valuable startup in the US, behind Elon Musk’s SpaceX. (Link)

🔍AI companies are required to disclose copyrighted training data under a new bill.

Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act, filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA), would direct the Federal Trade Commission (FTC) to work with NIST to establish rules. (Link)

🔬AI discovers a new class of antibiotics to kill drug-resistant bacteria.

AI has helped discover a new class of antibiotics that can treat infections caused by drug-resistant bacteria. This could help in the battle against antibiotic resistance, which was responsible for killing more than 1.2 million people in 2019, a number expected to rise in the coming decades. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 25: AI Daily News – December 25th, 2023

📚 Why Incumbents LOVE AI by Shomik Ghosh
🎥 Tutorial: How to make and share custom GPTs by Charlie Guo
🚀 Startup productivity in the age of AI by jason@calacanis.com
💡 Practical Tips for Finetuning LLMs Using LoRA by Sebastian Raschka, PhD
🔧 The Interface Era of AI by Nathan Lambert
🧮 “Math is hard” — if you are an LLM – and why that matters by Gary Marcus
🎯 OpenAI’s alignment problem by Casey Newton
👔 In Praise of Boring AI by Ethan Mollick
🎭 How to create consistent characters in Midjourney by Linus Ekenstam
📱 The Mobile Revolution vs. The AI Revolution by Rex Woodbury


Why Incumbents LOVE AI

Since the release of ChatGPT, we have seen an explosion of startups like Jasper, Writer AI, Stability AI, and more. You might expect incumbents to have been left behind.

Far from it: Adobe released Firefly, Intercom launched Fin, heck even Coca-Cola embraced Stable Diffusion and made a freaking incredible ad (below)!

So why are incumbents and enterprises able to move so quickly? Here are some brief thoughts from Shomik Ghosh:

  • LLMs are not a new platform: Unlike massive tech AND org shifts like Mobile or Cloud, adopting AI doesn’t entail a massive tech or organizational overhaul. It is an enablement shift (with data enterprises already have).
  • Talent retention is hard…except when AI is involved: AI is a retention tool. For incumbents, the best thing to happen is to be able to tell the best engineers who have been around for a while that they get to work on something new.

The article also talks about the opportunities ahead.

Source

Tutorial: How to make and share custom GPTs

This tutorial by Charlie Guo explains how to create and share custom GPTs (Generative Pre-Trained Transformers). GPTs are pre-packaged versions of ChatGPT with customizations and additional features. They can be used for various purposes, such as creative writing, coloring book generation, negotiation, and recipe building.

GPTs are different from plugins in that they offer more capabilities and can be chosen at the start of a conversation. The GPT Store, similar to an app store, will soon be launched by OpenAI, allowing users to browse and save publicly available GPTs. The tutorial provides step-by-step instructions on building a GPT and publishing it.

Source

Example: MedumbaGPT

Creating a custom GPT model to help people learn the Medumba language, a Bantu language spoken in Cameroon, is an exciting project. Here’s a step-by-step plan to bring this idea to fruition:

1. Data Collection and Preparation

  • Gather Data: Compile a comprehensive dataset of the Medumba language, including common phrases, vocabulary, grammar rules, and conversational examples. Ensure the data is accurate and diverse.
  • Data Processing: Format and preprocess the data for model training. This might include translating phrases to and from Medumba, annotating grammatical structures, and organizing conversational examples.

2. Model Training

  • Select a Base Model: Choose a suitable base GPT model. For a language-learning application, a model that excels in natural language understanding and generation would be ideal.
  • Fine-Tuning: Use your Medumba dataset to fine-tune the base GPT model. This process involves training the model on your specific dataset to adapt it to the nuances of the Medumba language.
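As a minimal sketch of how steps 1 and 2 connect in practice (the Medumba strings below are placeholders, not real translations, and the file name and system prompt are illustrative assumptions), curated translation pairs are typically formatted into the JSONL chat format that most fine-tuning pipelines expect:

```python
import json

# Placeholder pairs only; a genuine dataset needs native-speaker review.
pairs = [
    ("Translate to Medumba: Good morning", "<Medumba phrase 1>"),
    ("Translate to Medumba: Thank you", "<Medumba phrase 2>"),
]

def to_chat_record(prompt: str, completion: str) -> dict:
    """One training example in system/user/assistant chat format."""
    return {"messages": [
        {"role": "system", "content": "You are a Medumba language tutor."},
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": completion},
    ]}

# One JSON object per line, as fine-tuning endpoints generally require.
with open("medumba_train.jsonl", "w", encoding="utf-8") as f:
    for prompt, completion in pairs:
        f.write(json.dumps(to_chat_record(prompt, completion),
                           ensure_ascii=False) + "\n")
```

Keeping the data in this neutral format also makes it easy to switch base models later without redoing the collection work.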

3. Application Development

  • Web Interface: Develop a user-friendly web interface where users can interact with the GPT model. This interface should be intuitive and designed for language learning.
  • Features: Implement features like interactive dialogues, language exercises, translations, and grammar explanations. Consider gamification elements to make learning engaging.

4. Integration and Deployment

  • Integrate GPT Model: Integrate the fine-tuned GPT model with the web application. Ensure the model’s responses are accurate and appropriate for language learners.
  • Deploy the Application: Choose a reliable cloud platform for hosting the application. Ensure it’s scalable to handle varying user loads.

5. Testing and Feedback

  • Beta Testing: Before full launch, conduct beta testing with a group of users. Gather feedback on the application’s usability and the effectiveness of the language learning experience.
  • Iterative Improvement: Use feedback to make iterative improvements to the application. This might involve refining the model, enhancing the user interface, or adding new features.

6. Accessibility and Marketing

  • Make It Accessible: Ensure the application is accessible to your target audience. Consider mobile responsiveness and multilingual support.
  • Promotion: Use social media, language learning forums, and community outreach to promote your application. Collaborating with language learning communities can also help in gaining visibility.

7. Maintenance and Updates

  • Regular Updates: Continuously update the application based on user feedback and advancements in AI. This includes updating the language model and the application features.
  • Support & Maintenance: Provide support for users and maintain the infrastructure to ensure smooth operation.

Technical and Ethical Considerations

  • Data Privacy: Adhere to data privacy laws and ethical guidelines, especially when handling user data.
  • Cultural Sensitivity: Ensure the representation of the Medumba language and culture is respectful and accurate.

Collaboration and Funding

  • Consider collaborating with linguists, language experts, and AI specialists.
  • Explore funding options like grants, crowdfunding, or partnerships with educational institutions.

Startup productivity in the age of AI: automate, deprecate, delegate (A.D.D.)

The article by jason@calacanis.com discusses the importance of implementing the A.D.D. framework (automate, deprecate, delegate) in startups to increase productivity in the age of AI. It emphasizes the need to automate tasks that can be done with software, deprecate tasks that have little impact, and delegate tasks to lower-salaried individuals.

The article also highlights the importance of embracing the automation and delegation of work, as it allows for higher-level and more meaningful work to be done. The A.D.D. framework is outlined with steps on how to implement it effectively. The article concludes by emphasizing the significance of this framework in the current startup landscape.

Source

Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)

LoRA is among the most widely used and effective techniques for efficiently training custom LLMs. For those interested in open-source LLMs, it’s an essential technique worth familiarizing oneself with.

In this insightful article, Sebastian Raschka, PhD discusses the primary lessons derived from his experiments. Additionally, he addresses some of the frequently asked questions related to the topic. If you are interested in finetuning custom LLMs, these insights will save you some time in “the long run” (no pun intended).
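To make the core idea concrete, here is a minimal NumPy sketch of the LoRA reparameterization (the dimensions, rank, and alpha value are illustrative choices, not values from the article): the pretrained weight W stays frozen, and only a low-rank delta BA is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8              # r << d is the low-rank bottleneck
alpha = 16                              # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """h = W x + (alpha / r) * B A x; only A and B would receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model starts exactly at the base model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) for LoRA vs d_in*d_out for full finetuning.
print(r * (d_in + d_out), "vs", d_in * d_out)   # 1024 vs 4096
```

Because B starts at zero, training begins from the unmodified pretrained model, which is one reason LoRA finetuning tends to be stable.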

Source

The interface era of AI

In this article, author Nathan Lambert explains the era of AI interfaces, where evaluation is about the collective abilities of AI models tested in real open-ended use. Vibes-based evaluations and secret prompts are becoming popular among researchers for assessing models. Deploying and interacting with models are crucial steps in the workflow, and engineering prowess is essential for successful research.

Chat-based AI interfaces are gaining prominence over search, and they may even integrate product recommendations into model tuning. The future will see AI-powered hardware devices, such as smart glasses and AI pins, that will revolutionize interactions with AI. Apple’s AirPods with cameras could be a game-changer in this space.

Source

A Daily Chronicle of AI Innovations in December 2023 – Day 23: AI Daily News – December 23rd, 2023

🍎 Apple wants to use the news to help train its AI models

💸 OpenAI in talks to raise new funding at $100 bln valuation

⚖️ AI companies would be required to disclose copyrighted training data under new bill

🚫 80% of Americans think presenting AI content as human-made should be illegal

🎃 Microsoft just paid $76 million for a Wisconsin pumpkin farm

🧮 Google DeepMind’s LLM solves complex math
📘 OpenAI released its Prompt Engineering Guide
🤫 ByteDance secretly uses OpenAI’s Tech
🔥 OpenAI’s new ‘Preparedness Framework’ to track AI risks
🚀 Google Research’s new approach to improve performance of LLMs
🖼️ NVIDIA’s new GAvatar creates realistic 3D avatars
🎥 Google’s VideoPoet is the ultimate all-in-one video AI
🎵 Microsoft Copilot turns your ideas into songs with Suno
💡 Runway introduces text-to-speech and video ratios for Gen-2
🎬 Alibaba’s DreaMoving produces HQ customized human videos
💻 Apple optimizes LLMs for Edge use cases
🚀 Nvidia’s biggest Chinese competitor unveils cutting-edge AI GPUs
🧚‍♂️ Meta’s Fairy can generate videos 44x faster
🤖 NVIDIA presents new text-to-4D model
🌟 Midjourney V6 has enhanced prompting and coherence

🍎 Apple wants to use the news to help train its AI models

  • Apple is in talks with major publishers like Condé Nast and NBC News to license news archives for training its AI, with potential deals worth $50 million.
  • Publishers show mixed reactions, concerned about legal liabilities from Apple’s use of their content, while some are positive about the partnership.
  • While Apple has been less noticeable in AI advancements compared to OpenAI and Google, it’s actively investing in AI research, including improving Siri and other AI features for future iOS releases.
  • Source

💸 OpenAI in talks to raise new funding at $100 bln valuation

  • OpenAI is in preliminary talks for a new funding round at a valuation of $100 billion or more, potentially becoming the second-most valuable startup in the U.S. after SpaceX, with details yet to be finalized.
  • The company is also completing a separate tender offer allowing employees to sell shares at an $86 billion valuation, reflecting its rapid growth spurred by the success of ChatGPT and significant interest in AI technology.
  • Amidst this growth, OpenAI is discussing raising $8 to $10 billion for a new chip venture, aiming to compete with Nvidia in the AI chip market, even as it navigates recent leadership changes and strategic partnerships.
  • Source

⚖️ AI companies would be required to disclose copyrighted training data under new bill

  • The AI Foundation Model Transparency Act requires foundation model creators to disclose their sources of training data to the FTC and align with NIST’s AI Risk Management Framework, among other reporting requirements.
  • The legislation emphasizes training data transparency and includes provisions for AI developers to report on “red teaming” efforts, model limitations, and computational power used, addressing concerns about copyright, bias, and misinformation.
  • The bill seeks to establish federal rules for AI transparency and is pending committee assignment and discussion amidst a busy election campaign season.
  • Source

🚫 80% of Americans think presenting AI content as human-made should be illegal

  • According to a survey by the AI Policy Institute, 80% of Americans believe it should be illegal to present AI-generated content as human-made, reflecting broad concern over ethical implications in journalism and media.
  • Despite Sports Illustrated’s denial of using AI for content creation, the public’s overwhelming disapproval suggests a significant demand for transparency and proper disclosure in AI-generated content.
  • The survey also indicated strong bipartisan agreement on the ethical concerns and legal implications of using AI in media, with 84% considering the deceptive use of AI unethical and 80% supporting its illegalization.
  • Source

🧮 Google DeepMind’s LLM solves complex math

Google DeepMind’s latest Large Language Model (LLM) showcased its remarkable capability by solving intricate mathematical problems. This advancement demonstrates the potential of LLMs in complex problem-solving and analytical tasks.

📘 OpenAI released its Prompt Engineering Guide

OpenAI released a comprehensive Prompt Engineering Guide, offering valuable insights and best practices for effectively interacting with AI models. This guide is a significant resource for developers and researchers aiming to maximize the potential of AI through optimized prompts.

🤫 ByteDance secretly uses OpenAI’s Tech

Reports emerged that ByteDance, the parent company of TikTok, has been clandestinely utilizing OpenAI’s technology. This revelation highlights the widespread and sometimes undisclosed adoption of advanced AI tools in the tech industry.

🔥 OpenAI’s new ‘Preparedness Framework’ to track AI risks

OpenAI introduced a ‘Preparedness Framework’ designed to monitor and assess risks associated with AI developments. This proactive measure aims to ensure the safe and ethical progression of AI technologies.

🚀 Google Research’s new approach to improve performance of LLMs

Google Research unveiled a novel approach aimed at enhancing the performance of Large Language Models. This breakthrough promises to optimize LLMs, making them more efficient and effective in processing and generating language.

🖼️ NVIDIA’s new GAvatar creates realistic 3D avatars

NVIDIA announced its latest innovation, GAvatar, a tool capable of creating highly realistic 3D avatars. This technology represents a significant leap in digital imagery, offering new possibilities for virtual reality and digital representation.

🎥 Google’s VideoPoet is the ultimate all-in-one video AI

Google introduced VideoPoet, a comprehensive AI tool designed to revolutionize video creation and editing. VideoPoet combines multiple functionalities, streamlining the video production process with AI-powered efficiency.

🎵 Microsoft Copilot turns your ideas into songs with Suno

Microsoft Copilot, in collaboration with Suno, unveiled an AI-powered feature that transforms user ideas into songs. This innovative tool opens new creative avenues for music production and songwriting.

💡 Runway introduces text-to-speech and video ratios for Gen-2

Runway introduced new features in its Gen-2 version, including advanced text-to-speech capabilities and customizable video ratios. These enhancements aim to provide users with more creative control and versatility in content creation.

🎬 Alibaba’s DreaMoving produces HQ customized human videos

Alibaba’s DreaMoving project marked a significant advancement in AI-generated content, producing high-quality, customized human videos. This technology heralds a new era in personalized digital media.

💻 Apple optimizes LLMs for Edge use cases

Apple announced optimizations to its Large Language Models specifically for Edge use cases. This development aims to enhance AI performance in Edge computing, offering faster and more efficient AI processing closer to the data source.

🚀 Nvidia’s biggest Chinese competitor unveils cutting-edge AI GPUs

Nvidia’s leading Chinese competitor made a bold move by unveiling its own range of cutting-edge AI GPUs. This development signals increasing global competition in the AI chip market.

A Daily Chronicle of AI Innovations in December 2023 – Day 22: AI Daily News – December 22nd, 2023

🎥 Meta’s Fairy can generate videos 44x faster
🤖 NVIDIA presents new text-to-4D model
🌟 Midjourney V6 has enhanced prompting and coherence

🚄 Hyperloop One is shutting down

🤖 Google might already be replacing some human workers with AI

🎮 British teenager behind GTA 6 hack receives indefinite hospital order

👺 Intel CEO says Nvidia was ‘extremely lucky’ to become the dominant force in AI

🔮 Microsoft is stopping its Windows mixed reality platform

Meta’s Fairy can generate videos 44x faster

GenAI Meta research has introduced Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications. Fairy not only addresses limitations of previous models, including memory and processing speed, but also improves temporal consistency through a unique data augmentation strategy.

Remarkably efficient, Fairy generates 120-frame 512×384 videos (4-second duration at 30 FPS) in just 14 seconds, outpacing prior works by at least 44x. A comprehensive user study, involving 1000 generated samples, confirms that the approach delivers superior quality, decisively outperforming established methods.

Why does this matter?

Fairy offers a transformative approach to video editing, building on the strengths of image-editing diffusion models. Moreover, it tackles the memory and processing speed constraints observed in preceding models along with quality. Thus, it firmly establishes its superiority, as further corroborated by the extensive user study.

Source

NVIDIA presents a new text-to-4D model

NVIDIA research presents Align Your Gaussians (AYG) for high-quality text-to-4D dynamic scene generation. It can generate diverse, vivid, detailed and 3D-consistent dynamic 4D scenes, achieving state-of-the-art text-to-4D performance.

AYG uses dynamic 3D Gaussians with deformation fields as its dynamic 4D representation. An advantage of this representation is its explicit nature, which allows us to easily compose different dynamic 4D assets in large scenes. AYG’s dynamic 4D scenes are generated through score distillation, leveraging composed text-to-image, text-to-video and 3D-aware text-to-multiview-image latent diffusion models.

Why does this matter?

AYG can open up promising new avenues for animation, simulation, digital content creation, and synthetic data generation, taking a step beyond the literature on text-to-3D synthesis by also capturing our world’s rich temporal dynamics.

Source

Midjourney V6 has improved prompting and image coherence

Midjourney has started alpha-testing its V6 models. Here is what’s new in MJ V6:

  • Much more accurate prompt following as well as longer prompts
  • Improved coherence, and model knowledge
  • Improved image prompting and remix
  • Minor text drawing ability
  • Improved upscalers, with both ‘subtle’ and ‘creative’ modes (increases resolution by 2x)

An entirely new prompting method has been developed, so users will need to re-learn how to prompt.

Why does this matter?

By the looks of it on social media, users seem to like version 6 much better. Midjourney’s prompting had long been somewhat esoteric and technical, which now changes. Plus, in-image text is something that had eluded Midjourney since its release in 2022, even as rival AI image generators such as OpenAI’s DALL-E 3 and Ideogram launched this type of feature.

Source

Google might already be replacing some human workers with AI

  • Google is considering the use of AI to “optimize” its workforce, potentially replacing human roles in its large customer sales unit with AI tools that automate tasks previously done by employees overseeing relationships with major advertisers.
  • The company’s Performance Max tool, enhanced with generative AI, now automates ad creation and placement across various platforms, reducing the need for human input and significantly increasing efficiency and profit margins.
  • While the exact impact on Google’s workforce is yet to be determined, a significant number of the 13,500 people devoted to sales work could be affected, with potential reassignments or layoffs expected to be announced in the near future.
  • Source

👺 Intel CEO says Nvidia was ‘extremely lucky’ to become the dominant force in AI

  • Intel CEO Pat Gelsinger suggests Nvidia’s AI dominance is due to luck and Intel’s inactivity, while highlighting past mistakes like canceling the Larrabee project as missed opportunities.
  • Gelsinger aims to democratize AI at Intel with new strategies like neural processing units in CPUs and open-source software, intending to revitalize Intel’s competitive edge.
  • Nvidia’s Bryan Catanzaro rebuts Gelsinger, attributing Nvidia’s success to clear vision and execution rather than luck, emphasizing the strategic differences between the companies.
  • Source

🔮 Microsoft is stopping its Windows mixed reality platform

  • Microsoft has ended the “mixed reality” feature in Windows which combined augmented and virtual reality capabilities.
  • The mixed reality portal launched in 2017 is being removed from Windows, affecting users with VR headsets.
  • Reports suggest Microsoft may also discontinue its augmented reality headset, HoloLens, after cancelling plans for a third version.
  • Source

2024: 12 predictions for AI, including 6 moonshots

  1. MLMs – Immerse Yourself in Multimodal Generation: The progression towards fully generative multimodal models is accelerating. 2022 marked a breakthrough in text generation, while 2023 witnessed the rise of Gemini-like models that encompass multimodal capabilities. By 2024, we envision a future where these models will seamlessly generate music, videos, and text, and construct immersive narratives lasting several minutes, all at an accessible cost and with quality comparable to 4K cinema. Brace yourself: multimedia large models are coming. likelihood 8/10.
  2. SLMs – Going beyond the Search and Generative dichotomy: LLMs and search are two facets of a unified cognitive process. LLMs utilise search results as dynamic input for their prompts, employing a retrieval-augmented generation (RAG) mechanism. Additionally, they leverage search to validate their generated text. Despite this symbiotic relationship, LLMs and search remain distinct entities, with search acting as an external and resource-intensive scaffolding for LLMs. Is there a more intelligent approach that seamlessly integrates these two components into a unified system? The world is ready for search large models or, shortly, SLMs. likelihood 8/10.
  3. RLMs – Relevancy is king, hallucinations are bad: LLMs have been likened to dream machines that can hallucinate, and this capability has been considered not a bug but a ‘feature’. I disagree: while hallucinations can occasionally trigger serendipitous discoveries, it’s crucial to distinguish between relevant and irrelevant information. We can expect to see an increasing incorporation of relevance signals into transformers, echoing the early search engines that began utilising link information such as PageRank to enhance the quality of results. For LLMs, the process would be analogous, with the only difference being that the generated information is not retrieved but created. The era of Relevant large models is upon us. likelihood 10/10.
  4. LinWindow – Going beyond quadratic context window: The transformer architecture’s attention mechanism employs a context window, which inherently presents a quadratic computational complexity challenge. A larger context window would significantly enhance the ability to incorporate past chat histories and dynamically inject content at prompt time. While several approaches have been proposed to alleviate this complexity by employing approximation schemes, none have matched the performance of the quadratic attention mechanism. Is there a more intelligent alternative approach? (Mamba is a promising paper) In short, we need LinWindow. likelihood 6/10.
  5. AILF – AI Lingua Franca: As the field of artificial intelligence (AI) continues to evolve at an unprecedented pace, we are witnessing a paradigm shift from siloed AI models to unified AI platforms. Much like Kubernetes emerged as the de facto standard for container orchestration, could a single AI platform emerge as the lingua franca of AI, facilitating seamless integration and collaboration across various AI applications and domains? likelihood 8/10.
  6. CAIO – Chief AI Officer (CAIO): The role of the CAIO will be rapidly gaining prominence as organisations recognise the transformative potential of AI. As AI becomes increasingly integrated into business operations, the need for a dedicated executive to oversee and guide AI adoption becomes more evident. The CAIO will serve as the organisation’s chief strategist for AI, responsible for developing a comprehensive AI strategy that aligns with the company’s overall business goals. They will also be responsible for overseeing the implementation and deployment of AI initiatives across the organization, ensuring that AI is used effectively and responsibly. In addition, they will also play a critical role in managing the organisation’s AI ethics and governance framework. likelihood 10/10.
  7. [Moonshot] InterAI – Models are connected everywhere: With the advent of Gemini, we’ve witnessed a surge in the development of AI models tailored for specific devices, ranging from massive cloud computing systems to the mobile devices held in our hands. The next stage in this evolution is to interconnect these devices, forming a network of intelligent AI entities that can collaborate and determine the most appropriate entity to provide a specific response in an economical manner. Imagine a federated AI system with routing and selection mechanisms, distributed and decentralised. In essence, InterAI is the future of the interNet. likelihood 3/10.
  8. [Moonshot] NextLM – Beyond Transformers and Diffusion: The transformer architecture, introduced in a groundbreaking 2017 paper from Google, reigns supreme in the realm of AI technology today. Gemini, Bard, PaLM, ChatGPT, Midjourney, GitHub Copilot, and other groundbreaking generative AI models and products are all built upon the foundation of transformers. Diffusion models, employed by Stability and Google ImageGen for image, video, and audio generation, represent another formidable approach. These two pillars form the bedrock of modern generative AI. Could 2024 witness the emergence of an entirely new paradigm? likelihood 3/10.
  9. [Moonshot] NextLearn: In 2022, I predicted the emergence of a novel learning algorithm, but that prediction did not materialize in 2023. However, Geoffrey Hinton’s Forward-Forward algorithm presented a promising approach that deviates from the traditional backpropagation method by employing two forward passes, one with real data and the other with synthetic data generated by the network itself. While further research is warranted, Forward-Forward holds the potential for significant advancements in AI. More extensive research is required – likelihood 2/10.
  10. [Moonshot] FullReasoning – LLMs are proficient at generating hypotheses, but this only addresses one aspect of reasoning. The reasoning process encompasses at least three phases: hypothesis generation, hypothesis testing, and hypothesis refinement. During hypothesis generation, the creative phase unfolds, including the possibility of hallucinations. During hypothesis testing, the hypotheses are validated, and those that fail to hold up are discarded. Optionally, hypotheses are refined, and new ones emerge as a result of validation. Currently, language models are only capable of the first phase. Could we develop a system that can rapidly generate numerous hypotheses in an efficient manner, validate them, and then refine the results in a cost-effective manner? CoT, ToT, and implicit code execution represent initial steps in this direction. A substantial body of research is necessary – likelihood 2/10.
  11. [Moonshot] NextProcessor – The rapid advancement of artificial intelligence (AI) has placed a significant strain on the current computing infrastructure, particularly GPUs (graphics processing units) and TPUs (Tensor Processing Units). As AI models become increasingly complex and data-intensive, these traditional hardware architectures are reaching their limits. To accommodate the growing demands of AI, a new paradigm of computation is emerging that transcends the capabilities of GPUs and TPUs. This emerging computational framework, often referred to as “post-Moore” computing, is characterized by a departure from the traditional von Neumann architecture, which has dominated computing for decades. Post-Moore computing embraces novel architectures and computational principles that aim to address the limitations of current hardware and enable the development of even more sophisticated AI models. The emergence of these groundbreaking computing paradigms holds immense potential to revolutionise the field of AI, enabling the development of AI systems that are far more powerful, versatile, and intelligent than anything we have witnessed to date. likelihood 3/10.
  12. [Moonshot] QuanTransformer – The Transformer architecture, a breakthrough in AI, has transformed the way machines interact with and understand language. Could the merging of Transformer with Quantum Computing provide an even greater leap forward in our quest for artificial intelligence that can truly understand the world around us? QSAN is a baby step in that direction. likelihood 2/10.

As we look ahead to 2024, the field of AI stands poised to make significant strides, revolutionizing industries and shaping our world in profound ways. The above 12 predictions for AI in 2024, including 6 ambitious moonshot projects, could push the boundaries of what we thought possible, paving the way to more powerful AIs. What are your thoughts?

Source: Antonio Giulli

Large language models often display harmful biases and stereotypes, which may be particularly concerning in high-risk fields such as medicine and health.

A recent large-scale study (https://lnkd.in/eJr7bZxt) published in the Lancet Digital Health robustly showed biases for a variety of important medical use cases in OpenAI’s flagship GPT-4 model. I was invited to comment on the article to highlight possible mitigation strategies (https://lnkd.in/eYgaUkzm).

The bottom line: this problem persists even in large-scale high-performance models, and a variety of approaches including new technological innovations will be needed to make these systems safe for clinical use.

AI Robot chemist discovers molecule to make oxygen on Mars

Source: (Space.com and USA Today)

Quick Overview:

  • Calculating the 3.7 million molecules that could be created from the six different metallic elements in Martian rocks would have been all but impossible without the help of AI.

  • Any crewed journey to Mars will require a way to create and maintain sufficient oxygen to sustain human life; rather than hauling enormous oxygen tanks, manufacturing oxygen on Mars is the more practical approach.

  • The team plans to extract water from Martian ice, which can then be split into oxygen and hydrogen.

What Else Is Happening in AI on December 22nd, 2023

🆕Google AI research has developed ‘Hold for Me’ and a Magic Eraser update.

‘Hold for Me’ is an AI-driven technology that processes audio directly on your Pixel device and can determine whether you are still on hold or someone has picked up the call. Also, Magic Eraser now uses gen AI to fill in details when users remove unwanted objects from photos. (Link)

💬Google is rolling out ‘AI support assistant’ chatbot to provide product help.

When visiting the support pages for some Google products, now you’ll encounter a “Hi, I’m a new Al support assistant. Chat with me to find answers and solve account issues” dialog box in the bottom-right corner of your screen. (Link)

🏆Dictionary.com selected “Hallucinate” as its 2023 Word of the Year.

The selection points to the word’s AI sense: “to produce false information and present it as fact” – a phenomenon the broader world needs to understand. (Link)

❤️Chatty robot helps seniors fight loneliness through AI companionship.

ElliQ’s creator, Intuition Robotics, and senior-assistance officials say it is the only device using AI specifically designed to lessen the loneliness and isolation experienced by many older Americans. (Link)

📉Google Gemini Pro falls behind free ChatGPT, says study.

A recent study by Carnegie Mellon University (CMU) shows that Google’s latest large language model, Gemini Pro, lags behind GPT-3.5 and far behind GPT-4 in benchmarks. The results contradict the information provided by Google at the Gemini presentation. This highlights the need for neutral benchmarking institutions or processes. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 21: AI Daily News – December 21st, 2023

🎥 Alibaba’s DreaMoving produces HQ customized human videos
💻 Apple optimises LLMs for Edge use cases
🚀 Nvidia’s biggest Chinese competitor unveils cutting-edge AI GPUs

🔬 Scientists discover first new antibiotics in over 60 years using AI

🧠 The brain-implant company going for Neuralink’s jugular

🛴 E-scooter giant Bird files for bankruptcy

🤖 Apple wants AI to run directly on its hardware instead of in the cloud

🍎 Apple reportedly plans Vision Pro launch by February

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Master GPT-4, Gemini, Generative AI & LLMs – Simplified Guide for Everyday Users: Demystifying Artificial Intelligence – OpenAI, ChatGPT, Google Bard, AI ML Quiz, AI Certifications Prep,  Prompt Engineering,” available at Etsy, Shopify, Apple, Google, or Amazon

Alibaba’s DreaMoving produces HQ customized human videos

Alibaba’s Animate Anyone saga continues, now with the release of DreaMoving by its research team. DreaMoving is a diffusion-based, controllable video generation framework to produce high-quality customized human videos.

It can generate high-quality and high-fidelity videos given a guidance sequence and a simple content description (e.g., text and a reference image) as input. Specifically, DreaMoving demonstrates proficiency in identity control through a face reference image, precise motion manipulation via a pose sequence, and comprehensive video appearance control guided by a specified text prompt. It also exhibits robust generalization capabilities on unseen domains.

Why does this matter?

DreaMoving sets a new standard in the field after Animate Anyone, facilitating the creation of realistic human videos/animations. With video content ruling social and digital landscapes, such frameworks will play a pivotal role in shaping the future of content creation and consumption. Instagram and TikTok reels could explode with this, since anyone can create short-form videos, potentially threatening influencers.

Source

Apple optimises LLMs for Edge use cases

Apple has published a paper, ‘LLM in a flash: Efficient Large Language Model Inference with Limited Memory’, outlining a method for running LLMs on devices whose available DRAM is smaller than the model itself. This involves storing the model parameters on flash memory and bringing them into DRAM on demand.

The methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x increase in inference speed on CPU and a 20-25x increase on GPU, compared to naive loading approaches.
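The core idea, loading parameters from slow flash only when inference needs them and keeping a hot subset resident in fast DRAM, can be illustrated with a toy cache. This is a minimal sketch of the general on-demand-loading pattern, not Apple's actual technique (the paper's windowing and row-column bundling optimizations are omitted), and all names here are made up for illustration:

```python
from collections import OrderedDict

# Toy setup: ten "layers" of weights live in slow flash storage,
# while DRAM can only hold four layers at a time.
FLASH = {f"layer_{i}": f"weights_{i}" for i in range(10)}
DRAM_CAPACITY = 4

class DramCache:
    """LRU cache standing in for DRAM; misses simulate slow flash reads."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.flash_loads = 0  # count slow loads to show reuse pays off

    def get_layer(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)   # mark as recently used
            return self.cache[name]
        self.flash_loads += 1              # cache miss: slow flash read
        weights = FLASH[name]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[name] = weights
        return weights

dram = DramCache(DRAM_CAPACITY)
# Two passes over the first four layers: only the first pass hits flash.
for _ in range(2):
    for i in range(4):
        dram.get_layer(f"layer_{i}")
print(dram.flash_loads)  # → 4
```

The second pass is served entirely from the cache, which is why reusing recently activated parameters (as the paper's windowing strategy does) reduces flash traffic so dramatically.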

Why does this matter?

This research is significant as it paves the way for effective inference of LLMs on devices with limited memory. And also because Apple plans to integrate GenAI capabilities into iOS 18.

Apart from Apple, Samsung recently introduced Gauss, its own on-device LLM. Google announced its on-device LLM, Gemini Nano, which is set to be introduced in the upcoming Google Pixel 8 phones. It is evident that on-device LLMs are becoming a focal point of AI innovation.

Source

Nvidia’s biggest Chinese competitor unveils cutting-edge AI GPUs

Chinese GPU manufacturer Moore Threads announced the MTT S4000, its latest graphics card for AI and data center compute workloads. Its brand-new flagship will feature in the KUAE Intelligent Computing Center, a data center containing clusters of 1,000 S4000 GPUs each.

Moore Threads is also partnering with many other Chinese companies, including Lenovo, to get its KUAE hardware and software ecosystem off the ground.

Why does this matter?

Moore Threads claims KUAE supports mainstream LLMs like GPT and frameworks like (Microsoft) DeepSpeed. Although Moore Threads isn’t positioned to compete with the likes of Nvidia, AMD, or Intel any time soon, this might not be a critical requirement for China. Given the U.S. chip restrictions, Moore Threads might save China from having to reinvent the wheel.

Source

🔬 Scientists discover first new antibiotics in over 60 years using AI

  • Scientists have discovered a new class of antibiotics capable of combating drug-resistant MRSA bacteria, marking the first significant breakthrough in antibiotic discovery in 60 years, thanks to advanced AI-driven deep learning models.
  • The team from MIT employed an enlarged deep learning model and extensive datasets to predict the activity and toxicity of new compounds, leading to the identification of two promising antibiotic candidates.
  • These new findings, which aim to open the black box of AI in pharmaceuticals, could significantly impact the fight against antimicrobial resistance, as nearly 35,000 people die annually in the EU from such infections.
  • Source

🤖 Apple wants AI to run directly on its hardware instead of in the cloud

  • Apple is focusing on running large language models on iPhones to improve AI without relying on cloud computing.
  • Their research suggests potential for faster, offline AI response and enhanced privacy due to on-device processing.
  • Apple’s work could lead to more sophisticated virtual assistants and new AI features in smartphones.
  • Source

AI Death Predictor Calculator: A Glimpse into the Future

This innovative AI death predictor calculator aims to forecast an individual’s life trajectory, offering insights into life expectancy and financial status with an impressive 78% accuracy rate. Developed by leveraging data from Danish health and demographic records for six million people, Life2vec takes into account a myriad of factors, ranging from medical history to socio-economic conditions. Read more here

Accuracy Unveiled

Life2vec’s accuracy is a pivotal aspect that sets it apart. Rigorous testing on a diverse group of individuals aged between 35 and 65, half of whom passed away between 2016 and 2020, showcased the tool’s predictive prowess. The calculator successfully anticipated who would live and who would not with an accuracy rate of 78%, underscoring its potential as a reliable life forecasting tool.

Bill Gates: AI is about to supercharge the innovation pipeline in 2024


Some key takeaways:

  • The greatest impact of AI will likely be in drug discovery and combating antibiotic resistance.

  • AI has the potential to bring a personalized tutor to every student around the world.

  • High-income countries like the US are 18–24 months away from significant levels of AI use by the general population.

  • Gates believes that AI will help reduce inequities around the world by improving outcomes in health, education and other areas.

My work has always been rooted in a core idea: Innovation is the key to progress. It’s why I started Microsoft, and it’s why Melinda and I started the Gates Foundation more than two decades ago.

Innovation is the reason our lives have improved so much over the last century. From electricity and cars to medicine and planes, innovation has made the world better. Today, we are far more productive because of the IT revolution. The most successful economies are driven by innovative industries that evolve to meet the needs of a changing world.

My favorite innovation story, though, starts with one of my favorite statistics: Since 2000, the world has cut in half the number of children who die before the age of five.

How did we do it? One key reason was innovation. Scientists came up with new ways to make vaccines that were faster and cheaper but just as safe. They developed new delivery mechanisms that worked in the world’s most remote places, which made it possible to reach more kids. And they created new vaccines that protect children from deadly diseases like rotavirus.

In a world with limited resources, you have to find ways to maximize impact. Innovation is the key to getting the most out of every dollar spent. And artificial intelligence is about to accelerate the rate of new discoveries at a pace we’ve never seen before.

One of the biggest impacts so far is on creating new medicines. Drug discovery requires combing through massive amounts of data, and AI tools can speed up that process significantly. Some companies are already working on cancer drugs developed this way. But a key priority of the Gates Foundation in AI is ensuring these tools also address health issues that disproportionately affect the world’s poorest, like AIDS, TB, and malaria.

We’re taking a hard look at the wide array of AI innovation in the pipeline right now and working with our partners to use these technologies to improve lives in low- and middle-income countries.

In the fall, I traveled to Senegal to meet with some of the incredible researchers doing this work and to celebrate the 20th anniversary of the foundation’s Grand Challenges initiative. When we first launched Grand Challenges—the Gates Foundation’s flagship innovation program—it had a single goal: Identify the biggest problems in health and give grants to local researchers who might solve them. We asked innovators from developing countries how they would address health challenges in their communities, and then we gave them the support to make it happen.

Many of the people I met in Senegal were taking on the first-ever AI Grand Challenge. The foundation didn’t have AI projects in mind when we first set that goal back in 2003, but I’m always inspired by how brilliant scientists are able to take advantage of the latest technology to tackle big problems.

It was great to learn from Amrita Mahale about how the team at ARMMAN is developing an AI chatbot to improve health outcomes for pregnant women.

Much of their work is in the earliest stages of development—there’s a good chance we won’t see any of them used widely in 2024 or even 2025. Some might not even pan out at all. The work that will be done over the next year is setting the stage for a massive technology boom later this decade.

Still, it’s impressive to see how much creativity is being brought to the table. Here is a small sample of some of the most ambitious questions currently being explored:

  • Can AI combat antibiotic resistance? Antibiotics are magical in their ability to end infection, but if you use them too often, pathogens can learn how to ignore them. This is called antimicrobial resistance, or AMR, and it is a huge issue around the world—especially in Africa, which has the highest mortality rates from AMR. Nana Kofi Quakyi from the Aurum Institute in Ghana is working on an AI-powered tool that helps health workers prescribe antibiotics without contributing to AMR. The tool will comb through all the available information—including local clinical guidelines and health surveillance data about which pathogens are currently at risk of developing resistance in the area—and make suggestions for the best drug, dosage, and duration.
  • Can AI bring personalized tutors to every student? The AI education tools being piloted today are mind-blowing because they are tailored to each individual learner. Some of them—like Khanmigo and MATHia—are already remarkable, and they’ll only get better in the years ahead. One of the things that excites me the most about this type of technology is the possibility of localizing it to every student, no matter where they live. For example, a team in Nairobi is working on Somanasi, an AI-based tutor that aligns with the curriculum in Kenya. The name means “learn together” in Swahili, and the tutor has been designed with the cultural context in mind so it feels familiar to the students who use it.
  • Can AI help treat high-risk pregnancies? A woman dies in childbirth every two minutes. That’s a horrifying statistic, but I’m hopeful that AI can help. Last year, I wrote about how AI-powered ultrasounds could help identify pregnancy risks. This year, I was excited to meet some of the researchers at ARMMAN, who hope to use artificial intelligence to improve the odds for new mothers in India. Their large language model will one day act as a copilot for health workers treating high-risk pregnancies. It can be used in both English and Telugu, and the coolest part is that it automatically adjusts to the experience level of the person using it—whether you’re a brand-new nurse or a midwife with decades of experience.
  • Can AI help people assess their risk for HIV? For many people, talking to a doctor or nurse about their sexual history can be uncomfortable. But this information is super important for assessing risk for diseases like HIV and prescribing preventive treatments. A new South African chatbot aims to make HIV risk assessment a lot easier. It acts like an unbiased and nonjudgmental counselor who can provide around-the-clock advice. Sophie Pascoe and her team are developing it specifically with marginalized and vulnerable populations in mind—populations that often face stigma and discrimination when seeking preventive care. Their findings suggest that this innovative approach may help more women understand their own risk and take action to protect themselves.
  • Could AI make medical information easier to access for every health worker? When you’re treating a critical patient, you need quick access to their medical records to know if they’re allergic to a certain drug or have a history of heart problems. In places like Pakistan, where many people don’t have any documented medical history, this is a huge problem. Maryam Mustafa’s team is working on a voice-enabled mobile app that would make it a lot easier for maternal health workers in Pakistan to create medical records. It asks a series of prompts about a patient and uses the responses to fill out a standard medical record. Arming health workers with more data will hopefully improve the country’s pregnancy outcomes, which are among the worst in the world.

There is a long road ahead for projects like these. Significant hurdles remain, like how to scale up projects without sacrificing quality and how to provide adequate backend access to ensure they remain functional over time. But I’m optimistic that we will solve them. And I’m inspired to see so many researchers already thinking about how we deploy new technologies in low- and middle-income countries.

We can learn a lot from global health about how to make AI more equitable. The main lesson is that the product must be tailored to the people who will use it. The medical information app I mentioned is a great example: It’s common for people in Pakistan to send voice notes to one another instead of sending a text or email. So, it makes sense to create an app that relies on voice commands rather than typing out long queries. And the project is being designed in Urdu, which means there won’t be any translation issues.

If we make smart investments now, AI can make the world a more equitable place. It can reduce or even eliminate the lag time between when the rich world gets an innovation and when the poor world does.

“We can learn a lot from global health about how to make AI more equitable. The main lesson is that the product must be tailored to the people who will use it.”

If I had to make a prediction, in high-income countries like the United States, I would guess that we are 18–24 months away from significant levels of AI use by the general population. In African countries, I expect to see a comparable level of use in three years or so. That’s still a gap, but it’s much shorter than the lag times we’ve seen with other innovations.

The core of the Gates Foundation’s work has always been about reducing this gap through innovation. I feel like a kid on Christmas morning when I think about how AI can be used to get game-changing technologies out to the people who need them faster than ever before. This is something I am going to spend a lot of time thinking about next year.

ChatGPT Prompting Advice by OpenAI (with examples)

In case you missed it, OpenAI released a new prompting guide. I thought it was going to be pretty generic but it’s actually very helpful and profound.

I want to share my key takeaways that I thought were the most insightful, simplified a bit (as OpenAI’s guide is a bit complicated, imo). I also included some examples of how I would apply OpenAI’s advice.

My 4 favourite takeaways:

  1. Split big problems into smaller ones

If you have a big or complicated question, try breaking it into smaller parts.

For example, don’t ask: “write a marketing plan on x”, but first ask “what makes an excellent marketing plan?” and then tackle individually each of the steps of a marketing plan with ChatGPT.
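To make the decomposition concrete, here is a small sketch of the pattern; the wording of the sub-questions is my own example, not from OpenAI's guide. Each answer is carried forward as context for the next prompt (a placeholder stands in for the model's reply so the sketch stays self-contained):

```python
# Instead of one big "write a marketing plan" prompt, ask a sequence of
# smaller questions and feed each answer into the next prompt.
sub_questions = [
    "What sections does an excellent marketing plan contain?",
    "Draft the target-audience section for product X.",
    "Draft the pricing section for product X.",
]

context = ""
prompts = []
for q in sub_questions:
    # Each sub-prompt carries the accumulated context forward.
    prompts.append((context + "\n" + q).strip())
    # In real use, the model's reply would be appended here.
    context += f"\n[answer to: {q}]"

print(len(prompts))  # → 3
```

The first prompt is just the first question; later prompts grow as earlier answers accumulate, which is exactly why smaller steps keep each individual request focused.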

2. Use examples of your ideal outcome

Providing examples can guide ChatGPT to better answers. It’s similar to showing someone an example of what you’re talking about to make sure you’re both on the same page.

For example, if you have already created a marketing plan then you can use that as example input.

3. Use reference materials from external sources

If you need to solve a specific problem then you can also bring external sources within ChatGPT to get the job done faster and better.

For example, let’s imagine you are still working on that marketing plan and you are not able to get to the right results with only using ChatGPT.

You can go to a reliable source that tells you how to create a solid marketing plan, for example a CMO with a marketing blog. You can provide that as input for ChatGPT to build on, simply by copying the information directly into ChatGPT.

4. Use chain of thought for complex problems (my favourite)

This one’s like asking someone to explain their thinking process out loud.

When you’re dealing with tough questions, instead of just asking for the final answer, you can ask ChatGPT to show its “chain of thought”.

It’s like when you’re solving a math problem and write down each step. This helps in two ways:

  1. It makes the reasoning of ChatGPT clear, so you can see how it got to the answer.

  2. It’s easier to spot a mistake and correct it to get to your ideal outcome.

It also ‘slows down’ ChatGPT’s thinking, which can lead to a better outcome.
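As a concrete sketch, a chain-of-thought request can be phrased by prepending an instruction to show the steps; the helper and its wording below are my own illustration, not from OpenAI's guide:

```python
# Hypothetical helper: wrap a question in a chain-of-thought instruction
# so the model shows its reasoning before giving a final answer.
def make_cot_messages(question: str) -> list:
    system = (
        "Reason step by step. Show every intermediate calculation on its "
        "own line, then state the final answer on a separate last line."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = make_cot_messages(
    "A store sells pens at 3 for $2. How much do 12 pens cost?"
)
for m in msgs:
    print(m["role"])  # → system, then user
```

Because the reply now lists each step (e.g., "12 pens is 4 groups of 3; 4 × $2 = $8"), a wrong intermediate step is easy to spot and correct in a follow-up message.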

2024 is world’s biggest election year ever and AI experts say we’re not prepared

  • The year 2024 is expected to have the largest number of elections worldwide, with over two billion people across 50 countries heading to the polls.

  • Experts warn that we are not prepared for the impact of AI on these elections, as generative AI tools like ChatGPT and Midjourney have gone mainstream.

  • There is a concern about AI-driven misinformation and deepfakes spreading at a larger scale, particularly in the run-up to the elections.

  • Governments are considering regulations for AI, but there is a need for an agreed international approach.

  • Fact-checkers are calling for public awareness of the dangers of AI fakes to help people recognize fake images and question what they see online.

  • Social media companies are legally required to take action against misinformation and disinformation, and the UK government has introduced the Online Safety Act to remove illegal AI-generated content.

  • Individuals are advised to verify what they see, diversify their news sources, and familiarize themselves with generative AI tools to understand how they work.

Source: https://news.sky.com/story/2024-is-worlds-biggest-election-year-ever-and-ai-experts-say-were-not-prepared-13030960

What Else Is Happening in AI on December 21st, 2023

📥ChatGPT now lets you archive chats.

Archive removes chats from your sidebar without deleting them. You can see your archived chats in Settings. The feature is currently available on the Web and iOS and is coming soon on Android. (Link)

📰Runway ML is Introducing TELESCOPE MAGAZINE.

An exploration of art, technology, and human creativity. It is designed and developed in-house and will be available for purchase in early January 2024. 

💰Anthropic to raise $750 million in Menlo Ventures-led deal.

Anthropic is in talks to raise $750 million in a venture round led by Menlo Ventures that values the two-year-old AI startup at $15 billion (not including the investment), more than three times its valuation this spring. The round hasn’t finalized. The final price could top $18 billion. (Link)

🤝LTIMindtree collaborates with Microsoft for AI-powered applications.

It will use Microsoft Azure OpenAI Service and Azure Cognitive Search to enable AI-led capabilities, including content summarisation, graph-led knowledge structuring, and an innovative copilot. (Link)

🌐EU to expand support for AI startups to tap its supercomputers for model training.

The plan is for “centers of excellence” to be set up to support the development of dedicated AI algorithms that can run on the EU’s supercomputers. An “AI support center” is also on the way to have “a special track” for SMEs and startups to get help to get the most out of the EU’s supercomputing resources. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 20: AI Daily News – December 20th, 2023

🎥 Google’s VideoPoet is the ultimate all-in-one video AI
🎵 Microsoft Copilot turns your ideas into songs with Suno
💡 Runway introduces text-to-speech and video ratios for Gen-2


🧠 AI beats humans for the first time in physical skill game

🔍 Google Gemini is not even as good as GPT-3.5 Turbo, researchers find

🚀 Blue Origin’s New Shepard makes triumphant return flight

🚫 Adobe explains why it abandoned the Figma deal

🚤 Elon Musk wants to turn Cybertrucks into boats

Google’s VideoPoet is the ultimate all-in-one video AI


To explore the application of language models in video generation, Google Research introduces VideoPoet, an LLM that is capable of a wide variety of video generation tasks, including:

  • Text-to-video
  • Image-to-video
  • Video editing
  • Video stylization
  • Video inpainting and outpainting
  • Video-to-audio

VideoPoet is a simple modeling method that can convert any autoregressive language model or large language model (LLM) into a high-quality video generator. It demonstrates state-of-the-art video generation, in particular in producing a wide range of large, interesting, and high-fidelity motions.

Why does this matter?

Leading video generation models are almost exclusively diffusion-based. But VideoPoet uses LLMs’ exceptional learning capabilities across various modalities to generate videos that look smoother and more consistent over time.

Notably, it can also generate audio for video inputs and longer-duration clips from a short input context, showing strong object-identity preservation not seen in prior works.

Source

Microsoft Copilot turns your ideas into songs with Suno

Microsoft has partnered with Suno, a leader in AI-based music creation, to bring its capabilities to Microsoft Copilot. Users can enter prompts into Copilot and have Suno, via a plug-in, bring their musical ideas to life. Suno can generate complete songs, including lyrics, instrumentals, and singing voices.

This will open new horizons for creativity and fun, making music creation accessible to everyone. The experience will begin rolling out to users starting today, ramping up in the coming weeks.

Why does this matter?

While many of the ethical and legal issues around AI-synthesized music have yet to be ironed out, tech giants and startups are increasingly investing in GenAI-based music creation tech. DeepMind and YouTube partnered to release Lyria and Dream Track, Meta has published several experiments, Stability AI and Riffusion have launched platforms and apps; now, Microsoft is joining the movement.

Source

Runway introduces text-to-speech and video ratios for Gen-2

  • Text to Speech: Users can now generate voiceovers and dialogue with simple-to-use and highly expressive Text-to-speech. It is available for all plans starting today.
  • Ratios for Gen-2: Quickly and easily change the ratio of your generations to better suit the channels you’re creating for. Choose from 16:9, 9:16, 1:1, 4:3, 3:4.

Why does this matter?

These new features add more control and expressiveness to creations inside Runway. It also plans to release more updates for improved control over the next few weeks. Certainly, audio and video GenAI is set to take off in the coming year.

Source

What Else Is Happening in AI on December 20th, 2023

🌍Google expands access to AI coding in Colab across 175 locales.

It announced the expansion of code assistance features to all Colab users, including users on free-of-charge plans. Anyone in eligible locales can now try AI-powered code assistance in Colab. (Link)

🔐Stability AI announces paid membership for commercial use of its models.

It is now offering a subscription service that standardizes and changes how customers can use its models for commercial purposes. With three tiers, this will aim to strike a balance between profitability and openness. (Link)

🎙️TomTom and Microsoft develop an in-vehicle AI voice assistant.

Digital maps and location tech specialist TomTom partnered with Microsoft to develop an AI voice assistant for vehicles. It enables voice interaction with location search, infotainment, and vehicle command systems. It uses multiple Microsoft products, including Azure OpenAI Service. (Link)

🏠Airbnb is using AI to help clampdown on New Year’s Eve parties globally.

The AI-powered technology will help enforce restrictions on certain NYE bookings in several countries and regions. Airbnb’s anti-party measures have driven a decrease in the rate of party reports over NYE, with thousands of people globally blocked from booking last year. (Link)

🤖AI robot outmaneuvers humans in maze run breakthrough.

Researchers at ETH Zurich have created an AI robot called CyberRunner they say surpassed humans at the popular game Labyrinth. It navigated a small metal ball through a maze by tilting its surface, avoiding holes across the board, and mastering the toy in just six hours. (Link)

Google Gemini is not even as good as GPT-3.5 Turbo, researchers find

  • Google’s Gemini Pro, designed to compete with ChatGPT, performs worse on many tasks compared to OpenAI’s older model, GPT-3.5 Turbo, according to new research.
  • Despite Google claiming superior performance in its own research, an independent study showcases Gemini Pro falling behind GPT models in areas like reasoning, mathematics, and programming.
  • However, Google’s Gemini Pro excels in language translation across several languages, despite its generally lower performance in other AI benchmarks.
  • Source

Microsoft Copilot now lets you create AI songs from text prompts. Source.

Google Brain co-founder tests AI doomsday threat by trying to get ChatGPT to kill everyone. Source

GPT-4 driven robot takes selfies, ‘eats’ popcorn. Source

A Daily Chronicle of AI Innovations in December 2023 – Day 19: AI Daily News – December 19th, 2023

🔥 OpenAI’s new ‘Preparedness Framework’ to track AI risks
🚀 Google Research’s new approach to improve performance of LLMs
🖼️ NVIDIA’s new GAvatar creates realistic 3D avatars

🤖 OpenAI lays out plan for dealing with dangers of AI

💔 Adobe and Figma call off $20 billion acquisition after regulatory scrutiny

⌚ Apple will halt sales of its newest watches in the US over a patent dispute

🚗 TomTom and Microsoft are launching an AI driving assistant

💸 Google to pay $700 million in Play Store settlement

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Master GPT-4, Gemini, Generative AI & LLMs – Simplified Guide for Everyday Users: Demystifying Artificial Intelligence – OpenAI, ChatGPT, Google Bard, AI ML Quiz, AI Certifications Prep, Prompt Engineering,” available at Etsy, Shopify, Apple, Google, or Amazon

OpenAI’s new ‘Preparedness Framework’ to track AI risks

OpenAI published a new safety preparedness framework to manage AI risks. It is strengthening its safety measures by creating a safety advisory group and granting the board veto power over risky AI: the advisory group will provide recommendations to leadership, and the board will have the authority to veto decisions.

OpenAI’s updated “Preparedness Framework” aims to identify and address catastrophic risks. The framework categorizes risks and outlines mitigations, with high-risk models prohibited from deployment and critical risks halting further development. The safety advisory group will review technical reports and make recommendations to leadership and the board, ensuring a higher level of oversight.
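The deployment rules described above can be sketched as a simple gate. The following is an illustrative sketch only, not OpenAI’s actual tooling; the category names and the `gate` helper are hypothetical:

```python
# Illustrative sketch of the Preparedness Framework's deployment gate:
# models whose worst post-mitigation risk score is "high" cannot be
# deployed, and a "critical" score halts further development.
# Category names and this helper are hypothetical, not OpenAI code.

LEVELS = ["low", "medium", "high", "critical"]

def gate(post_mitigation_scores: dict) -> str:
    """Return the most restrictive action implied by a model's
    post-mitigation risk scores across tracked categories."""
    worst = max(post_mitigation_scores.values(), key=LEVELS.index)
    if worst == "critical":
        return "halt development"
    if worst == "high":
        return "develop only; do not deploy"
    return "eligible to deploy"

print(gate({"cybersecurity": "low", "persuasion": "medium"}))
# eligible to deploy
print(gate({"cybersecurity": "high", "autonomy": "low"}))
# develop only; do not deploy
```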

Why does this matter?

OpenAI’s updated safety policies and oversight procedures demonstrate a commitment to responsible AI development. As AI systems grow more powerful, thoughtfully managing risks becomes critical. OpenAI’s Preparedness Framework provides transparency into how they categorize and mitigate different types of AI risks.

Source

Google Research’s new approach to improve LLM performance

Google Research released a new approach to improving the performance of LLMs on complex natural-language questions. The approach combines knowledge retrieval with the LLM, using a ReAct-style agent that can reason over and act upon external knowledge.

The agent is refined through a ReST-like method that iteratively trains on previous trajectories, using reinforcement learning and AI feedback for continuous self-improvement. After just two iterations, a fine-tuned small model is produced that achieves comparable performance to the large model but with significantly fewer parameters.
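The reason-and-act loop described above can be sketched as follows. This is a toy illustration, not Google’s code: the “LLM” here is a scripted stub, while in the real system each step would be a model call, and trajectories like this one would be collected for the ReST-style iterative fine-tuning:

```python
# Toy ReAct-style loop: the model alternates reasoning/acting steps,
# a tool call returns an observation, and the loop ends on "Finish:".
# KB, search, and stub_llm are hypothetical stand-ins for illustration.

KB = {"capital of France": "Paris"}  # toy external knowledge source

def search(query: str) -> str:
    return KB.get(query, "no result")

def stub_llm(transcript: str) -> str:
    # Scripted: first query the tool, then answer from the observation.
    if "Observation:" not in transcript:
        return "Action: search[capital of France]"
    return "Finish: Paris"

def react(question: str, max_steps: int = 4) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = stub_llm(transcript)
        if step.startswith("Finish:"):
            return step.split("Finish:", 1)[1].strip()
        query = step[len("Action: search["):-1]
        transcript += f"\n{step}\nObservation: {search(query)}"
    return "gave up"

print(react("What is the capital of France?"))  # Paris
```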

Why does this matter?

Having access to relevant external knowledge gives the system greater context for reasoning through multi-step problems. For the AI community, this technique demonstrates how the performance of language models can be improved by focusing on knowledge and reasoning abilities in addition to language mastery.

Source

NVIDIA’s new GAvatar creates realistic 3D avatars

Nvidia has announced GAvatar, a new technology that allows for creating realistic and animatable 3D avatars using Gaussian splatting. Gaussian splatting combines the advantages of explicit (mesh) and implicit (NeRF) 3D representations.

However, previous methods using Gaussian splatting had limitations in generating high-quality avatars and suffered from learning instability. To overcome these challenges, GAvatar introduces a primitive-based 3D Gaussian representation, uses neural implicit fields to predict Gaussian attributes, and employs a novel SDF-based implicit mesh learning approach.

GAvatar outperforms existing methods in terms of appearance and geometry quality and achieves fast rendering at high resolutions.

Why does this matter?

This cleverly combines the best of the mesh and neural-network graphical approaches. Meshes allow precise user control, while neural networks handle complex animations. By predicting avatar attributes with neural networks, GAvatar enables easy customization, and its use of Gaussian splatting reaches new levels of realism.

Source

What Else Is Happening in AI on December 19th, 2023

🚀 Accenture launches GenAI Studio in Bengaluru, India, to accelerate data and AI

It’s part of a $3bn investment. The studio will offer services such as the proprietary GenAI model “switchboard,” customization techniques, model-managed services, and specialized training programs. The company plans to double its AI talent to 80K people in the next three years through hiring, acquisitions, and training. (Link)

🧳 Expedia is looking to use AI to compete with Google’s trip-planning business

Expedia wants to develop personalized customer recommendations based on their travel preferences and previous trips to bring more direct traffic. They aim to streamline the travel planning process by getting users to start their search on its platform instead of using external search engines like Google. (Link)

🤝 Jaxon AI partners with IBM Watsonx to combat AI hallucination in LLMs

The company’s technology, Domain-Specific AI Language (DSAIL), aims to provide more reliable AI solutions. While AI hallucination in content generation may not be catastrophic in some cases, it can have severe consequences if it occurs in military technology. (Link)

👁️ AI-Based retinal analysis for childhood autism diagnosis with 100% accuracy

Researchers have developed a method in which a deep learning algorithm detects autism by analyzing photographs of children’s retinas, providing an objective screening tool for early diagnosis. This is especially useful when access to a specialist child psychiatrist is limited. (Link)

🌊 Conservationists using AI to help protect coral reefs from climate change

The Coral Restoration Foundation (CRF) in Florida has developed a tool called CeruleanAI, which uses AI to analyze 3D maps of reefs and monitor restoration efforts. AI allows conservationists to track the progress of restoration efforts more efficiently and make a bigger impact. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 18: AI Daily News – December 18th, 2023

🧮 Google DeepMind’s LLM solves complex math
📘 OpenAI released its Prompt Engineering Guide
🤫 ByteDance secretly uses OpenAI’s Tech

🚀 Jeff Bezos discusses plans for trillion people to live in huge cylindrical space stations

💰 Elon Musk told bankers they wouldn’t lose any money on Twitter purchase

👂 Despite the denials, ‘your devices are listening to you,’ says ad company

🚗 Tesla’s largest recall won’t fix Autopilot safety issues, experts say

Google DeepMind’s LLM solves complex math

Google DeepMind has used an LLM-based system called FunSearch to solve an unsolved math problem. FunSearch combines a language model called Codey with other systems that suggest and evaluate code intended to solve the problem. After several iterations, FunSearch produced a correct and previously unknown solution to the cap set problem.

This approach differs from DeepMind’s previous tools, which treated math problems as puzzles in games like Go or Chess. FunSearch has the advantage of finding solutions to a wide range of problems by producing code, and it has shown promising results in solving the bin packing problem.
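For a sense of the kind of program FunSearch evolves for bin packing, here is a plain first-fit baseline heuristic. This is an illustrative sketch, not DeepMind’s code; FunSearch searches for heuristics that beat baselines like this one by using fewer bins:

```python
def first_fit(items, capacity):
    """Greedy first-fit: place each item in the first bin with room,
    opening a new bin when none fits. FunSearch evolves code playing
    this role, scoring candidates by how few bins they use."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no existing bin had room
            bins.append([item])
    return bins

packed = first_fit([4, 8, 1, 4, 2, 1], capacity=10)
print(len(packed))  # 2
```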

Why does this matter?

FunSearch’s ability to solve an unsolved math problem shows AI matching high-level human skills. Advances in core reasoning abilities like those FunSearch displays will likely unlock further progress toward even more capable AI. Together, these interrelated impacts mean automated math discoveries like this matter greatly for advancing AI toward more complex human thinking.

Source

OpenAI released its Prompt Engineering Guide

OpenAI released its own Prompt Engineering Guide. It shares strategies and tactics for improving results from LLMs like GPT-4, and the methods described can sometimes be combined for greater effect. OpenAI encourages experimentation to find the methods that work best for you.

The OpenAI Platform guide provides six strategies for getting better results with language models:

  • Write clear instructions
  • Provide reference text
  • Split complex tasks into simpler subtasks
  • Give the model time to think
  • Use external tools to compensate for the model’s weaknesses
  • Test changes systematically

By following these strategies, users can improve the performance and reliability of the language models.
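Two of the guide’s strategies, clear instructions with delimiters and splitting a complex task into simpler subtasks, can be applied mechanically when constructing prompts. The helper below is a hypothetical illustration of that idea, not code from the guide:

```python
# Build a prompt that (a) delimits the input document with triple
# quotes and (b) decomposes the task into explicit numbered subtasks.
# build_prompt and the sample text are made up for this example.

def build_prompt(document: str, subtasks: list[str]) -> str:
    steps = "\n".join(f"{i}. {t}" for i, t in enumerate(subtasks, 1))
    return (
        "You are a careful technical editor.\n"
        f"Perform these steps in order:\n{steps}\n"
        "Work step by step and label each step's output.\n"
        'Document (delimited by triple quotes):\n"""\n'
        f"{document}\n\"\"\""
    )

prompt = build_prompt(
    "LLMs predict the next token.",
    ["Summarize the document in one sentence.",
     "List any technical terms it uses."],
)
print(prompt)
```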

Why does this matter?

Releasing an open prompt engineering guide aligns with OpenAI’s mission to benefit humanity. By empowering more people with skills to wield state-of-the-art models properly, outcomes can be directed toward more constructive goals rather than misuse – furthering responsible AI development.

Source

ByteDance secretly uses OpenAI’s Tech

ByteDance, the parent company of TikTok, has been secretly using OpenAI’s technology to develop its own LLM, called Project Seed. This goes against OpenAI’s terms of service, which prohibit using its model outputs to develop competing AI models.

Internal documents confirm that ByteDance has relied on the OpenAI API for training and evaluating Project Seed. This practice is considered a faux pas in the AI world, and Microsoft, through which ByteDance accesses OpenAI, has the same policy.

Why does this matter?

ByteDance’s use of OpenAI’s tech highlights the intense competition in the generative AI race. It also underscores how much integrity and transparency matter for progressing AI safely.

Source

What Else Is Happening in AI on December 18th, 2023

💡 Deloitte is turning towards AI to avoid mass layoffs in the future

The company plans to use AI to assess the skills of its existing employees and identify areas where they can be shifted to meet demand. This move comes after Deloitte hired 130,000 new staff members this year but warned thousands of US and UK employees that their jobs were at risk of redundancy due to restructuring. (Link)

🌐 Ola’s founder has announced an Indian LLM

This new multilingual LLM will have generative support for 10 Indian languages and will be able to take inputs in a total of 22 languages. It has been trained on over two trillion tokens of Indian-language data and will be trained on ‘Indian ethos and culture’. The company will also develop data centers, supercomputers for AI, and much more. (Link)

🧸 Grimes partnered with Curio Toys to create AI toys for children

Musician Grimes has partnered with toy company Curio to create a line of interactive AI plush toys for children. The toys, named Gabbo, Grem, and Grok, can converse with and “learn” the personalities of their owners. The toys require a Wi-Fi connection and come with an app that provides parents with a written transcript of conversations. (Link)

🔧 Agility uses LLMs to enhance communication with its humanoid robot- Digit

The company has created a demo space where Digit is given natural language commands of varying complexity to see if it can execute them. The robot is able to pick up a box of a specific color and move it to a designated tower, showcasing the potential of natural language communication in robotics. (Link)

🍔 CaliExpress is hailed as the world’s first autonomous AI restaurant

The eatery, set to open before the end of the year, will feature robots that can make hamburgers and French fries. However, the restaurant will still have human employees who will pack the food and interact with customers. (Link)

🚀 Jeff Bezos discusses plans for trillion people to live in huge cylindrical space stations

  • Jeff Bezos envisions humanity living in massive cylindrical space stations, as per his recent interview with Lex Fridman.
  • Bezos shared his aspiration for a trillion people to live in the solar system, facilitated by these space habitats, citing the potential to have thousands of Mozarts and Einsteins at any given time.
  • His vision contrasts with Elon Musk’s goal of establishing cities on planets like Mars, seeing Earth as a holiday destination and highlighting the future role of AI and Amazon’s influence in space living.
  • Source

Despite the denials, ‘your devices are listening to you,’ says ad company

  • An advertising company has recently claimed that it can deploy “active listening” technology through devices like smartphones and smart TVs to target ads based on voice data from everyday conversations.
  • This controversial claim suggests that these targeted advertisements can be directed at individuals using specific phrases they say, intensifying concerns about privacy and surveillance in the digital age.
  • The assertion highlights a growing debate about the balance between technological advancement in advertising and the imperative to protect individual privacy rights in an increasingly digital world.
  • Source

Tesla’s largest recall won’t fix Autopilot safety issues, experts say

  • Tesla agreed to a software update for 2 million cars to improve driver attention on Autopilot, though experts believe it doesn’t address the main issue of limiting where Autopilot can be activated.
  • The National Highway Traffic Safety Administration is still investigating Autopilot after over 900 crashes, but the recall only adds alerts without restricting the feature to designated highways.
  • Tesla’s recall introduces more “controls and alerts” for Autopilot use but does not prevent drivers from using it outside the intended operational conditions, despite safety concerns.
  • Source

A Daily Chronicle of AI Innovations in December 2023 – Day 16: AI Daily News – December 16th, 2023

🤖 OpenAI demos a control method for Superintelligent AI

🧠 DeepMind’s AI finds new solution to decades-old math puzzle

🛰 Amazon’s internet satellites will communicate using space lasers

📍 Google finally stops handing your location data to cops

🚗 GM removes Apple CarPlay and Android Auto from cars over safety concerns

OpenAI demos a control method for Superintelligent AI

  • OpenAI initiated a superalignment program to ensure future superintelligent AI aligns with human goals, and they aim to find solutions by 2027.
  • Researchers tested whether a less capable AI, GPT-2, could oversee a more powerful AI, GPT-4, finding the stronger AI could outperform its weaker supervisor, especially in NLP tasks.
  • OpenAI is offering $10 million in grants to encourage diverse approaches to AI alignment and to gather insights on supervising future superhuman AI models.
  • Source

DeepMind’s AI finds new solution to decades-old math puzzle

  • DeepMind’s AI, FunSearch, has found a new approach to the long-standing “cap set puzzle,” surpassing previous human-led solutions.
  • The FunSearch model uses a combination of a pre-trained language model and an evaluator to prevent the production of incorrect information.
  • This advancement in AI could inspire further scientific discovery by providing explainable solutions that assist ongoing research.
  • Source
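The cap set property itself is easy to verify, which is what makes an evaluator layer workable: a candidate set is valid only if no three distinct vectors sum to zero mod 3. A minimal checker in that spirit (an illustrative sketch, not DeepMind’s code):

```python
# A cap set in {0,1,2}^n contains no three distinct vectors that sum
# to the zero vector mod 3 (no "line"). An evaluator like this can
# mechanically accept or reject candidate constructions.
from itertools import combinations

def is_cap_set(points) -> bool:
    for a, b, c in combinations(set(points), 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False  # found a line: not a cap set
    return True

print(is_cap_set({(0, 0), (0, 1), (1, 0), (1, 1)}))  # True
print(is_cap_set({(0,), (1,), (2,)}))                # False (0+1+2 ≡ 0)
```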

Amazon’s internet satellites will communicate using space lasers

  • Amazon’s Project Kuiper is enhancing satellite internet by building a space-based mesh network using high-speed laser communications.
  • Successful tests have demonstrated quick data transfer speeds of up to 100 gigabits per second between satellites using optical inter-satellite links.
  • With plans for full deployment in 2024, Project Kuiper aims to provide fast and resilient internet connectivity globally, surpassing the capabilities of terrestrial fiber optics.
  • Source

Google finally stops handing your location data to cops

  • Google is changing how it collects location data, limiting its role in geofence warrants used by police.
  • Location data will remain on users’ phones if they choose Google’s tracking settings, enhancing personal privacy.
  • The change may reduce data available for police requests but may not impact Google’s use of data for advertising.
  • Source

GM removes Apple CarPlay and Android Auto from cars over safety concerns

  • GM plans to replace Apple CarPlay and Android Auto with its own infotainment system, citing stability issues and safety concerns.
  • The new system will debut in the 2024 Chevrolet Blazer EV, requiring drivers to use built-in apps rather than phone mirroring.
  • GM aims to integrate its infotainment system with its broader ecosystem, potentially increasing subscription revenue.
  • Source

DeepMind’s FunSearch: Google’s AI Unravels Mathematical Enigmas Once Deemed Unsolvable by Humans

DeepMind, a part of Google, has made a remarkable stride in AI technology with its latest innovation, FunSearch. This AI chatbot is not just adept at solving complex mathematical problems but also uniquely equipped with a fact-checking feature to ensure accuracy. This development is a dramatic leap forward in the realm of artificial intelligence.

Here’s a breakdown of its key features:

  1. Groundbreaking Fact-Checking Capability: Developed by Google’s DeepMind, FunSearch stands out with an evaluator layer, a novel feature that filters out incorrect AI outputs, enhancing the reliability and precision of its solutions.

  2. Addressing AI Misinformation: FunSearch tackles the prevalent issue of AI ‘hallucinations’ — the tendency to produce misleading or false results — ensuring a higher degree of trustworthiness in its problem-solving capabilities.

  3. Innovative Scientific Contributions: Beyond conventional AI models, FunSearch, a product of Google’s AI expertise, is capable of generating new scientific knowledge, especially in the fields of mathematics and computer science.

  4. Superior Problem-Solving Approach: The AI model demonstrates an advanced method of generating diverse solutions and critically evaluating them for accuracy, leading to highly effective and innovative problem-solving strategies.

  5. Broad Practical Applications: Demonstrating its superiority in tasks like the bin-packing problem, FunSearch, emerging from Google’s technological prowess, shows potential for widespread applications in various industries.

Source: (NewScientist)

A Daily Chronicle of AI Innovations in December 2023 – Day 15: AI Daily News – December 15th, 2023

💰 OpenAI granting $10M to solve the alignment problem
📹 Alibaba released ‘I2VGen-XL’ image-to-video AI
💻 Intel’s new Core Ultra CPUs bring AI capabilities to PCs

🎓 Elon Musk wants to open a university

🖼️ Midjourney to launch a new platform for AI image generation

🔬 Intel entering the ‘AI PC’ era with new chips

🚀 SpaceX blasts FCC as it refuses to reinstate Starlink’s $886 million grant

🌍 Threads launches for nearly half a billion more users in Europe

🛠️ Trains were designed to break down after third-party repairs, hackers find


OpenAI granting $10M to solve the alignment problem

OpenAI, in partnership with Eric Schmidt, is launching a $10 million grants program called “Superalignment Fast Grants” to support research on ensuring the alignment and safety of superhuman AI systems. They believe that superintelligence could emerge within the next decade, posing both great benefits and risks.

Existing alignment techniques may not be sufficient for these advanced AI systems, which will possess complex and creative behaviors beyond human understanding. OpenAI aims to bring together the best researchers and engineers to address this challenge and offers grants ranging from $100,000 to $2 million for academic labs, nonprofits, and individual researchers. They are also sponsoring a one-year fellowship for graduate students.

Why does this matter?

With $10M in new grants to tackle the alignment problem, OpenAI is catalyzing critical research to guide AI’s development proactively. By mobilizing top researchers now, years before advanced systems deployment, they have their sights set on groundbreaking solutions to ensure these technologies act for the benefit of humanity.

Source

Alibaba released ‘I2VGen-XL’ image-to-video AI

Alibaba released I2VGen-XL, a new image-to-video model capable of generating high-definition outputs. It uses cascaded diffusion models and static images as guidance to ensure alignment and enhance model performance.

The approach consists of two stages: a base stage for coherent semantics and content preservation, and a refinement stage for detail enhancement and resolution improvement. The model is optimized using a large dataset of text-video and text-image pairs, and the source code and models will be publicly available.

Why does this matter?

Generating videos from just images and text prompts – This level of control and alignment shows the immense creativity and personalization that generative video brings in sectors from media to marketing. This release brings another competitor to the expanding AI video-gen sector, with capabilities ramping up at a truly insane pace.

Source

Intel’s new Core Ultra CPUs bring AI capabilities to PCs

Intel has launched its Intel Core Ultra mobile processors, which bring AI capabilities to PCs. These processors offer improved power efficiency, compute and graphics performance, and an enhanced AI PC experience.

They will be used in over 230 AI PCs from partners such as Acer, ASUS, Dell, HP, Lenovo, and Microsoft Surface. Intel believes that by 2028, AI PCs will make up 80% of the PC market, and they are well-positioned to deliver this next generation of computing.

Why does this matter?

With dedicated AI acceleration capability spread across the CPU, GPU, and NPU architectures, Intel Core Ultra is the most AI-capable and power-efficient client processor in Intel’s history. If AI PCs make up 80% of the PC market by 2028, as Intel predicts, the company is well-positioned to deliver this next generation of computing.

Source

How to Run ChatGPT-like LLMs Locally on Your Computer in 3 Easy Steps

A Step-by-Step Tutorial for using LLaVA 1.5 and Mistral 7B on your Mac or Windows. Source.

What is llamafile?

Llamafile transforms LLM weights into executable binaries. This technology essentially packages both the model weights and the necessary code required to run an LLM into a single, multi-gigabyte file. This file includes everything needed to run the model, and in some cases, it also contains a full local server with a web UI for interaction. This approach simplifies the process of distributing and running LLMs on multiple operating systems and hardware architectures, thanks to its compilation using Cosmopolitan Libc.

This innovative approach simplifies the distribution and execution of LLMs, making it much more accessible for users to run these models locally on their own computers.

What is LLaVA 1.5?

LLaVA 1.5 is an open-source large multimodal model that supports text and image inputs, similar to GPT-4 Vision. It is trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.

What is Mistral 7B?

Mistral 7B is an open-source large language model with 7.3 billion parameters developed by Mistral AI. It excels in generating coherent text and performing various NLP tasks. Its unique sliding window attention mechanism allows for faster inference and handling of longer text sequences. Notable for its fine-tuning capabilities, Mistral 7B can be adapted to specific tasks, and it has shown impressive performance in benchmarks, outperforming many similar models.
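The sliding-window idea restricts each token’s attention to a fixed-size causal window. A toy mask construction illustrates the pattern (Mistral’s published window size is 4096; the small numbers here are only for readability):

```python
# Causal sliding-window attention mask: token i may attend to token j
# only when 0 <= i - j < window. This bounds per-token attention cost
# regardless of sequence length, enabling longer inputs and faster
# inference. Toy sizes below; Mistral 7B uses window = 4096.

def sliding_window_mask(seq_len: int, window: int):
    return [[0 <= i - j < window for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_mask(seq_len=5, window=3)
for row in mask:
    print(["x" if allowed else "." for allowed in row])
```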


Here’s how to start using LLaVA 1.5 or Mistral 7B on your own computer leveraging llamafile. Don’t get intimidated, the setup process is very straightforward!

Setting Up LLaVA 1.5

One Time Setup

  1. Open Terminal: Before beginning, you need to open the Terminal application on your computer. On a Mac, you can find it in the Utilities folder within the Applications folder, or you can use Spotlight (Cmd + Space) to search for “Terminal.”
  2. Download the LLaVA 1.5 llamafile: Pick your preferred option to download the llamafile for LLaVA 1.5 (around 4.26GB):
    1. Go to Justine’s repository of LLaVA 1.5 on Hugging Face and click download.
    2. Use this command in the Terminal:
      curl -LO https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llava-v1.5-7b-q4-server.llamafile
  3. Make the Binary Executable: Once downloaded, use the Terminal to navigate to the folder where the file was downloaded, e.g. Downloads, and make the binary executable:
    cd ~/Downloads
    chmod 755 llava-v1.5-7b-q4-server.llamafile

    For Windows, simply add .exe at the end of the file name.

Using LLaVA 1.5

Every time you want to use LLaVA on your computer, follow these steps:

  1. Run the Executable: Start the web server by executing the binary:
    ./llava-v1.5-7b-q4-server.llamafile

    This command will launch a web server on port 8080.

  2. Access the Web UI: To start using the model, open your web browser and navigate to http://127.0.0.1:8080/ (or click the link to open directly).

Terminating the process

Once you’re done using the LLaVA 1.5 model, you can terminate the process. To do this, return to the Terminal where the server is running. Simply press Ctrl + C. This key combination sends an interrupt signal to the running server, effectively stopping it.

Setting Up Mistral 7B

One Time Setup

  1. Open Terminal
  2. Download the Mistral 7B llamafile: Pick your preferred option to download the llamafile for Mistral 7B (around 4.37 GB):
    1. Go to Justine’s repository of Mistral 7B on Hugging Face and click download.
    2. Use this command in the Terminal:
      curl -LO https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/mistral-7b-instruct-v0.1-Q4_K_M-server.llamafile
  3. Make the Binary Executable: Once downloaded, use the Terminal to navigate to the folder where the file was downloaded, e.g. Downloads, and make the binary executable:
    cd ~/Downloads
    chmod 755 mistral-7b-instruct-v0.1-Q4_K_M-server.llamafile

    For Windows, simply add .exe at the end of the file name.

Using Mistral 7B

Every time you want to use Mistral 7B on your computer, follow these steps:

  1. Run the Executable: Start the web server by executing the binary:
    ./mistral-7b-instruct-v0.1-Q4_K_M-server.llamafile

    This command will launch a web server on port 8080.

  2. Access the Web UI: To start using the model, open your web browser and navigate to http://127.0.0.1:8080/ (or click the link to open directly).

Terminating the process

Once you’re done using the Mistral 7B model, you can terminate the process. To do this, return to the Terminal where the server is running. Simply press Ctrl + C. This key combination sends an interrupt signal to the running server, effectively stopping it.
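Besides the web UI, llamafile’s embedded server (built on llama.cpp) also exposes an HTTP completion endpoint, so you can script against it. The endpoint path and JSON fields below follow llama.cpp’s server API at the time of writing; treat them as assumptions and check your llamafile’s documentation if the request fails:

```python
# Query the locally running llamafile server over HTTP.
# Assumes the llama.cpp-style POST /completion endpoint with
# {"prompt": ..., "n_predict": ...}; verify against your version.

import json
import urllib.request

def build_request(prompt: str, n_predict: int = 64) -> urllib.request.Request:
    payload = {"prompt": prompt, "n_predict": n_predict}
    return urllib.request.Request(
        "http://127.0.0.1:8080/completion",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Requires the llamafile server to be running on port 8080.
    with urllib.request.urlopen(build_request("Why is the sky blue?")) as r:
        print(json.loads(r.read())["content"])
```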

Conclusion

The introduction of llamafile significantly simplifies the deployment and use of advanced LLMs like LLaVA 1.5 or Mistral 7B for personal, development, or research purposes. This tool opens up new possibilities in the realm of AI and machine learning, making it more accessible for a wider range of users.

Note: The first time you run a llamafile, you might be asked to install the command line developer tools; just click Install.

What Else Is Happening in AI on December 15th, 2023

🛠 Instagram introduces a new AI background editing tool for U.S.-based users

The tool allows users to change the background of their images through prompts for Stories. Users can choose from ready prompts or write their own prompts. When a user posts a Story with the newly generated background, others will see a “Try it” sticker with the prompt, allowing them also to use this tool. (Link)

🚀 Microsoft continues to advance tooling support in Azure AI Studio

They have made over 25 announcements at Microsoft Ignite, including adding 40 new models to the Azure AI model catalog, new multimodal capabilities in Azure OpenAI Service, and the public preview of Azure AI Studio. (Link)

🔍 Google is reportedly working on an AI assistant for Pixels called “Pixie”

It will use the information on a user’s phone, such as data from Maps and Gmail, to become a more “personalized” version of Google Assistant, according to a report from The Information. The feature could reportedly launch in the Pixel 9 and 9 Pro next year. (Link)

🧠 DeepMind’s AI has surpassed human mathematicians in solving unsolved combinatorics problems

This is the first time an LLM-based system has gone beyond existing knowledge in the field. Previous experiments have used LLMs to solve math problems with known solutions, but this breakthrough demonstrates the AI’s effectiveness in tackling unsolved problems. (Link)

💼 H&R Block announces AI tax filing assistant

The assistant answers users’ tax filing questions. Accessed through paid versions of H&R Block’s DIY tax software, the chatbot provides information on tax rules, exemptions, and other tax-related issues. It also directs users to human tax experts for personalized advice. (Link)

Elon Musk wants to open a university

  • Elon Musk aims to create a university in Austin, Texas, focusing on STEM education and offering hands-on learning experiences.
  • The university will be ‘dedicated to education at the highest levels,’ according to tax documents obtained by Bloomberg.
  • Musk’s educational plans also include opening STEM-focused K-12 schools, with potential for a Montessori-style institution within a planned town in Texas.
  • Source

🖼️ Midjourney to launch a new platform for AI image generation

  • Midjourney, a leading AI image generation service, has launched an alpha version of its website, allowing direct image creation for select users.
  • The new web interface offers a simpler user experience with visual settings adjustments and a gallery of past image generations.
  • Access to the alpha site is currently restricted to users who have created over 10,000 images on Midjourney, but it will expand to more users soon.
  • Source

🔬 Intel entering the ‘AI PC’ era with new chips

  • Intel unveils its new Core Ultra processors (part of the Meteor Lake lineup), enhancing power efficiency and performance with chiplets and integrated AI capabilities.
  • The Core Ultra 9 185H is Intel’s leading model featuring up to 16 cores, dedicated low power sections, built-in Arc GPU, and support for AI-enhanced tasks.
  • Various laptop manufacturers including MSI, Asus, Lenovo, and Acer are releasing new models with Intel’s Core Ultra chips, offering advanced specs, with availability now and through 2024.

Reducing LLM Hallucinations with Chain-of-Verification

Chain-of-Verification is a prompt engineering technique from Meta AI to reduce hallucinations in LLMs. Here is the white paper: https://arxiv.org/abs/2309.11495
How it works (from CoVe white paper):
1️⃣ Generate Baseline: Given a query, generate the response using the LLM.
2️⃣ Plan Verification(s): Given both query and baseline response, generate a list of verification questions that could help to self-analyze if there are any mistakes in the original response.
3️⃣ Execute Verification(s): Answer each verification question in turn, and hence check the answer against the original response to check for inconsistencies or mistakes.
4️⃣ Generate Final Response: Given the discovered inconsistencies (if any), generate a revised response incorporating the verification results.
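The four stages above can be sketched in a short Python pipeline. Here `llm` is a hypothetical callable (prompt in, text out) standing in for whatever model client you use; the prompt wording is illustrative, not taken from the paper:

```python
def chain_of_verification(query, llm):
    """Sketch of the four CoVe stages; `llm` is any callable prompt -> text."""
    # 1. Generate baseline response.
    baseline = llm(f"Answer the question: {query}")
    # 2. Plan verification questions about the baseline.
    plan = llm(
        "List verification questions that would reveal mistakes in this "
        f"answer.\nQuestion: {query}\nAnswer: {baseline}"
    )
    questions = [q.strip() for q in plan.splitlines() if q.strip()]
    # 3. Execute each verification question independently of the baseline,
    #    so the model cannot simply repeat its original mistake.
    verifications = [(q, llm(q)) for q in questions]
    # 4. Generate the final response, revised against the verification results.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return llm(
        f"Original question: {query}\nDraft answer: {baseline}\n"
        f"Verification results:\n{evidence}\nWrite a corrected final answer."
    )
```

The key design choice is step 3: each verification question is answered in a fresh call without the baseline in context, which is what gives the model a chance to catch its own hallucinations.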
I created a CoVe prompt template that you can use in any application – it’s a JSON-serializable config specifically for the AI settings of your app. It lets you separate the core application logic from the generative AI settings (prompts, model routing, and parameters).

Config components for CoVe:
1️⃣ GPT-4 + Baseline Generation prompt
2️⃣ GPT-4 + Verification prompt
3️⃣ GPT-4 + Final Response Generation prompt

Streamlit App Demo – https://chain-of-verification.streamlit.app/
Source code for the config – https://github.com/lastmile-ai/aiconfig

Generative AI Fundamentals Quiz:

Welcome to AI Unraveled, the podcast that demystifies frequently asked questions on artificial intelligence and keeps you up to date with the latest AI trends. In today’s episode, we’ll cover generative AI, unsupervised learning models, biases in machine learning systems, Google’s recommendation for responsible AI use, and the components of a transformer model.

Question 1: How does generative AI function?

Well, generative AI typically functions by using neural networks, which are a type of machine learning model inspired by the human brain. These networks learn to generate new outputs, such as text, images, or sounds, that resemble the training data they were exposed to. So, how does this work? It’s all about recognizing patterns and features in a large dataset.

You see, neural networks learn by being trained on a dataset that contains examples of what we want them to generate. For example, if we want the AI to generate realistic images of cats, we would train it on a large dataset of images of cats. The neural network analyzes these images to identify common features and patterns that make them look like cats.

Once the neural network has learned from this dataset, it can generate new images that resemble a cat. It does this by generating new patterns and features based on what it learned during training. It’s like the AI is using its imagination to create new things that it has never seen before, but that still look like cats because it learned from real examples.

So, the correct answer to this question is B. Generative AI uses a neural network to learn from a large dataset.
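As a toy illustration of “learn patterns from examples, then generate new samples,” here is a character-level Markov generator. It is a deliberately simplified stand-in for a neural network, not how modern generative models are actually built, but the learn-then-sample loop is the same idea:

```python
import random
from collections import defaultdict

def train_markov(text, order=3):
    """Count which character follows each `order`-character context.
    This is the 'learning patterns from the training data' step."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, order=3, length=40, rng=None):
    """Sample new text that imitates the patterns seen in training."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # context never seen during training: stop
            break
        out += rng.choice(choices)
    return out
```

Just as with the cat-image example, the output is new text the model has never literally seen, yet it resembles the training data because every step follows a pattern learned from it.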

Question 2: If you aim to categorize documents into distinct groups without having predefined categories, which type of machine learning model would be most appropriate?

Well, when it comes to categorizing documents into distinct groups without predefined categories, the most appropriate type of machine learning model is an unsupervised learning model. You might be wondering, what is unsupervised learning?

Unsupervised learning models are ideal for tasks where you need to find hidden patterns or intrinsic structures within unlabeled data. In the context of organizing documents into distinct groups without predefined categories, unsupervised learning techniques, such as clustering, can automatically discover these groups based on the similarities among the data.

Unlike supervised learning models, which require labeled data with predefined categories or labels to train on, unsupervised learning models can work with raw, unstructured data. They don’t require prior knowledge or a labeled dataset. Instead, they analyze the data to identify patterns and relationships on their own.

So, the correct answer to this question is D. An unsupervised learning model would be most appropriate for categorizing documents into distinct groups without predefined categories.
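A minimal sketch of unsupervised document grouping, using only word-overlap (Jaccard) similarity: pick the two most dissimilar documents as seeds, then assign every document to its closer seed. This is a toy stand-in for real clustering algorithms such as k-means; note that no labels are supplied anywhere:

```python
def word_set(doc):
    return set(doc.lower().split())

def jaccard(a, b):
    """Similarity between two word sets (1.0 = identical vocabulary)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_two(docs):
    """Split documents into two groups with no predefined categories."""
    sets = [word_set(d) for d in docs]
    # Seeds: the pair of documents with the lowest similarity.
    pairs = [(jaccard(sets[i], sets[j]), i, j)
             for i in range(len(docs)) for j in range(i + 1, len(docs))]
    _, s1, s2 = min(pairs)
    groups = {s1: [], s2: []}
    for i, s in enumerate(sets):
        seed = s1 if jaccard(s, sets[s1]) >= jaccard(s, sets[s2]) else s2
        groups[seed].append(docs[i])
    return list(groups.values())
```

Given a few sentences about cats and a few about dogs, this groups them by topic purely from vocabulary overlap, which is exactly the “discover hidden structure in unlabeled data” behavior the question describes.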

Question 3: Per Google’s AI Principles, does bias only enter into the system at specific points in the machine learning lifecycle?

The answer here is no, bias can potentially enter into a machine learning system at multiple points throughout the ML lifecycle. It’s not just limited to specific points.

Bias can enter during the data collection stage, the model design phase, the algorithm’s training process, and even during the interpretation of results. So, it’s not restricted to certain parts of the machine learning lifecycle. Bias can be a pervasive issue that requires continuous vigilance and proactive measures to mitigate throughout the entire lifecycle of the system.

Keeping bias in check is incredibly important when developing and deploying AI systems. It’s crucial to be aware of the potential biases that can be introduced and take steps to minimize them. This includes thorough data collection and examination, diverse training sets, and ongoing monitoring and evaluation.

So, the correct answer to this question is B. False. Bias can enter into the system at multiple points throughout the machine learning lifecycle.

Question 4: What measure does Google advocate for organizations to ensure the responsible use of AI?

When it comes to ensuring the responsible use of AI, Google advocates for organizations to seek participation from a diverse range of people. It’s all about inclusivity and diversity.

Google recommends that organizations engage a wide range of perspectives in the development and deployment of AI technologies. This diversity includes not just diversity in disciplines and skill sets, but also in background, thought, and culture. By involving individuals from various backgrounds, organizations can identify potential biases and ensure that AI systems are fair, ethical, and beneficial for a wide range of users.

While it’s important to focus on efficiency and use checklists to evaluate responsible AI, these measures alone cannot guarantee the responsible use of AI. Similarly, a top-down approach to increasing AI adoption might be a strategy for implementation, but it doesn’t specifically address the ethical and responsible use of AI.

So, the correct answer to this question is C. Organizations should seek participation from a diverse range of people to ensure the responsible use of AI.

Question 5: At a high level, what are the key components of a transformer model?

Ah, the transformer model, a powerful architecture used in natural language processing. So, what are its key components? At a high level, a transformer model consists of two main components: the encoder and the decoder.

The encoder takes the input data, such as a sequence of words in a sentence, and processes it. It converts the input into a format that the model can understand, often a set of vectors. The encoder’s job is to extract useful information from the input and transform it into a meaningful representation.

Once the input has been processed by the encoder, it’s passed on to the decoder. The decoder takes this processed input and generates the output. For example, in language models, the decoder can generate the next word in a sentence based on the input it received from the encoder.

This encoder-decoder architecture is particularly powerful in handling sequence-to-sequence tasks, such as machine translation or text summarization. It allows the model to understand the context of the input and generate coherent and meaningful output.

So, the correct answer to this question is D. The key components of a transformer model are the encoder and the decoder.
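The operation at the heart of both the encoder and the decoder is scaled dot-product attention. Here is a minimal, pure-Python version of that single operation (a teaching sketch, omitting the learned projection matrices, multiple heads, and layer stacking of a real transformer):

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: compare the query with every key,
    then mix the values according to the resulting weights."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    output = [sum(w * v[i] for w, v in zip(weights, values))
              for i in range(dim)]
    return output, weights
```

In the encoder, queries, keys, and values all come from the input sequence (self-attention); in the decoder, an additional attention step uses queries from the partial output and keys/values from the encoder, which is how the decoder “looks back” at the processed input.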

That’s it for the quiz! I hope you found this information helpful and it clarified some concepts related to generative AI and machine learning models. Keep exploring and learning, and don’t hesitate to ask if you have any more questions. Happy AI adventures!

So, we’ve got a super handy book for you called “AI Unraveled: Master GPT-4, Gemini, Generative AI & LLMs – Simplified Guide for Everyday Users”. It’s got all the quizzes mentioned earlier and even more!

Now, if you’re wondering where you can get your hands on this gem, we’ve got some great news. You can find it at Etsy, Shopify, Apple, Google, or even good old Amazon. They’ve got you covered no matter where you like to shop.

So, what are you waiting for? Don’t hesitate to grab your very own copy of “AI Unraveled” right now! Whether you’re a tech enthusiast or just curious about the world of artificial intelligence, this book is perfect for everyday users like you. Trust me, you won’t want to miss out on this simplified guide that’s packed with knowledge and insights. Happy reading!

In today’s episode, we explored the fascinating world of generative AI, unsupervised learning, biases in machine learning systems, responsible AI use, and the power of transformer models, while also recommending the book ‘AI Unraveled’ for further exploration. Join us next time on AI Unraveled as we continue to demystify frequently asked questions on artificial intelligence and bring you the latest trends in AI, including ChatGPT advancements and the exciting collaboration between Google Brain and DeepMind. Stay informed, stay curious, and don’t forget to subscribe for more!

A Daily Chronicle of AI Innovations in December 2023 – Day 14: AI Daily News – December 14th, 2023

🚀 Google’s new AI releases: Gemini API, MedLM, Imagen 2, MusicFX
🤖 Stability AI introduces Stable Zero123 for quality image-to-3D generation

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Master GPT-4, Gemini, Generative AI & LLMs – Simplified Guide for Everyday Users: Demystifying Artificial Intelligence – OpenAI, ChatGPT, Google Bard, AI ML Quiz, AI Certifications Prep,  Prompt Engineering,” available at Etsy, Shopify, Apple, Google, or Amazon

Google’s new AI releases: Gemini API, MedLM, Imagen 2, MusicFX

Google is introducing a range of generative AI tools and platforms for developers and Google Cloud customers.

  1. Gemini API in AI Studio and Vertex AI: Google is making Gemini Pro available for developers and enterprises to build for their own use cases. Right now, developers have free access to Gemini Pro and Gemini Pro Vision through Google AI Studio, with up to 60 requests per minute. Vertex AI developers can try the same models, with the same rate limits, at no cost until general availability early next year.
  2. Imagen 2 with text and logo generation: Imagen 2 now delivers significantly improved image quality and a host of features, including the ability to generate a wide variety of creative and realistic logos and render text in multiple languages.
  3. MedLM: It is a family of foundation models fine-tuned for the healthcare industry, generally available (via allowlist) to Google Cloud customers in the U.S. through Vertex AI. MedLM builds on Med-PaLM 2.
  4. MusicFX: It is a groundbreaking new experimental tool that enables users to generate their own music using AI. It uses Google’s MusicLM to generate the audio and DeepMind’s SynthID to embed a unique digital watermark in the outputs, ensuring the authenticity and origin of the creations.

Google also announced the general availability of Duet AI for Developers and Duet AI in Security Operations.

Why does this matter?

Google isn’t done yet. While its impressive Gemini demo from last week may have been staged, Google is looking to fine-tune and improve Gemini based on developers’ feedback. In addition, it is also racing with rivals to push the boundaries of AI in various fields.

Source

Stability AI introduces Stable Zero123 for quality image-to-3D generation

Stable Zero123 generates novel views of an object, demonstrating 3D understanding of the object’s appearance from various angles, all from a single image input. Its notably improved quality over Zero1-to-3 and Zero123-XL is due to improved training datasets and elevation conditioning.


The model is now released on Hugging Face to enable researchers and non-commercial users to download and experiment with it.

Why does this matter?

This marks a notable improvement in both quality and understanding of 3D objects compared to previous models, showcasing advancements in AI’s capabilities. It also sets the stage for a transformative year ahead in the world of Generative media.

Source

What Else Is Happening in AI on December 14th, 2023

📰OpenAI partners with Axel Springer to deepen beneficial use of AI in journalism.

Axel Springer is the first publishing house globally to partner with OpenAI on a deeper integration of journalism in AI technologies. The initiative will enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics, and explicitly values the publisher’s role in contributing to OpenAI’s products. (Link)

🧠Accenture and Google Cloud launch joint Generative AI Center of Excellence.

It will provide businesses with the industry expertise, technical knowledge, and product resources to build and scale applications using Google Cloud’s generative AI portfolio and accelerate time-to-value. It will also help enterprises determine the optimal LLM, including Google’s latest model, Gemini, to use based on their business objectives. (Link)

🤝Google Cloud partners with Mistral AI on generative language models.

Google Cloud and Mistral AI are partnering to allow the Paris-based generative AI startup to distribute its language models on the tech giant’s infrastructure. As part of the agreement, Mistral AI will use Google Cloud’s AI-optimized infrastructure, including TPU Accelerators, to further test, build, and scale up its LLMs. (Link)

🚫Amazon CTO shares how to opt out of 3rd party AI partner access to your Dropbox. Check out the tweet here (Link)

🌍Grok expands access to 40+ countries.

Earlier, it was only available to Premium+ subscribers in the US. Check out the list of countries here. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 13: AI Daily News – December 13th, 2023

🎉 Microsoft released Phi-2, an SLM that beats Llama 2
🔢 Anthropic has Integrated Claude with Google Sheets
📰 Channel 1 launches AI news anchors with superhuman abilities

🧠 AI built from living brain cells can recognise voices

🎮 Google loses antitrust trial against Epic Games

🌪️ Mistral shocks AI community as latest open source model eclipses GPT-3.5 performance

🔊 Meta unveils Audiobox, an AI that clones voices and generates ambient sounds

Microsoft released Phi-2, an SLM that beats Llama 2

Microsoft released Phi-2, a small language model with 2.7 billion parameters that outperforms Google’s Gemini Nano 2 and Meta’s Llama 2. Phi-2 is small enough to run on a laptop or mobile device and delivers less toxicity and bias in its responses compared to other models.


It was also able to correctly answer complex physics problems and correct students’ mistakes, similar to Google’s Gemini Ultra model.


Here is the comparison between the Phi-2 and Gemini Nano 2 models on Gemini’s reported benchmarks. Note, however, that Phi-2 is currently licensed for research purposes only and cannot be used commercially.

Why does this matter?

Microsoft’s Phi-2 proved that victory doesn’t always belong to the biggest models. Even though it is compact in size, Phi-2 can outperform much larger models on important tasks like interpretability and fine-tuning. Its combination of efficiency and capabilities makes it ideal for researchers to experiment with easily. Phi-2 showcases good reasoning and language understanding, particularly in math and calculations.


Anthropic has Integrated Claude with Google Sheets

Anthropic launches a new prompt engineering tool that makes Claude accessible via spreadsheets. This allows API users to test and refine prompts within their regular workflows and spreadsheets, facilitating easy collaboration with colleagues.

(This allows you to execute interactions with Claude directly in cells.)

Anthropic’s announcement covers everything you need to know and how to get started with it.

Why does this matter?

Refining Claude’s capabilities through specialization empowers domain experts rather than replacing them. The tool’s collaborative nature also unlocks Claude’s potential at scale. Partners can curate prompts within actual projects and then implement them across entire workflows via API.

Source

Channel 1 launches AI news anchors with superhuman abilities

Channel 1 will use AI-generated news anchors that have superhuman abilities. These photorealistic anchors can speak any language and even attempt humor.

They will curate personalized news stories based on individual interests, using AI to translate and analyze data. The AI can also create footage of events that were not captured by cameras.


While human reporters will still be needed for on-the-ground coverage, this AI-powered news network will provide personalized, up-to-the-minute updates and information.

Why does this matter?

It’s a quantum leap in broadcast technology. However, the true impact depends on the ethics behind these automated systems. As pioneers like Channel 1 shape the landscape, they must also establish its guiding principles. AI-powered news must put integrity first to earn public trust and benefit.

Source

🧠 AI built from living brain cells can recognise voices

  • Scientists created an AI system using living brain cells that can identify different people’s voices with 78% accuracy.
  • The new “Brainoware” technology may lead to more powerful and energy-efficient computers that emulate human brain structure and functions.
  • This advancement in AI and brain organoids raises ethical questions about the use of lab-grown brain tissue and whether it could ever be considered a person.
  • Source

🎮 Google loses antitrust trial against Epic Games

  • Google was unanimously found by a jury to have a monopoly with Google Play, losing the antitrust case brought by Epic Games.
  • Epic Games seeks to enable developers to create their own app stores and use independent billing systems, with a final decision pending in January.
  • Google contests the verdict and is set to argue that its platform offers greater choice in comparison to competitors like Apple.
  • Source

🌪️ Mistral shocks AI community as latest open source model eclipses GPT-3.5 performance

  • Mistral, a French AI startup, released a powerful open source AI model called Mixtral 8x7B that rivals OpenAI’s GPT-3.5 and Meta’s Llama 2.
  • The new AI model, Mixtral 8x7B, lacks safety guardrails, allowing for the generation of content without the content restrictions present in other models.
  • Following the release, Mistral secured a $415 million funding round, indicating continued development of even more advanced AI models.
  • Source

🔊 Meta unveils Audiobox, an AI that clones voices and generates ambient sounds

  • Meta unveiled Audiobox, an AI tool for creating custom voices and sound effects, building on their Voicebox technology and incorporating automatic watermarking.
  • The Audiobox platform provides advanced audio generation and editing capabilities, including the ability to distinguish generated audio from real audio to prevent misuse.
  • Meta is committed to responsible AI development, highlighting its collaboration in the AI Alliance for open-source AI innovation and accountable advancement in the field.
  • Source

What Else Is Happening in AI on December 13th, 2023

🤖 Tesla reveals its next-gen humanoid robot, Optimus Gen 2

It is designed to take over repetitive tasks from humans. Its updated design allows it to walk 30% faster and improves its balance. It also has brand-new hands that are strong enough to support significant weights and precise enough to handle delicate objects. Tesla plans to use the robot in its manufacturing operations and eventually sell it. (Link)

https://twitter.com/i/status/1734756150137225501

🦊 Mozilla launches MemoryCache, An on-device, personal model with local files

MemoryCache includes a Firefox extension for saving pages and notes, a shell script for monitoring changes in the saved files, and code for updating the Firefox SaveAsPDF API. The project is currently being tested on a gaming PC with an Intel i7-8700 processor using the privateGPT model. (Link)

🕶️ Meta rolling out multimodal AI features in the Ray-Ban smart glasses

The glasses’ virtual assistant can identify objects and translate languages, and users can summon it by saying, “Hey, Meta.” The AI assistant can also translate text, show image captions, and describe objects accurately. The test period will be limited to a small number of people in the US. (Link)

👻 Snapchat+ subscribers can now create & send AI images based on text prompts

The new feature allows users to choose from a selection of prompts or type in their own, and the app will generate an image accordingly. Subscribers can also use the Dream Selfie feature with friends, creating fantastical images of themselves in different scenarios. Additionally, subscribers can access a new AI-powered extend tool that fills in the background of zoomed-in images. (Link)

🧠 A New System reads minds using a sensor-filled helmet and AI

Scientists have developed a system that can translate a person’s thoughts into written words using a sensor-filled helmet and AI. It records the brain’s electrical activity through the scalp and converts it into text using an AI model called DeWave. Its accuracy is 40%, and recent data shows an improved accuracy of over 60%. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 12: AI Daily News – December 12th, 2023

🎥 Google introduces W.A.L.T, AI for photorealistic video generation
🌍 Runway introduces general world models
🤖 Alter3, a humanoid robot generating spontaneous motion using GPT-4

👀 Financial news site uses AI to copy competitors

🤖 New model enables robots to recognize and follow humans

🔬 Semiconductor giants race to make next generation of cutting-edge chips

💸 Nvidia emerges as leading investor in AI companies

🤝 Microsoft and labor unions form ‘historic’ alliance on AI

Google introduces W.A.L.T, AI for photorealistic video generation

Researchers from Google, Stanford, and Georgia Institute of Technology have introduced W.A.L.T, a diffusion model for photorealistic video generation. The model is a transformer trained on image and video generation in a shared latent space. It can generate photorealistic, temporally consistent motion from natural language prompts and also animate any image.

It has two key design decisions. First, it uses a causal encoder to compress images and videos in a shared latent space. Second, for memory and training efficiency, it uses a window attention-based transformer architecture for joint spatial and temporal generative modeling in latent space.

Why does this matter?

The end of the traditional filmmaking process may be near… W.A.L.T’s results are incredibly coherent and stable. While there are no human-like figures or representations in the output here, it might be possible quite soon (we just saw Animate Anyone a few days ago, which can create an animation of a person using just an image).

Source

Runway introduces general world models

Runway is starting a new long-term research effort around what it calls general world models. The belief behind this is that the next major advancement in AI will come from systems that understand the visual world and its dynamics.

A world model is an AI system that builds an internal representation of an environment and uses it to simulate future events within that environment. You can think of Gen-2 as a very early and limited form of a general world model; it still struggles with complex camera or object motions, among other things.

Why does this matter?

Research in world models has so far been focused on very limited and controlled settings, either in toy-simulated worlds (like those of video games) or narrow contexts (world models for driving). Runway aims to represent and simulate a wide range of situations and interactions, like those encountered in the real world. It would also involve building realistic models of human behavior, empowering AI systems further.

Source

Alter3, a humanoid robot generating spontaneous motion using GPT-4

Researchers from Tokyo integrated GPT-4 into their proprietary android, Alter3, thereby effectively grounding the LLM with Alter’s bodily movement.

Typically, low-level robot control is hardware-dependent and falls outside the scope of LLM corpora, presenting challenges for direct LLM-based robot control. However, in the case of humanoid robots like Alter3, direct control is feasible by mapping the linguistic expressions of human actions onto the robot’s body through program code.

Remarkably, this approach enables Alter3 to adopt various poses, such as a ‘selfie’ stance or ‘pretending to be a ghost,’ and generate sequences of actions over time without explicit programming for each body part. This demonstrates the robot’s zero-shot learning capabilities. Additionally, verbal feedback can adjust poses, obviating the need for fine-tuning.

Why does this matter?

It signifies a step forward in AI-driven robotics. It can foster the development of more intuitive, responsive, and versatile robotic systems that can understand human instructions and dynamically adapt their actions. Advances in this can revolutionize diverse fields, from service robotics to manufacturing, healthcare, and beyond.

Source

👀 Financial news site uses AI to copy competitors

  • A major financial news website, Investing.com, is using AI to generate stories that closely mimic those from competitor sites without giving credit.
  • Investing.com’s AI-written articles often replicate the same data and insights found in original human-written content, raising concerns about copyright.
  • While the site discloses its use of AI for content creation, it fails to attribute the original sources, differentiating it from typical news aggregators.
  • Source

🤖 New model enables robots to recognize and follow humans

  • Italian researchers developed a new computational model enabling robots to recognize and follow specific users based on a refined analysis of images captured by RGB cameras.
  • Robots using this framework can operate on commands given through a user’s hand gestures and have shown robust performance in identifying people even in crowded spaces.
  • Although effective, the model must be recalibrated if a person’s appearance changes significantly, and future improvements may include advanced learning methods for greater adaptability.
  • Source

💸 Nvidia emerges as leading investor in AI companies

  • Nvidia has significantly increased its investments in AI startups in 2023, participating in 35 deals, which is almost six times more than in 2022, making it the most active large-scale investor in the AI sector.
  • The investments by Nvidia, primarily through its venture arm NVentures, target companies that are also its customers, with interests in AI platforms and applications in various industries like healthcare and energy.
  • Nvidia’s strategy involves both seeking healthy returns and strategic partnerships, but denies prioritizing its portfolio companies for chip access, despite investing in high-profile AI companies like Inflection AI and Cohere.
  • Source

🤝 Microsoft and labor unions form ‘historic’ alliance on AI

  • Microsoft is partnering with the AFL-CIO labor union to facilitate discussions on artificial intelligence’s impact on the workforce.
  • The collaboration will include training for labor leaders and workers on AI, with the aim of shaping AI technology by incorporating workers’ perspectives.
  • This alliance is considered historic as it promises to influence public policy and the future of AI in relation to jobs and unionization at Microsoft.
  • Source

What Else Is Happening in AI on December 12th, 2023

🍔An AI chatbot will take your order at more Wendy’s drive-thrus.

Wendy’s is expanding its test of an AI-powered chatbot that takes orders at the drive-thru. Franchisees will get the chance to test the product in 2024. The tool, powered by Google Cloud’s AI software, is currently active in four company-operated restaurants near Columbus, Ohio. (Link)

🤝Microsoft and Labor Unions form a ‘historic’ alliance on AI and its work impact.

Microsoft is teaming up with labor unions to create “an open dialogue” on how AI will impact workers. It is forming an alliance with the American Federation of Labor and Congress of Industrial Organizations, which comprises 60 labor unions representing 12.5 million workers. Microsoft will also train workers on how the tech works. (Link)

🇻🇳Nvidia to expand ties with Vietnam, and support AI development.

The chipmaker will expand its partnership with Vietnam’s top tech firms and support the country in training talent for developing AI and digital infrastructure. Reuters reported last week Nvidia was set to discuss cooperation deals on semiconductors with Vietnamese tech companies and authorities in a meeting on Monday. (Link)

🛠️OpenAI is working to make GPT-4 less lazy.

The company acknowledged on Friday that ChatGPT has been phoning it in lately (again) and is fixing it. Overnight, it made a series of posts about the chatbot training process, saying it must evaluate the model using certain metrics (AI benchmarks, you might say), calling it “an artisanal multi-person effort.” (Link)

This is how much AI Engineers earn in top companies


A Daily Chronicle of AI Innovations in December 2023 – Day 11: AI Daily News – December 11th, 2023

🚀 Google releases NotebookLM with Gemini Pro
✨ Mistral AI’s torrent-based release of new Mixtral 8x7B
🤖 Berkeley Research’s real-world humanoid locomotion

😴 OpenAI says it is investigating reports ChatGPT has become ‘lazy’

👀 Grok AI was caught plagiarizing ChatGPT

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon

AI Unraveled – Master GPT-4, Gemini, Generative AI, LLMs: A simplified Guide For Everyday Users

Google releases NotebookLM with Gemini Pro

Google on Friday announced that NotebookLM, its experimental AI-powered note-taking app, is now available to users in the US. The app is also getting many new features with Gemini Pro integration. Here are a few highlights:

Save interesting exchanges as notes
A new noteboard space where you can easily pin quotes from the chat, excerpts from your sources, or your own written notes. Like before, NotebookLM automatically shares citations from your sources whenever it answers a question. But now you can quickly jump from a citation to the source, letting you see the quote in its original context.


Helpful suggested actions

When you select a passage while reading a source, NotebookLM will automatically offer to summarize the text to a new note or help you understand technical language or complicated ideas.

Various formats for different writing projects

It has new tools to help you organize your curated notes into structured documents. Simply select a set of notes you’ve collected and ask NotebookLM to create something new. It will automatically suggest a few formats, but you can type any instructions into the chat box.


Read everything about what’s new.

Why does this matter?

Google’s NotebookLM, now powered by Gemini Pro, changes how you work with documents. It offers automated summaries, suggested actions, and structured note organization, turning passive reading into smarter, more productive document engagement.

Source

Mistral AI’s torrent-based release of Mixtral 8x7B

Mistral AI has released its latest LLM, Mixtral 8x7B, via a torrent link. It is a high-quality sparse mixture of experts model (SMoE) with open weights. It outperforms Llama 2 70B on most benchmarks with 6x faster inference and matches or outperforms GPT3.5. It is pre-trained on data from the open Web.
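
The sparse mixture-of-experts idea is what lets Mixtral run at roughly the cost of a much smaller dense model: each token is routed to only 2 of the 8 expert sub-networks. Here is a minimal, hypothetical sketch of top-2 routing with toy linear "experts" (my own illustration, not Mistral's actual code):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, experts, gate_weights, k=2):
    # Router: a linear layer scores every expert for this token.
    scores = [sum(w * t for w, t in zip(row, token)) for row in gate_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    gates = softmax([scores[i] for i in top])  # renormalize over the winners
    # Only the k selected experts actually run; the other 6 cost nothing.
    outputs = [experts[i](token) for i in top]
    return [sum(g * out[d] for g, out in zip(gates, outputs))
            for d in range(len(token))]

# Toy setup: 8 "experts" that just scale the token by i.
experts = [lambda t, s=i: [s * x for x in t] for i in range(8)]
gate_weights = [[float(i), 0.0, 0.0, 0.0] for i in range(8)]
out = moe_layer([1.0, 0.0, 0.0, 0.0], experts, gate_weights)
```

Because only two experts fire per token, total parameter count (all 8 experts) and per-token compute (2 experts) come apart, which is why Mixtral can match much larger dense models at lower inference cost.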


Why does this matter?

Mixtral 8x7B outperforms bigger counterparts like Llama 2 70B and matches/exceeds GPT3.5 by maintaining the speed and cost of a 12B model. It is a leap forward in AI model efficiency and capability.

Source

Berkeley Research’s real-world humanoid locomotion

Berkeley Research has released a new paper that discusses a learning-based approach for humanoid locomotion, which has the potential to address labor shortages, assist the elderly, and explore new planets. The controller used is a Transformer model that predicts future actions based on past observations and actions.


The model is trained using large-scale reinforcement learning in simulation, allowing for parallel training across multiple GPUs and thousands of environments.

Why does this matter?

Berkeley Research’s novel approach to humanoid locomotion will help with vast real-world implications. This innovation holds promise for addressing labor shortages, aiding the elderly, and much more.

Source

😴 OpenAI says it is investigating reports ChatGPT has become ‘lazy’

  • OpenAI acknowledges user complaints that ChatGPT seems “lazy,” providing incomplete answers or refusing tasks.
  • Users speculate that OpenAI might have altered ChatGPT to be more efficient and reduce computing costs.
  • Despite user concerns, OpenAI confirms no recent changes to ChatGPT and is investigating the unpredictable behavior.
  • Source

👀 Grok AI was caught plagiarizing ChatGPT

  • Elon Musk’s new AI, Grok, had a problematic launch with reports of it mimicking competitor ChatGPT and espousing viewpoints Musk typically opposes.
  • An xAI engineer explained that Grok inadvertently learned from ChatGPT’s output on the web, resulting in some overlapping behaviors.
  • The company recognized the issue as rare and promised that future versions of Grok will not repeat the error, denying any use of OpenAI’s code.
  • Source

What Else Is Happening in AI on December 11th, 2023

🤝 OpenAI connects with Rishi Jaitly, former head of Twitter India, to engage with Indian government on AI regulations

OpenAI has enlisted the help of former Twitter India head Rishi Jaitly as a senior advisor to facilitate discussions with the Indian government on AI policy. OpenAI is also looking to establish a local team in India. Jaitly has been assisting OpenAI in navigating the Indian policy and regulatory landscape. (Link)

🌐 EU Strikes a deal to regulate ChatGPT

The European Union has reached a provisional deal on landmark rules governing the use of AI. The deal includes regulations on the use of AI in biometric surveillance and the regulation of AI systems like ChatGPT. (Link)

💻 Microsoft is reportedly planning to release Windows 12 in the 2nd half of 2024

This update, codenamed “Hudson Valley,” will strongly focus on AI and is currently being tested in the Windows Insider Canary channel. Key features of Hudson Valley include an AI-driven Windows Shell and an advanced AI assistant called Copilot, which will improve functions such as search, application launches, and workflow management. (Link)

💬 Google’s Gemini received mixed reviews after a demo video went viral

However, it was later revealed that the video was faked, using carefully selected text prompts and still images to misrepresent the model’s capabilities. While Gemini can generate the responses shown in the video, viewers were misled about the speed, accuracy, and mode of interaction. (Link)

💰 Seattle’s biotech hub secures $75M from tech billionaires to advance ‘DNA typewriter’ tech

Seattle’s biotech hub, funded with $75M from the Chan-Zuckerberg Initiative and the Allen Institute, is researching “DNA typewriters” that could revolutionize our understanding of biology. The technology involves using DNA as a storage medium for information, allowing researchers to track a cell’s experiences over time. (Link)

How to Find any public GPT by using Boolean search?


Below is a method to find ALL the public GPTs. You can use Boolean methodology to search any GPT.

Example Boolean string to paste into Google (this includes every single GPT that is public): site:*.openai.com/g

https://www.google.com/search?q=site%3A*.openai.com%2Fg


Let’s say you want to search for something specific: just change the word Canada in the following string to whatever you want. You can add words as long as they are separated by Boolean operators (OR, AND, etc.).

site:*.openai.com/g “canada”

https://www.google.com/search?q=site%3A*.openai.com%2Fg+%22canada%22


And for something more complex:

site:*.openai.com/g French AND (Translate OR Translator OR Traducteur OR Traduction)

https://www.google.com/search?q=site%3A*.openai.com%2Fg+French+AND+%28Translate+OR+Translator+OR+Traducteur+OR+Traduction%29


You could even use this methodology to build a GPT that searches for GPTs.

I’m honestly surprised not more people know about Boolean searching.
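
The strings above are also easy to generate programmatically. A small sketch (the function name is mine, not from the post) that percent-encodes any Boolean query into a Google search URL:

```python
from urllib.parse import quote_plus

def gpt_search_url(terms):
    # Restrict Google to public GPT pages, then append the user's
    # Boolean terms (quoted phrases, AND/OR groups, etc.).
    query = 'site:*.openai.com/g ' + terms
    return 'https://www.google.com/search?q=' + quote_plus(query)

url = gpt_search_url('"canada"')
```

Note that `quote_plus` percent-encodes `:`, `*`, `/`, and quotes, and turns spaces into `+`, which Google accepts just as well as the literal operators.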

A Daily Chronicle of AI Innovations in December 2023 – Day 09-10: AI Daily News – December 10th, 2023

🤖 EU agrees ‘historic’ deal with world’s first laws to regulate AI

🤔 Senior OpenAI employees claimed Sam Altman was ‘psychologically abusive’

🙅‍♀️ Apple has seemingly found a way to block Android’s new iMessage app

🤖 EU agrees ‘historic’ deal with world’s first laws to regulate AI

  • European negotiators have agreed on a historic deal to regulate artificial intelligence after intense discussions.
  • The new laws, set to take effect no earlier than 2025, include a tiered risk-based system for AI regulation and provisions for AI-driven surveillance, with strict restrictions and exceptions for law enforcement.
  • Though the agreement still requires approval from the European Parliament and member states, it signifies a significant move towards governing AI in the western world.
  • Source

🤔 Senior OpenAI employees claimed Sam Altman was ‘psychologically abusive’

  • Senior OpenAI employees accused CEO Sam Altman of being “psychologically abusive,” causing chaos, and pitting employees against each other, leading to his temporary dismissal.
  • Allegations also included Altman misleading the board to oust board member Helen Toner, and concerns about his honesty and management style prompted a board review.
  • Despite these issues, Altman was reinstated as CEO following a demand by the senior leadership team and the resignation of most board members, including co-founder Ilya Sutskever, who later expressed regret over his involvement in the ousting.
  • Source

🙅‍♀️ Apple has seemingly found a way to block Android’s new iMessage app

  • Apple has stopped Beeper, a service that enabled iMessage-like features on Android, and faced no EU regulatory action.
  • Efforts by Nothing and Beeper to bring iMessage to Android failed due to security issues and Apple’s intervention.
  • Apple plans to support RCS messaging next year, improving Android-to-iPhone messages without using iMessage.
  • Source

🧬 CRISPR-based gene editing therapy approved by the FDA for the first time

  • The FDA approved two new sickle cell disease treatments, including the first-ever CRISPR genome editing therapy, Casgevy, for patients 12 and older.
  • Casgevy utilizes CRISPR/Cas9 technology to edit patients’ stem cells, which are then reinfused after a chemotherapy process to create healthy blood cells.
  • These groundbreaking treatments show promising results, with significant reductions in severe pain episodes for up to 24 months in clinical studies.
  • Source

The FTC is scrutinizing Microsoft’s $13 billion investment in OpenAI for potential antitrust issues, alongside UK’s CMA concerns regarding market dominance. Source

Mistral AI disrupts traditional release strategies by unexpectedly launching their new open source LLM via torrent, sparking considerable community excitement. Source

A Daily Chronicle of AI Innovations in December 2023 – Day 8: AI Daily News – December 08th, 2023

🌟 Stability AI reveals StableLM Zephyr 3B, 60% smaller yet accurate
🦙 Meta launches Purple Llama for Safe AI development
👤 Meta released an update to Codec Avatars with lifelike animated faces

Stability AI reveals StableLM Zephyr 3B, 60% smaller yet accurate

StableLM Zephyr 3B is a new addition to StableLM, a series of lightweight Large Language Models (LLMs). It is a 3 billion parameter model that is 60% smaller than 7B models, making it suitable for edge devices without high-end hardware. The model has been trained on various instruction datasets and optimized using the Direct Preference Optimization (DPO) algorithm.

It generates contextually relevant and accurate text well, surpassing larger models in similar use cases. StableLM Zephyr 3B can be used for a wide range of linguistic tasks, from Q&A-type tasks to content personalization, while maintaining its efficiency.
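
The DPO objective mentioned above trains directly on human preference pairs, with no separate reward model. A minimal version of the per-pair loss (my own toy implementation under the standard DPO formulation, not Stability's training code):

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Inputs are log-probabilities of the chosen/rejected responses under
    # the policy being trained and under a frozen reference model.
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# When policy and reference agree, the loss sits at -log(0.5).
baseline = dpo_loss(-1.0, -2.0, -1.0, -2.0)
# Raising the chosen response's probability lowers the loss.
improved = dpo_loss(-0.5, -2.0, -1.0, -2.0)
```

Minimizing this pushes the policy to prefer the chosen response over the rejected one relative to the reference, which is how small models like Zephyr variants get instruction-following quality cheaply.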

Why does this matter?

Tested on platforms like MT Bench and AlpacaEval, StableLM Zephyr 3B shows it can create text that makes sense, fits the context, and is linguistically accurate. In these tests, it competes well with bigger models like Falcon-4b-Instruct, WizardLM-13B-v1, Llama-2-70b-chat, and Claude-V1.

Source

Meta launches Purple Llama for Safe AI development

Meta has announced the launch of Purple Llama, an umbrella project aimed at promoting the safe and responsible development of AI models. Purple Llama will provide tools and evaluations for cybersecurity and input/output safeguards. The project aims to address risks associated with generative AI models by taking a collaborative approach known as purple teaming, which combines offensive (red team) and defensive (blue team) strategies.

The cybersecurity tools will help reduce the frequency of insecure code suggestions and make it harder for AI models to generate malicious code. The input/output safeguards include an openly available foundational model called Llama Guard to filter potentially risky outputs.

This model has been trained on a mix of publicly available datasets to enable the detection of common types of potentially risky or violating content that may be relevant to a number of developer use cases. Meta is working with numerous partners to create an open ecosystem for responsible AI development.

Why does this matter?

Meta’s strategic shift toward AI underscores its commitment to ethical AI. Their collaborative approach to building a responsible AI environment emphasizes the importance of enhancing AI safety, which is crucial in today’s rapidly evolving tech landscape.

Source

Meta released an update to Codec Avatars with lifelike animated faces

Meta Research’s work presents Relightable Gaussian Codec Avatars, a method to create high-quality animated head avatars with realistic lighting and expressions. The avatars capture fine details like hair strands and pores using a 3D Gaussian geometry model. A novel relightable appearance model allows for real-time relighting with all-frequency reflections.

The avatars also have improved eye reflections and explicit gaze control. The method outperforms existing approaches without sacrificing real-time performance. The avatars can be rendered in real-time from any viewpoint in VR and support interactive point light control and relighting in natural illumination.

Why does this matter?

With the help of Codec Avatars, this technology could soon enable us to communicate with someone as if they were sitting across from us, even if they’re miles apart. It also produces incredibly detailed real-time avatars, precise down to individual hair strands!

Source

Nudify Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity

  • Apps and websites that use artificial intelligence to undress women in photos are gaining popularity, with millions of people visiting these sites.

  • The rise in popularity is due to the release of open source diffusion models that create realistic deepfake images.

  • These apps are part of the concerning trend of non-consensual pornography, as the images are often taken from social media without consent.

  • Privacy experts are worried that advances in AI technology have made deepfake software more accessible and effective.

  • There is currently no federal law banning the creation of deepfake pornography.

Source: https://time.com/6344068/nudify-apps-undress-photos-women-artificial-intelligence/

What Else Is Happening in AI on December 08th, 2023

🤑 AMD predicts the market for its data center AI processors will reach $45B

An increase from its previous estimate of $30B, the company also announced the launch of 2 new AI data center chips from its MI300 lineup, one for generative AI applications and another for supercomputers. AMD expects to generate $2B in sales from these chips by 2024. (Link)

📱 Inflection AI’s Pi is now available on Android!

The Android app is available in 35 countries and offers text and hands-free calling features. Pi can be accessed through WhatsApp, Facebook Messenger, Instagram DM, and Telegram. The app also introduces new features like back-and-forth conversations and the ability to choose from 6 different voices. (Link)

🚀 X started rolling out Grok to X Premium users in the US

Grok uses a generative model called Grok-1, trained on web data and feedback from human assistants. It can also incorporate real-time data from X posts, giving it an advantage over other chatbots in providing up-to-date information. (Link)

🎨 Google Chrome could soon let you use AI to create a personalized theme

The latest version of Google Chrome Canary includes a new option called ‘Create a theme with AI’, which replaces the ‘Wallpaper search’ option. An ‘Expanded theme gallery’ option will also be available, offering advanced wallpaper search options. (Link)

🖼️ Pimento uses AI to turn creative briefs into visual mood boards

French startup Pimento has raised $3.2M for its gen AI tool that helps creative teams with ideation, brainstorming, and moodboarding. The tool allows users to compile a reference document with images, text, and colors that will inspire and guide their projects. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 7: AI Daily News – December 07th, 2023

🚀 Google launches Gemini, its largest, most capable model yet
📱 Meta’s new image AI and core AI experiences across its apps family
🛠️ Apple quietly releases a framework, MLX, to build foundation models


Google launches Gemini, its largest, most capable model yet

It looks like ChatGPT’s ultimate competitor is here. After much anticipation, Google has launched Gemini, its most capable and general model yet. Here’s everything you need to know:

  • Built from the ground up to be multimodal, it can generalize and understand, operate across and combine different types of information, including text, code, audio, image, and video. (Check out this incredible demo)
  • Its first version, Gemini 1.0, is optimized for different sizes: Ultra for highly complex tasks, Pro for scaling across a wide range of tasks, and Nano as the most efficient model for on-device tasks.
  • Gemini Ultra’s performance exceeds current SoTA results on 30 of the 32 widely-used academic benchmarks used in LLM R&D.
  • With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU.

  • It has next-gen capabilities– sophisticated reasoning, advanced math and coding, and more.
  • Gemini 1.0 is now rolling out across a range of Google products and platforms– Pro in Bard (Bard will now be better and more usable), Nano on Pixel, and Ultra will be rolling out early next year.

Why does this matter?

Gemini outperforms GPT-4 on a range of multimodal benchmarks, including text and coding. Gemini Pro outperforms GPT-3.5 on 6/8 benchmarks, making it the most powerful free chatbot out there today. It highlights Gemini’s native multimodality that can threaten OpenAI’s dominance and indicate early signs of Gemini’s more complex reasoning abilities.

However, the true test of Gemini’s capabilities will come from everyday users. We’ll have to wait and see if it helps Google catch up to OpenAI and Microsoft in the race to build great generative AI.

Source

Meta’s new image AI and core AI experiences across its apps family

Meta is rolling out a new, standalone generative AI experience on the web, Imagine with Meta, that creates images from natural language text prompts. It is powered by Meta’s Emu and creates 4 high-resolution images per prompt. It’s free to use (at least for now) for users in the U.S. It is also rolling out invisible watermarking to it.

Meta is also testing more than 20 new ways generative AI can improve your experiences across its family of apps– spanning search, social discovery, ads, business messaging, and more. For instance, it is adding new features to the messaging experience while also leveraging it behind the scenes to power smart capabilities.

Another instance, it is testing ways to easily create and share AI-generated images on Facebook.

Why does this matter?

Meta has been at the forefront of AI research, which will help unlock new capabilities in its products over time, much like the other Big Tech companies. And while it is still just scratching the surface of what AI can do, it is continually listening to people’s feedback and improving.

Source

Apple quietly releases a framework to build foundation models

Apple’s ML research team released MLX, a machine learning framework where developers can build models that run efficiently on Apple Silicon and deep learning model library MLX Data. Both are accessible through open-source repositories like GitHub and PyPI.

MLX is intended to be easy to use for developers but has enough power to train AI models like Meta’s Llama and Stable Diffusion. Apple’s demo video shows a Llama v1 7B model implemented in MLX and running on an M2 Ultra.

Why does this matter?

Frameworks and model libraries power many of the AI apps in the market now. And Apple, though seen as conservative, has joined the fray with frameworks and model libraries tailored for its chips, potentially enabling generative AI applications on MacBooks. With MLX, you can:

  • Train a Transformer LM or fine-tune with LoRA
  • Text generation with Mistral
  • Image generation with Stable Diffusion
  • Speech recognition with Whisper

Source

What Else Is Happening in AI on December 07th, 2023

💻Google unveils AlphaCode 2, powered by Gemini.

It is an improved version of the code-generating AlphaCode introduced by Google’s DeepMind lab roughly a year ago. In a subset of programming competitions hosted on Codeforces, a platform for programming contests, AlphaCode 2– coding in languages Python, Java, C++, and Go– performed better than an estimated 85% of competitors. (Link)

☁️Google announces the Cloud TPU v5p, its most powerful AI accelerator yet.

With Gemini’s launch, Google also announced the Cloud TPU v5p, a successor to the Cloud TPU v5e that entered general availability earlier this year. A v5p pod consists of a total of 8,960 chips and is backed by Google’s fastest interconnect yet, with up to 4,800 Gbps per chip. Google observed 2X speedups for LLM training workloads using TPU v5p vs. v4. (Link)

🚀AMD’s Instinct MI300 AI chips to challenge Nvidia; backed by Microsoft, Dell, And HPE.

The chips– which are also getting support from Lenovo, Supermicro, and Oracle– represent AMD’s biggest challenge yet to Nvidia’s AI computing dominance. It claims that the MI300X GPUs, which are available in systems now, come with better memory and AI inference capabilities than Nvidia’s H100. (Link)

🍟McDonald’s will use Google AI to make sure your fries are fresh, or something?

McDonald’s is partnering with Google to deploy generative AI beginning in 2024 and will be able to use GenAI on massive amounts of data to optimize operations. At least one outcome will be– according to the company– “hotter, fresher food” for customers. While that’s unclear, we can expect more AI-driven automation at the drive-throughs. (Link)

🔒Gmail gets a powerful AI update to fight spam with the ‘RETVec’ feature.

The update, known as RETVec (Resilient and Efficient Text Vectorizer), helps make text classifiers more efficient and robust. It works conveniently across all languages and characters. Google has made it open-source, allowing developers to use its capabilities to invent resilient and efficient text classifiers for server-side and on-device applications. (Link)
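
RETVec’s core idea is to encode raw characters into small fixed-size vectors instead of relying on a word vocabulary, which is what makes it work across all languages and resist adversarial spellings. A toy illustration of that idea (this is not Google’s actual RETVec encoder, just the vocabulary-free principle):

```python
def char_to_bits(ch, dim=16):
    # Map any Unicode codepoint to a fixed-width binary vector, so every
    # character -- any script, emoji, or homoglyph -- gets an embedding
    # without a lookup table a spammer could route around.
    cp = ord(ch)
    return [(cp >> i) & 1 for i in range(dim)]

def vectorize(text, max_len=8, dim=16):
    rows = [char_to_bits(c, dim) for c in text[:max_len]]
    rows += [[0] * dim] * (max_len - len(rows))  # pad to a fixed shape
    return rows

vec = vectorize("héllo")
```

Because a visually-similar substitute character still lands near the original in such an encoding, downstream classifiers become much harder to evade with deliberate misspellings.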

A Daily Chronicle of AI Innovations in December 2023 – Day 6: AI Daily News – December 06th, 2023

🎉 Microsoft Copilot celebrates the first year with significant new innovations
🔍 Bing’s new “Deep Search” finds deeper, relevant results for complex queries
🧠 DeepMind’s new way for AI to learn from humans in real-time

Microsoft Copilot celebrates the first year with significant new innovations

Celebrating the first year of Microsoft Copilot, Microsoft announced several new features that are beginning to roll out:

  • GPT-4 Turbo is coming soon to Copilot: It will be able to generate responses using GPT-4 Turbo, enabling it to take in more “data” with 128K context window. This will allow Copilot to better understand queries and offer better responses.
  • New DALL-E 3 Model: You can now use Copilot to create images that are even higher quality and more accurate to the prompt with an improved DALL-E 3 model. Here’s a comparison.
  • Multi-Modal with Search Grounding: Combining the power of GPT-4 with vision with Bing image search and web search data to deliver better image understanding for your queries. The results are pretty impressive.
  • Code Interpreter: A new capability that will enable you to perform complex tasks such as more accurate calculation, coding, data analysis, visualization, math, and more.
  • Video understanding and Q&A– Copilot in Edge: Summarize or ask questions about a video that you are watching in Edge.

  • Inline Compose with rewrite menu: With Copilot, Microsoft Edge users can easily write from most websites. Just select the text you want to change and ask Copilot to rewrite it for you.
  • Deep Search in Bing (more about it in the next section)

All features will be widely available soon.

Why does this matter?

Microsoft seems committed to bringing more innovation and advanced capabilities to Copilot. It is also capitalizing on its close partnership with OpenAI and making OpenAI’s advancements accessible with Copilot, paving the way for more inclusive and impactful AI utilization.

Source

Bing’s new “Deep Search” finds deeper, relevant results for complex queries

Microsoft is introducing Deep Search in Bing to provide more relevant and comprehensive answers to the most complex search queries. It uses GPT-4 to expand a search query into a more comprehensive description of what an ideal set of results should include. This helps capture intent and expectations more accurately and clearly.

Bing then goes much deeper into the web, pulling back relevant results that often don’t show up in typical search results. This takes more time than normal search, but Deep Search is not meant for every query or every user. It’s designed for complex questions that require more than a simple answer.

Deep Search is an optional feature and not a replacement for Bing’s existing web search, but an enhancement that offers the option for a deeper and richer exploration of the web.
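
In outline, the pipeline is: expand the query with an LLM, search each variant, and merge the deeper results. A rough sketch with stubbed-in functions (Microsoft has not published Deep Search’s internals; `expand_fn` and `search_fn` here are placeholders):

```python
def deep_search(query, expand_fn, search_fn):
    # expand_fn stands in for GPT-4 rewriting the query into fuller
    # descriptions of intent; search_fn is any web-search backend.
    variants = expand_fn(query)
    results, seen = [], set()
    for q in [query] + variants:
        for r in search_fn(q):
            if r not in seen:  # merge and deduplicate, keeping order
                seen.add(r)
                results.append(r)
    return results

# Stub demo: an expanded variant surfaces a page the raw query misses.
index = {
    "points systems": ["page-a"],
    "how loyalty card points programs work": ["page-a", "page-b"],
}
hits = deep_search("points systems",
                   lambda q: ["how loyalty card points programs work"],
                   lambda q: index.get(q, []))
```

Running several expanded queries is exactly why Deep Search is slower than a normal search, and why it is positioned as an opt-in for complex questions only.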

Why does this matter?

This may be one of the most important advances in search this year. It should be less of a struggle to find answers to complex, nuanced, or specific questions. Let’s see if it steals some traffic from Google, but it also seems similar to the Copilot search feature powered by GPT-4 in the Perplexity Pro plan.

Source

DeepMind’s new way for AI to learn from humans in real-time

Google DeepMind has developed a new way for AI agents to learn from humans in a rich 3D physical simulation. This allows for robust real-time “cultural transmission” (a form of social learning) without needing large datasets.

The system uses deep reinforcement learning combined with memory, attention mechanisms, and automatic curriculum learning to achieve strong performance. Tests show that it can generalize across a wide task space, recall demos with high fidelity when the expert drops out, and closely match human trajectories with goals.

Why does this matter?

This can be a stepping stone towards how AI systems accumulate knowledge and intelligence over time, just like humans. It is crucial for many real-world applications, from construction sites to household robots, where human data collection is costly, the tasks have inherent variation, and privacy is at a premium.

Source

BREAKING: Google just released its ChatGPT Killer

Source

It’s called Gemini and here’s everything you need to know:

• It’s Google’s biggest and most powerful AI model
• It can take inputs in text, code, audio, image, and video
• It comes in 3 sizes: Ultra, Pro, and Nano, to function across a broad range of devices, including smartphones
• It looks like it could potentially beat OpenAI’s GPT-4 and ChatGPT, as it tops 30 of 32 AI model performance benchmarks

State-of-the-art performance

We’ve been rigorously testing our Gemini models and evaluating their performance on a wide variety of tasks. From natural image, audio and video understanding to mathematical reasoning, Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.

With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.

Our new benchmark approach to MMLU enables Gemini to use its reasoning capabilities to think more carefully before answering difficult questions, leading to significant improvements over just using its first impression.
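
That paragraph describes an "uncertainty-routed" chain-of-thought scheme: sample many reasoned answers and only trust the majority vote when consensus is strong, otherwise fall back to the model's first impression. A schematic version (the 0.6 threshold and the inputs are hypothetical stand-ins, not Google's actual settings):

```python
from collections import Counter

def uncertainty_routed_answer(sampled_answers, greedy_answer, threshold=0.6):
    # Majority-vote over k sampled chain-of-thought answers; if the top
    # answer's share of votes is below the threshold, the consensus is
    # too weak and we return the single greedy answer instead.
    votes = Counter(sampled_answers)
    answer, count = votes.most_common(1)[0]
    if count / len(sampled_answers) >= threshold:
        return answer
    return greedy_answer

confident = uncertainty_routed_answer(["B"] * 28 + ["C"] * 4, "A")
uncertain = uncertainty_routed_answer(["B", "C", "D", "A"] * 8, "A")
```

The routing matters on MMLU because sampling helps on questions the model can reason through but hurts on questions where its samples scatter; the threshold separates the two cases.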


Gemini surpasses state-of-the-art performance on a range of benchmarks including text and coding.

Gemini Ultra also achieves a state-of-the-art score of 59.4% on the new MMMU benchmark, which consists of multimodal tasks spanning different domains requiring deliberate reasoning.

With the image benchmarks we tested, Gemini Ultra outperformed previous state-of-the-art models, without assistance from object character recognition (OCR) systems that extract text from images for further processing. These benchmarks highlight Gemini’s native multimodality and indicate early signs of Gemini’s more complex reasoning abilities.

See more details in our Gemini technical report.


Gemini surpasses state-of-the-art performance on a range of multimodal benchmarks.

Gemini is better than GPT-4 on sixteen different benchmarks:

  • Factual accuracy: up to 20% improvement
  • Reasoning and problem-solving: up to 30% improvement
  • Creativity and expressive language: up to 15% improvement
  • Safety and ethics: up to 10% improvement
  • Multimodal learning: up to 25% improvement
  • Zero-shot learning: up to 35% improvement
  • Few-shot learning: up to 40% improvement
  • Language modeling: up to 15% improvement
  • Machine translation: up to 20% improvement
  • Text summarization: up to 18% improvement
  • Personalization: up to 22% improvement
  • Accessibility: up to 25% improvement
  • Explainability: up to 17% improvement
  • Speed: up to 28% improvement
  • Scalability: up to 33% improvement
  • Energy efficiency: up to 21% improvement

Google’s Gemini AI model is coming to the Pixel 8 Pro — and eventually to Android
With Gemini Nano, Google is bringing its LLM to its flagship phone and plans to make it available across the Android ecosystem through the new AICore service.

Gemini Nano is a native, local-first version of Google’s new large language model, meant to make your device smarter and faster without needing an internet connection.

Gemini may be the biggest, most powerful large language model, or LLM, Google has ever developed, but it’s better suited to running in data centers than on your phone. With Gemini Nano, though, the company is trying to split the difference: it built a reduced version of its flagship LLM that can run locally and offline on your device. Well, a device, anyway. The Pixel 8 Pro is the only Nano-compatible phone so far, but Google sees the new model as a core part of Android going forward.

If you have a Pixel 8 Pro, starting today, two things on your phone will be powered by Gemini Nano: the auto-summarization feature in the Recorder app, and the Smart Reply part of the Gboard keyboard. Both are coming as part of the Pixel’s December Feature Drop. Both work offline since the model is running on the device itself, so they should feel fast and native.

Google is starting out quite small with Gemini Nano. Even the Smart Reply feature is only Gemini-powered in WhatsApp, though Google says it’s coming to more apps next year. And Gemini as a whole is only rolling out in English right now, which means many users won’t be able to use it at all. Your Pixel 8 Pro won’t suddenly feel like a massively upgraded device — though it might over time, if Gemini is as good as Google thinks it can be. And next year, when Google brings a Gemini-powered Bard to Assistant on Pixel phones, you’ll get even more of the Gemini experience.

Nano is the smallest (duh) of the Gemini models, but Demis Hassabis, the CEO of Google DeepMind, says it still packs a punch. “It has to fit on a footprint, right?” he says. “The very small footprint of a Pixel phone. So there’s memory constraints, speed constraints, all sorts of things. It’s actually an incredible model for its size — and obviously it can benefit from the bigger models by distilling from them and that sort of thing.” The goal for Nano was to create a version of Gemini that is as capable as possible without eating your phone’s storage or heating the processor to the temperature of the sun.

Google is also working on a way to build Nano into Android as a whole

Right now, Google’s Tensor 3 processor seems to be the only one capable of running the model. But Google is also working on a way to build Nano into Android as a whole: it launched a new system service called AICore that developers can use to bring Gemini-powered features into their apps. Your phone will still need a pretty high-end chip to make it work, but Google’s blog post announcing the feature mentions Qualcomm, Samsung, and MediaTek as companies making compatible processors. Developers can get into Google’s early access program now.

For the last couple of years, Google has talked about its Pixel phones as essentially AI devices. With Tensor chips and close connection to all of Google’s services, they’re supposed to get better and smarter over time. With Gemini Nano, that could eventually become true for lots of high-end Android devices. For now, it’s just a good reason to splurge on the Pixel 8 Pro.

Klarna freezes hiring because AI can do the job instead

  • Klarna CEO Sebastian Siemiatkowski has implemented a hiring freeze, anticipating that AI advancements will allow technology to perform tasks previously done by humans.
  • Despite recently achieving its first quarterly profit in four years and planning for an IPO, Klarna is not recruiting new staff, with Siemiatkowski citing AI’s ability to streamline operations and reduce the need for human labor.
  • The company, which employs over 5,000 people, is already using AI tools to analyze customer service records and automate order disputes.
  • Source

Meta and IBM form open-source alliance to counter big AI players

  • Meta and IBM have formed the AI Alliance with 50 companies, universities, and other entities to promote responsible, open-sourced AI, positioning themselves as competitors to OpenAI and other leaders in the AI industry.
  • The alliance includes major open-sourced AI models like Llama2, Stable Diffusion, StarCoder, and Bloom, and features notable members such as Hugging Face, Intel, AMD, and various educational institutions.
  • Their goals include advancing open foundation models, developing tools for responsible AI development, fostering AI hardware acceleration, and educating the public and regulators about AI’s risks and benefits.
  • Source

A Daily Chronicle of AI Innovations in December 2023 – Day 5: AI Daily News – December 05th, 2023

🤝 Runway partners with Getty Images to build enterprise AI tools
⚛️ IBM introduces next-gen Quantum Processor & Quantum System Two
📱 Microsoft’s ‘Seeing AI App’ now on Android with 18 languages

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon

AI Unraveled - Mastering GPT-4: Simplified Guide For everyday Users: Demystifying Artificial Intelligence - OpenAI, ChatGPT, Google Bard, Generative AI Quiz, LLMs, Machine Learning, NLP, GPT-4, Q*
AI Unraveled: Master GPT-4, Generative AI, Pass AI Certifications, LLMs Quiz

Runway partners with Getty Images to build enterprise AI tools

Runway is partnering with Getty Images to develop AI tools for enterprise customers. This collaboration will result in a new video model that combines Runway’s technology with Getty Images’ licensed creative content library.

This model will allow companies to create high-quality, customized video content by fine-tuning the baseline model with their own proprietary datasets. It will be available for commercial use in the coming months. RunwayML currently has a waiting list.

Why does this matter?

This partnership aims to enhance creative capabilities in various industries, such as Hollywood studios, advertising, media, and broadcasting. The new AI tools will provide enterprises with greater creative control and customization, making it easier to produce professional, engaging, and brand-aligned video content.

IBM introduces next-gen Quantum Processor & Quantum System Two

IBM has introduced its next-generation quantum processor, IBM Quantum Heron, alongside IBM Quantum System Two. Heron offers a five-fold improvement in error reduction compared to its predecessor.

IBM Quantum System Two is the first modular quantum computer, which has begun operations with three IBM Heron processors.

IBM has extended its Quantum Development Roadmap to 2033, with a focus on improving gate operations to scale with quality towards advanced error-corrected systems.

Additionally, IBM announced Qiskit 1.0, the world’s most widely used open-source quantum programming software, and showcased generative AI models designed to automate quantum code development and optimize quantum circuits.

Why does this matter?

Jay Gambetta, VP of IBM, said, “This is a significant step towards broadening how quantum computing can be accessed and put in the hands of users as an instrument for scientific exploration.”

Also, with advanced hardware across easy-to-use software that IBM is debuting in Qiskit, users and computational scientists can now obtain reliable results from quantum systems as they map increasingly larger and more complex problems to quantum circuits.

Microsoft’s ‘Seeing AI App’ now on Android with 18 languages

Microsoft has launched the Seeing AI app on Android, offering new features and languages. The app, which narrates the world for blind and low-vision individuals, is now available in 18 languages, with plans to expand to 36 by 2024.

The Android version includes new generative AI features, such as richer descriptions of photos and the ability to chat with the app about documents. Seeing AI allows users to point their camera or take a photo to hear a description and offers various channels for specific information, such as text, documents, products, scenes, and more.

You can download Seeing AI for Android from the Play Store and for iOS from the App Store.

Why does this matter?

There are over 3 billion active Android users worldwide; bringing Seeing AI to this platform gives many more people in the blind and low-vision community the ability to use this technology in their everyday lives.

Source

What Else Is Happening in AI on December 05th, 2023

 Owner of TikTok set to launch the ‘AI Chatbot Development Platform’

TikTok owner ByteDance is set to launch an open platform for users to create their own chatbots as the company aims to catch up in the generative AI market. The “bot development platform” will be launched as a public beta by the end of the month. (Link)

 Samsung is set to launch its AI-powered Galaxy Book 4 notebooks on Dec 15

The laptops will feature Intel’s next-gen SoC with a built-in Neural Processing Unit (NPU) for on-device AI and Samsung’s in-house gen AI model, Gauss. Gauss includes a language model, coding assistant, and image model. (Link)

 NVIDIA to build AI Ecosystem in Japan, partners with companies & startups

NVIDIA plans to set up an AI research laboratory and invest in local startups to foster the development of AI technology in the country. They also aim to educate the public on using AI and its potential impact on various industries and everyday life. (Link)

 Singapore plans to triple its AI workforce to 15K

The city-state plans to do this by training locals and hiring from overseas, according to Deputy Prime Minister Lawrence Wong. It aims to fully leverage AI’s capabilities to improve lives while also building a responsible and trusted ecosystem. Singapore’s revised AI strategy focuses on developing data and machine-learning scientists and engineers as the backbone of AI. (Link)

 IIT Bombay joins Meta & IBM’s AI Alliance group for AI open-source development

The alliance includes over 50 companies and organizations like Intel, Oracle, AMD, and CERN. The AI Alliance aims to advance the ecosystem of open foundation models, including multilingual, multi-modal, and science models that can address societal challenges. (Link)

A Daily Chronicle of AI Innovations in December 2023 – Day 4: AI Daily News – December 04th, 2023

🧠 Meta’s Audiobox advances controllability for AI audio
📁 Mozilla lets you turn LLMs into single-file executables
🚀 Alibaba’s Animate Anyone may be the next breakthrough in AI animation

🤔 OpenAI committed to buying $51 million of AI chips from startup… backed by CEO Sam Altman

🤖 ChatGPT is writing legislation now

🚫 Google reveals the next step in its war on ad blockers: slower extension updates

🧬 AstraZeneca ties up with AI biologics company to develop cancer drug

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon

AI Unraveled: Demystifying Artificial Intelligence
AI Unraveled: Demystifying Artificial Intelligence

Amazon’s AI Reportedly Suffering “Severe Hallucinations” and “Leaking Confidential Data”

Amazon’s Q has ‘severe hallucinations’ and leaks confidential data in public preview, employees warn. Some hallucinations could ‘potentially induce cardiac incidents in Legal,’ according to internal documents.

What happened:

  • Three days after Amazon announced its AI chatbot Q, some employees are sounding alarms about accuracy and privacy issues. Q is “experiencing severe hallucinations and leaking confidential data,” including the location of AWS data centers, internal discount programs, and unreleased features, according to leaked documents obtained by Platformer.

  • An employee marked the incident as “sev 2,” meaning an incident bad enough to warrant paging engineers at night and making them work through the weekend to fix it.

But Amazon played down the significance of the employee discussions (obviously):

  • “Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon,” a spokesperson said. “No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available.”

Source (Platformer and Futurism)

Meta’s Audiobox advances controllability for AI audio

Audiobox is Meta’s new foundation research model for audio generation. The successor to Voicebox, it is advancing generative AI for audio further by unifying generation and editing capabilities for speech, sound effects (short, discrete sounds like a dog bark, car horn, a crack of thunder, etc.), and soundscapes, using a variety of input mechanisms to maximize controllability.

Meta’s Audiobox advances controllability for AI audio
Meta’s Audiobox advances controllability for AI audio

Most notably, Audiobox lets you use natural language prompts to describe a sound or type of speech you want. You can also use it combined with voice inputs, thus making it easy to create custom audio for a wide range of use cases.

Why does this matter?

Audiobox demonstrates state-of-the-art controllability in speech and sound effects generation with AI. With it, developers can easily build a more dynamic and wide range of use cases without needing deep domain expertise. It can transform diverse media, from movies to podcasts, audiobooks, and video games.

(Source)

Mozilla lets you turn LLMs into single-file executables

LLMs for local use are usually distributed as a set of weights in a multi-gigabyte file. These cannot be directly used on their own, making them harder to distribute and run compared to other software. A given model can also have undergone changes and tweaks, leading to different results if different versions are used.

To help with that, Mozilla’s innovation group has released llamafile, an open-source method of turning a set of weights into a single binary that runs on six different OSs (macOS, Windows, Linux, FreeBSD, OpenBSD, and NetBSD) without needing to be installed. This makes it dramatically easier to distribute and run LLMs and ensures that a particular version of LLM remains consistent and reproducible forever.

Why does this matter?

This makes open-source LLMs much more accessible to both developers and end users, allowing them to run models on their own hardware easily.

Source

Alibaba’s Animate Anyone may be the next breakthrough in AI animation

Alibaba Group researchers have proposed a novel framework tailored for character animation– Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation.

Despite diffusion models’ robust generative capabilities, challenges persist in image-to-video synthesis, especially in character animation, where maintaining temporal consistency of fine details remains a formidable problem.

This framework leverages the power of diffusion models. To preserve the consistency of intricacies from reference images, it uses ReferenceNet to merge detail features via spatial attention. To ensure controllability and continuity, it introduces an efficient pose guider. It achieves SoTA results on benchmarks for fashion video and human dance synthesis.

Why does this matter?

This could mark the beginning of the end of TikTok and Instagram. Some inconsistencies are noticeable, but it’s more stable and consistent than earlier AI character animators. It could look scarily real if we give it some time to advance.

Source

OpenAI committed to buying $51 million of AI chips from startup… backed by CEO Sam Altman

  • OpenAI has signed a letter of intent to purchase $51 million in AI chips from Rain, a startup in which OpenAI CEO Sam Altman has personally invested over $1 million.
  • Rain, developing a neuromorphic processing unit (NPU) inspired by the human brain, faces challenges after a U.S. government body mandated a Saudi Arabia-affiliated fund to divest its stake in the company for national security reasons.
  • This situation reflects the potential conflict of interest in Altman’s dual roles as an investor and CEO of OpenAI.
  • Source

ChatGPT is writing legislation now

  • In Brazil, the Porto Alegre city council passed a law written by ChatGPT that prevents the city from charging citizens for the replacement of stolen water meters.
  • The council members were unaware of the AI’s use in drafting the law, which was proposed using a brief prompt to ChatGPT by Councilman Rosário.
  • This event sparked discussions on the impacts of AI in legal fields, as instances of AI-generated content led to significant consequences in the United States.
  • Source

 Google reveals the next step in its war on ad blockers: slower extension updates

  • Google is targeting ad blocker developers with its upcoming Manifest V3 changes, which will slow down the update process for Chrome extensions.
  • Ad blockers might become less effective on YouTube as the new policy will delay developers from quickly adapting to YouTube’s ad system alterations.
  • Users seeking to avoid YouTube ads may have to switch to other browsers like Firefox or use OS-level ad blockers, as Chrome’s new rules will restrict ad-blocking capabilities.
  • Source

AstraZeneca ties up with AI biologics company to develop cancer drug

  • AstraZeneca has partnered with Absci Corporation in a deal worth up to $247 million to develop an antibody for cancer treatment using Absci’s AI technology for protein analysis.
  • The collaboration is part of a growing trend of pharmaceutical giants teaming with AI firms to create innovative disease treatments, aiming to improve success rates and reduce development costs.
  • This partnership is a step in AstraZeneca’s strategy to replace traditional chemotherapy with targeted drugs, following their recent advances in treatments for lung and breast cancers.
  • Source

Pinterest begins testing a ‘body type ranges’ tool to make searches more inclusive.

It will allow users to filter select searches by different body types. The feature, which will work with women’s fashion and wedding ideas at launch, builds on Pinterest’s new body type AI technology announced earlier this year. (Link)

Intel neural-chat-7b model achieves top ranking on LLM leaderboard.

At 7 billion parameters, neural-chat-7b is at the low end of today’s LLM sizes. Yet it achieved comparable accuracy scores to models 2-3x larger. So, even though it was fine-tuned using Intel Gaudi 2 AI accelerators, its small size means you can deploy it to a wide range of compute platforms. (Link)

Leonardo AI in real-time is here, with two tiers for now.

Paid users get “Realtime” mode, where the image updates as you paint and as you move objects. Free users get “Interactive” mode, where it updates at the end of a brush stroke or once you let go of an object. The paid tier is live now; the free tier goes live soon. (Link)

📢 Advertise with us and Sponsorship Opportunities

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon

Google has quietly pushed back the launch of next-gen AI model Gemini until next year. Source

As we step into the future of technology, sometimes the most anticipated journeys encounter detours. Google has just announced a strategic decision: the launch of its groundbreaking Gemini AI project is being pushed to early 2024. 📅

🔍 Why the Delay?

Google is committed to excellence and innovation. This delay reflects their dedication to refining Gemini AI, ensuring it meets the highest standards of performance and ethical AI use. This extra time is being invested in enhancing the AI’s capabilities and ensuring it aligns with evolving global tech norms. 🌐

🧠 What Can We Expect from Gemini AI?

Gemini AI promises to be more than just a technological marvel; it’s set to revolutionize how we interact with AI in our daily lives. From smarter assistance to advanced data analysis, the potential is limitless. 💡

📈 Impact on the Tech World

This decision by Google is a reminder that in the tech world, patience often leads to perfection. The anticipation for Gemini AI is high, and the expectations are even higher.

💬 Your Thoughts?

What are your thoughts on this strategic move by Google? How do you think the delay will impact the AI industry? Share your insights!

#GoogleGeminiAI #ArtificialIntelligence #TechNews #Innovation #FutureTech

A Daily Chronicle of AI Innovations in December 2023 – Day 2-3: AI Daily News – December 03rd, 2023

🤖 Scientists build tiny biological robots from human cells

🚗 Tesla’s Cybertruck arrives with $60,990 starting price and 250-mile range

✈️ Anduril unveils Roadrunner, “a fighter jet weapon that lands like a Falcon 9”

⚖️ Meta sues FTC to block new restrictions on monetizing kids’ data

💰 Coinbase CEO: future AI ‘agents’ will transact in crypto

🎁 + 8 other news you might like

Scientists build tiny biological robots from human cells

  • Researchers have developed miniature biological robots called Anthrobots, made from human tracheal cells, that can move and enhance neuron growth in damaged areas.
  • The Anthrobots, varying in size and movement, assemble themselves without genetic modifications and demonstrate healing effects in lab environments.
  • This innovation indicates potential for future medical applications, such as repairing neural tissue or delivering targeted therapies, using bots created from a patient’s own cells.
  • Source

 Tesla’s Cybertruck arrives with $60,990 starting price and 250-mile range

  • Tesla’s Cybertruck, after multiple delays, is now delivered at a starting price of $60,990 with a 250-mile base range.
  • The Cybertruck lineup includes a dual-motor variant for $79,990 and a tri-motor “Cyberbeast” costing $99,990 with higher performance specs.
  • The Cybertruck has introduced bi-directional charging and aims for an annual production of 250,000 units post-2024, despite initial production targets being missed due to the pandemic.
  • Source

Coinbase CEO: future AI ‘agents’ will transact in crypto

  • Coinbase CEO Brian Armstrong predicts that autonomous AI agents will use cryptocurrency for transactions, such as paying for services and information.
  • Armstrong suggests that cryptography can help verify the authenticity of content, combating the spread of fake information online.
  • The CEO foresees a synergy between crypto and AI in Coinbase’s operations and emerging technological areas like decentralized social media and payments.
  • Source

Quiz: Intro to Generative AI

What accurately defines a ‘prompt’ in the context of large language models?

Options:

A. A prompt is a short piece of text that is given to the large language model as input and can be used to control the output of the model in various ways.

B. A prompt is a long piece of text that is given to the large language model as input and cannot be used to control the output of the model.

C. A prompt is a short piece of text given to a small language model (SLM) as input and can be used to control the output of the model in various ways.

D. A prompt is a short piece of text that is given to the large language model as input and can be used to control the input of the model in various ways.

E. A prompt is a short piece of code that is given to the large language model as input and can be used to control the output of the model in various ways.

Correct Answer: A. A prompt is a short piece of text that is given to the large language model as input and can be used to control the output of the model in various ways.

Explanation: In the context of large language models, a ‘prompt’ is a concise piece of text provided as input. This input text guides or ‘prompts’ the model in generating an output. The prompt can influence the nature, tone, and direction of the model’s response, making it a critical component in controlling how the AI model interprets and responds to a query.

Options B, C, D, and E do not accurately capture the essence of what a prompt is in the context of large language models.
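To make the quiz answer concrete, here is a minimal sketch of prompt construction. The template wording is our own, and no model is actually called; the point is simply that the prompt is short input text that steers what the model produces:

```python
# A prompt is a short piece of input text; changing it controls the output.
# Templates below are illustrative only, and the model call itself is omitted.
def build_prompt(task_instruction: str, user_input: str) -> str:
    """Compose a prompt that controls the nature and format of the answer."""
    return f"{task_instruction}\n\nText: {user_input}\nAnswer:"

concise = build_prompt("Summarize the text in one sentence.",
                       "LLMs map input text to output text.")
formal = build_prompt("Rewrite the text in formal English.",
                      "LLMs map input text to output text.")
print(concise)
```

Feeding `concise` versus `formal` to the same model would yield very different outputs, which is exactly what option A describes.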

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Master GPT-4 – Generative AI Quiz – Large Language Models Quiz,” available at Shopify, Apple, Google, Etsy or Amazon:

https://shop.app/products/8623729213743

https://amzn.to/3ZrpkCu

http://books.apple.com/us/book/id6445730691

https://play.google.com/store/books/details?id=oySuEAAAQBAJ

https://www.etsy.com/ca/listing/1617575707/ai-unraveled-demystifying-frequently

A Daily Chronicle of AI Innovations in December 2023 – Day 1: AI Daily News – December 01st, 2023

😎 A new technique from researchers accelerates LLMs by 300x
🌐 AI tool ‘screenshot-to-code’ generates entire code from screenshots
🤖 Microsoft Research explains why hallucination is necessary in LLMs!
🎁 Amazon is using AI to improve your holiday shopping
🧠 AI algorithms are powering the search for cells
🚀 AWS adds new languages and AI capabilities to Amazon Transcribe
💼 Amazon announces Q, an AI chatbot tailored for businesses
✨ Amazon launches 2 new chips for training + running AI models
🎥 Pika officially reveals Pika 1.0, idea-to-video platform
🖼️ Amazon’s AI image generator, and other AWS re:Invent updates
💡 Perplexity introduces PPLX online LLMs
💎 DeepMind’s AI tool finds 2.2M new crystals to advance technology
🎭 Meta’s new models make communication seamless for 100 languages
🚗 Researchers release Agent-driver, uses LLMs for autonomous driving
💳 Mastercard launches an AI service to help you find the perfect gift

This new technique accelerates LLMs by 300x

Researchers at ETH Zurich have developed UltraFastBERT, a language model that uses only 0.3% of its neurons during inference while maintaining performance, which can accelerate inference by as much as 300x. By introducing “fast feedforward” layers (FFF) that use conditional matrix multiplication (CMM) instead of dense matrix multiplication (DMM), the researchers were able to significantly reduce the computational load of neural networks.

They validated the technique with UltraFastBERT, a modified version of Google’s BERT model, and achieved impressive results on various language tasks. The researchers believe that incorporating fast feedforward networks into large language models like GPT-3 could lead to even greater acceleration.
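A toy sketch of the underlying idea, conditional execution: activate only a small, input-dependent subset of neurons instead of the whole layer. This is an illustration under our own simplifications, not the paper's implementation (real FFF layers route through a balanced binary tree so selection itself is cheap, whereas this sketch still scores every neuron before picking the top k):

```python
# Contrast a dense feedforward layer (every neuron evaluated) with a
# conditional one that keeps only the top-k neurons per input.
# Pure-Python toy; dimensions and weights are made up.
import random

random.seed(0)
D, H = 8, 64  # input dimension, number of hidden neurons
W_in = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(H)]
w_out = [random.uniform(-1, 1) for _ in range(H)]

def dense_ff(x):
    """Dense layer: all H hidden neurons are evaluated (DMM analogue)."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W_in]
    return sum(hi * wo for hi, wo in zip(h, w_out))

def conditional_ff(x, k=4):
    """Conditional layer: keep only the k neurons with the largest
    pre-activation; here k/H = 4/64, so ~6% of neurons contribute."""
    pre = [(sum(w * xi for w, xi in zip(row, x)), i) for i, row in enumerate(W_in)]
    top = sorted(pre, reverse=True)[:k]
    return sum(max(0.0, p) * w_out[i] for p, i in top)

x = [random.uniform(-1, 1) for _ in range(D)]
print(dense_ff(x), conditional_ff(x))
```

With k equal to H the two layers agree exactly; the speedup comes from making k a tiny fraction of H, which is the regime the 0.3%-of-neurons figure refers to.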

Read the Paper here.

Amazon launches 2 new chips for training + running AI models

Amazon has announced two new chips for training and running AI models:

1) The Trainium2 chip is designed to deliver better performance and energy efficiency than its predecessor; a cluster of 100,000 Trainium2 chips can train a 300-billion-parameter AI language model in weeks.

2) The Graviton4 chip, the fourth generation in Amazon’s Graviton chip family, provides better compute performance, more cores, and increased memory bandwidth. These chips aim to address the shortage of GPUs, which are in high demand for generative AI. The Trainium2 chip will be available next year, while the Graviton4 chip is currently in preview.

Source

Meta’s new AI makes communication seamless in 100 languages

Meta has developed Seamless Communication, a family of four AI research models that aims to remove language barriers and enable more natural, authentic communication across languages.

It is the first publicly available system that unlocks expressive cross-lingual communication in real time and allows researchers to build on this work.

Try the SeamlessExpressive demo to hear how you sound in different languages.

Today, alongside their models, they are releasing metadata, data, and data alignment tools to assist the research community, including:

  • Metadata of an extension of SeamlessAlign corresponding to an additional 115,000 hours of speech and text alignments on top of the existing 470k hours.
  • Metadata of SeamlessAlignExpressive, an expressivity-focused version of the dataset above.
  • Tools to assist the research community in collecting more datasets for translation.

Source

NVIDIA researchers have integrated human-like intelligence into ADS

In a new paper, a team of researchers from NVIDIA, Stanford, and USC has released ‘Agent-Driver,’ which integrates human-like intelligence into the driving system. It utilizes LLMs as a cognitive agent to enhance decision-making, reasoning, and planning.

The Agent-Driver system includes a versatile tool library, a cognitive memory, and a reasoning engine. It is evaluated on the nuScenes benchmark and significantly outperforms existing driving methods. It also demonstrates superior interpretability and the ability to learn from few examples. The code for this approach will be made available.
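A heavily simplified sketch of this agent pattern: a reasoner (here a hard-coded stub standing in for an LLM) dispatches calls into a tool library, consults a memory, and produces a driving action. Every name and rule below is invented for illustration, not taken from the paper:

```python
# Toy tool-using agent loop: a tool library, a memory, and a stub "reasoner".
def detect_objects(scene):
    """Tool: return the objects present in the scene description."""
    return scene.get("objects", [])

def check_traffic_light(scene):
    """Tool: return the traffic light state, if any."""
    return scene.get("light", "unknown")

TOOLS = {"detect_objects": detect_objects, "check_traffic_light": check_traffic_light}
MEMORY = {"last_action": "cruise"}  # cognitive memory: remembers prior decisions

def reason_and_plan(scene):
    """Stub reasoner: query tools, consult memory, decide an action."""
    objects = TOOLS["detect_objects"](scene)
    light = TOOLS["check_traffic_light"](scene)
    if light == "red" or "pedestrian" in objects:
        action = "brake"
    else:
        action = MEMORY["last_action"]
    MEMORY["last_action"] = action
    return action

print(reason_and_plan({"objects": ["pedestrian"], "light": "green"}))  # → brake
```

In the real system, the LLM replaces the hard-coded rules and decides which tools to call and how to combine their outputs.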

Source

Mastercard introduces Shopping Muse AI for tailored shopping

Mastercard has launched Shopping Muse, an AI-powered tool that helps consumers find the perfect gift. AI will provide personalized recommendations on a retailer’s website based on the individual consumer’s profile, intent, and affinity.

Shopping Muse translates consumer requests made via a chatbot into tailored product recommendations, including suggestions for coordinating products and accessories. It considers the shopper’s browsing history and past purchases to estimate future buying intent better.
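As a rough illustration of profile-based recommendation (the catalog, tags, and weighting below are entirely our own invention, not Mastercard's system), a recommender can score items by their overlap with a shopper's affinities and browsing history:

```python
# Toy profile-based recommendation scoring. All data is made up.
CATALOG = [
    {"name": "wool scarf",    "tags": {"winter", "accessory"}},
    {"name": "running shoes", "tags": {"sport", "footwear"}},
    {"name": "leather belt",  "tags": {"accessory", "formal"}},
]

def score(item, profile):
    """Overlap of item tags with the shopper's affinity tags,
    plus a smaller boost for tags seen in browsing history."""
    affinity = len(item["tags"] & profile["affinity"])
    history = len(item["tags"] & profile["history"])
    return affinity + 0.5 * history

def recommend(profile, k=2):
    """Return the k highest-scoring item names for this shopper."""
    ranked = sorted(CATALOG, key=lambda it: score(it, profile), reverse=True)
    return [it["name"] for it in ranked[:k]]

profile = {"affinity": {"accessory"}, "history": {"winter"}}
print(recommend(profile))  # → ['wool scarf', 'leather belt']
```

A production system layers a conversational interface and learned models on top, but the core idea of ranking against a consumer profile is the same.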

Source

What Else Is Happening in AI on December 01st, 2023

 Microsoft plans to invest $3.2B in UK to drive AI progress

It will be its largest investment in the country over the next three years. The funding will support the growth of AI and Microsoft’s data center footprint in Britain. The investment comes as the UK government seeks private investment to boost infrastructure development, particularly in industries like AI. (Link)

HPE and NVIDIA extended their collaboration to enhance AI offerings

The partnership aims to enable customers to become “AI-powered businesses” by providing them with products that leverage Nvidia’s AI capabilities. The deal is expected to enhance generative AI capabilities and help users maximize the potential of AI technology. (Link)

 Voicemod now allows users to create and share their own AI voices

This AI voice-changing platform has new features including AI Voice Changer, which lets users create and customize synthetic voices with different genders, ages, and tones. (Link)

Samsung introduces a new type of DRAM called Low Latency Wide IO (LLW)

The company claims it is ideal for on-device AI processing and gaming: it handles real-time data more efficiently than the LPDDR modules currently used in mobile devices, and it sits next to the CPU inside the SoC. (Link)

Ideogram just launched image prompting

Toronto-based AI startup Ideogram, whose text-to-image platform competes with DALL-E, Midjourney, and Adobe Firefly, has added image prompting: you can now upload an image and steer the output with visual input in addition to text. The feature is available to all Plus subscribers. (Link)

A Daily Chronicle of AI Innovations in November 2023

https://enoumen.com/2023/11/01/a-daily-chronicle-of-ai-innovations-in-november-2023/

📢 Advertise with us and Sponsorship Opportunities

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence,” available at Etsy, Shopify, Apple, Google, or Amazon.

AI Unraveled - Mastering GPT-4: Simplified Guide For everyday Users: Demystifying Artificial Intelligence - OpenAI, ChatGPT, Google Bard, Generative AI Quiz, LLMs, Machine Learning, NLP, GPT-4, Q*

The AI Unraveled book explores topics including the basics of artificial intelligence, machine learning, generative AI, GPT-4, deep learning, natural language processing, computer vision, ethics, and AI applications across various industries.

This book aims to explore the fascinating world of artificial intelligence and provide answers to the most commonly asked questions about it. Whether you’re curious about what artificial intelligence is or how it’s transforming industries, this book will help demystify and provide a deeper understanding of this cutting-edge technology. So let’s dive right in and unravel the world of artificial intelligence together.

In Chapter 1, we’ll delve into the basics of artificial intelligence. We’ll explore what AI is, how it works, and the different types of AI that exist. Additionally, we’ll take a look at the history of AI and how it has evolved over the years. Understanding these fundamentals will set the stage for our exploration of the more advanced concepts to come.

Chapter 2 focuses on machine learning, a subset of artificial intelligence. Here, we’ll take a deeper dive into what machine learning entails, how it functions, and the various types of machine learning algorithms that are commonly used. By the end of this chapter, you’ll have a solid grasp of how machines can be trained to learn from data.
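As a toy illustration of what "learning from data" means in practice, the snippet below fits a single weight to noisy points with gradient descent, using nothing beyond the standard library (the data and learning rate are made up for the example):

```python
# Toy machine learning: learn the slope w in y ≈ w * x by repeatedly
# nudging w against the gradient of the mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x, with a little noise

w = 0.0          # initial guess
lr = 0.01        # learning rate
for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 1))  # ≈ 2.0
```

The loop is the essence of supervised learning: measure the error on the data, follow its gradient, and repeat until the model's parameter settles near the value that explains the data best.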

Next, in Chapter 3, we’ll explore the exciting field of deep learning. Deep learning utilizes artificial neural networks to make decisions and learn. We’ll discover what deep learning is, how it operates, and the different types of deep learning algorithms that are used to tackle complex tasks. This chapter will shed light on the powerful capabilities of deep learning within the realm of AI.

Chapter 4 introduces us to the field of natural language processing (NLP). NLP focuses on enabling machines to understand and interpret human language. We’ll explore how NLP functions, its various applications across different industries, and why it’s an essential area of study within AI.

Moving on to Chapter 5, we’ll uncover the world of computer vision. Computer vision enables machines to see and interpret visual data, expanding their understanding of the world. We’ll delve into what computer vision is, how it operates, and the ways it is being utilized in different industries. This chapter will provide insights into how machines can perceive and analyze visual information.

In Chapter 6, we’ll delve into the important topic of AI ethics and bias. While artificial intelligence has incredible potential, it also presents ethical challenges and the potential for bias. This chapter will explore the ethical implications of AI and the difficulties in preventing bias within AI systems. Understanding these issues will help facilitate responsible and fair AI development.

Chapter 7 focuses on the practical applications of artificial intelligence in various industries. We’ll explore how AI is transforming healthcare, finance, manufacturing, transportation, and more. This chapter will showcase the benefits AI brings to these sectors and highlight the challenges that need to be addressed for successful integration.

Moving into Chapter 8, we’ll examine the broader societal implications of artificial intelligence. AI has the potential to impact various aspects of our lives, from improving our quality of life to reshaping the job market. This chapter will explore how AI is changing the way we live and work, and the social implications that accompany these changes.

Chapter 9 takes us into the future of AI, where we’ll explore the trends and developments shaping this rapidly evolving field. From advancements in technology to emerging applications, this chapter will give you a glimpse of what the future holds for AI and the exciting possibilities that lie ahead.

In Chapter 10 and Chapter 11, we have some quizzes to test your knowledge. These quizzes will cover topics such as Generative AI and Large Language Models, enhancing your understanding of these specific areas within the AI landscape.

Finally, as a bonus, we have provided a section on the latest AI trends, daily AI news updates, and a simplified guide to mastering GPT-4. This section covers a wide range of topics, including the future of large language models, explainable AI, AI in various industries, and much more. It’s a treasure trove of information for AI enthusiasts.

So get ready to embark on this journey of demystifying artificial intelligence. Let’s explore the possibilities, applications, and ethical considerations of AI together.

Hey there! I’m excited to share some awesome news with you. Guess what? The fantastic book “AI Unraveled” by Etienne Noumen is finally out and ready to be devoured by curious minds like yours. And the best part? It’s available for you to get your hands on right now!

To make things super convenient, you can find this gem of a book at popular online platforms like Etsy, Shopify, Apple, Google, or Amazon. How cool is that? Whether you prefer doing your shopping on Etsy, or perhaps you’re more of an Amazon aficionado, the choice is all yours.

Now, let me hint at what you can expect from “AI Unraveled.” This book is a captivating journey into the world of artificial intelligence, offering insights, revelations, and a deep understanding of this cutting-edge technology. It’s a perfect read for anyone looking to expand their knowledge on AI, whether you’re a tech enthusiast, a student, or just someone looking to stay up-to-date on the latest trends.

So, what are you waiting for? Don’t miss out on this opportunity to dive into the world of AI with “AI Unraveled” by Etienne Noumen. Head over to your preferred online platform, grab your copy, and get ready to unmask the mysteries of artificial intelligence. Happy reading!
