Download the Ace AWS DEA-C01 Exam App: iOS - Android
A Daily Chronicle of AI Innovations in February 2024.
Welcome to the Daily Chronicle of AI Innovations in February 2024! This month-long blog series will provide you with the latest developments, trends, and breakthroughs in the field of artificial intelligence. From major industry conferences like ‘AI Innovations at Work’ to bold predictions about the future of AI, we will curate and share daily updates to keep you informed about the rapidly evolving world of AI. Join us on this exciting journey as we explore the cutting-edge advancements and potential impact of AI throughout February 2024.
Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book “AI Unraveled: Master GPT-4, Gemini, Generative AI & LLMs – Simplified Guide for Everyday Users: Demystifying Artificial Intelligence – OpenAI, ChatGPT, Google Bard, AI ML Quiz, AI Certifications Prep, Prompt Engineering,” available at Etsy, Shopify, Apple, Google, or Amazon.
A Daily Chronicle of AI Innovations in February 2024 – Day 29: AI Daily News – February 29th, 2024
Alibaba’s EMO makes photos come alive (and lip-sync!) Microsoft introduces 1-bit LLM Ideogram launches text-to-image model version 1.0
Adobe launches new GenAI music tool
Morph makes filmmaking easier with Stability AI
Hugging Face, Nvidia, and ServiceNow release StarCode 2 for code generation.
Meta set to launch Llama 3 in July and could be twice the size
Apple subtly reveals its AI plans
OpenAI to put AI into humanoid robots
GitHub besieged by millions of malicious repositories in ongoing attack
Nvidia just released a new code generator that can run on most modern CPUs
Three more publishers sue OpenAI
Alibaba’s EMO makes photos come alive (and lip-sync!)
Researchers at Alibaba have introduced an AI system called “EMO” (Emote Portrait Alive) that can generate realistic videos of you talking and singing from a single photo and an audio clip. It captures subtle facial nuances without relying on 3D models.
EMO uses a two-stage deep learning approach with audio encoding, facial imagery generation via diffusion models, and reference/audio attention mechanisms.
Experiments show that the system significantly outperforms existing methods in terms of video quality and expressiveness.
By combining EMO with OpenAI’s Sora, we could synthesize personalized video content from photos or bring photos from any era to life. This could profoundly expand human expression. We may soon see automated TikTok-like videos.
Microsoft has launched a radically efficient AI language model dubbed 1-bit LLM. It uses only 1.58 bits per parameter instead of the typical 16, yet performs on par with traditional models of equal size for understanding and generating text.
Building on research like BitNet, this drastic bit reduction per parameter boosts cost-effectiveness relating to latency, memory, throughput, and energy usage by 10x. Despite using a fraction of the data, 1-bit LLM maintains accuracy.
Why does this matter?
Traditional LLMs often require extensive resources and are expensive to run while their swelling size and power consumption give them massive carbon footprints.
This new 1-bit technique points towards much greener AI models that retain high performance without overusing resources. By enabling specialized hardware and optimized model design, it can drastically improve efficiency and cut computing costs, with the ability to put high-performing AI directly into consumer devices.
Ideogram has launched a new text-to-picture app called Ideogram 1.0. It’s their most advanced ever. Dubbed a “creative helper,” it generates highly realistic images from text prompts with minimal errors. A built-in “Magic Prompt” feature effortlessly expands basic prompts into detailed scenes.
Ideogram 1.0 significantly cuts image generation errors in half compared to other apps. And users can make custom picture sizes and styles. So it can do memes, logos, old-timey portraits, anything.
Magic Prompt takes basic prompts like “vegetables orbiting the sun” and turns them into full scenes with backstories. That would take regular people hours to write out word-for-word.
Tests show that Ideogram 1.0 beats DALL-E 3 and Midjourney V6 at matching prompts, making sensible pictures, looking realistic, and handling text.
This advancement in AI image generation hints at a future where generative models commonly assist or even substitute human creators across personalized gift items, digital content, art, and more.
What Else Is Happening in AI on February 29th, 2024
Adobe launches new GenAI music tool
Adobe introduces Project Music GenAI Control, allowing users to create music from text or reference melodies with customizable tempo, intensity, and structure. While still in development, this tool has the potential to democratize music creation for everyone. (Link)
Morph makes filmmaking easier with Stability AI
Morph Studio, a new AI platform, lets you create films simply by describing desired scenes in text prompts. It also enables combining these AI-generated clips into complete movies. Powered by Stability AI, this revolutionary tool could enable anyone to become a filmmaker. (Link)
Hugging Face, Nvidia, and ServiceNow release StarCode 2 for code generation.
Hugging Face along with Nvidia and Service Now launches StarCoder 2, an open-source code generator available in three GPU-optimized models. With improved performance and less restrictive licensing, it promises efficient code completion and summarization. (Link)
Meta plans to launch Llama 3 in July to compete with OpenAI’s GPT-4. It promises increased responsiveness, better context handling, and double the size of its predecessor. With added tonality and security training, Llama 3 seeks more nuanced responses. (Link)
Apple subtly reveals its AI plans
Apple CEO Tim Cook reveals plans to disclose Apple’s generative AI efforts soon, highlighting opportunities to transform user productivity and problem-solving. This likely indicates exciting new iPhone and device features centered on efficiency. (Link)
A Daily Chronicle of AI Innovations in February 2024 – Day 28: AI Daily News – February 28th, 2024
NVIDIA’s Nemotron-4 beats 4x larger multilingual AI models GitHub launches Copilot Enterprise for customized AI coding Slack study shows AI frees up 41% of time spent on low-value work
Pika launches new lip sync feature for AI videos
Google pays publishers to test an unreleased GenAI tool
Intel and Microsoft team up to bring 100M AI PCs by 2025
Writer’s Palmyra-Vision summarizes charts, scribbles into text
Apple cancels its decade-long electric car project
OpenAI claims New York Times paid someone to ‘hack’ ChatGPT
Tumblr and WordPress blogs will be exploited for AI model training
Google CEO slams ‘completely unacceptable’ Gemini AI errors
Klarna’s AI bot is doing the work of 700 employees
NVIDIA’s Nemotron-4 beats 4x larger multilingual AI models
Unlock the power of AI with “Read Aloud For Me – AI Dashboard” – your ultimate AI Dashboard and Hub. Access all major AI tools in one seamless app, designed to elevate your productivity and streamline your digital experience. Available now on the web at readaloudforme.com and across all your favorite app stores: Apple, Google, and Microsoft. “Read Aloud For Me – AI Dashboard” brings the future of AI directly to your fingertips, merging convenience with innovation. Whether for work, education, or personal enhancement, our app is your gateway to the most advanced AI technologies. Download today and transform the way you interact with AI tools.
Nvidia has announced Nemotron-4 15B, a 15-billion parameter multilingual language model trained on 8 trillion text tokens. Nemotron-4 shows exceptional performance in English, coding, and multilingual datasets. It outperforms all other open models of similar size on 4 out of 7 benchmarks. It has the best multilingual capabilities among comparable models, even better than larger multilingual models.
The researchers highlight how Nemotron-4 scales model training data in line with parameters instead of just increasing model size. As a result, inferences are computed faster, and latency is reduced. Due to its ability to fit on a single GPU, Nemotron-4 aims to be the best general-purpose model given practical constraints. It achieves better accuracy than the 34-billion parameter LLaMA model for all tasks and remains competitive with state-of-the-art models like QWEN 14B.
Why does this matter?
Just as past computing innovations improved technology access, Nemotron’s lean GPU deployment profile can expand multilingual NLP adoption. Since Nemotron fits on a single cloud graphics card, it dramatically reduces costs for document, query, and application NLP compared to alternatives requiring supercomputers. These models can help every company become fluent with customers and operations across countless languages.
GitHub launches Copilot Enterprise for customized AI coding
GitHub has launched Copilot Enterprise, an AI assistant for developers at large companies. The tool provides customized code suggestions and other programming support based on an organization’s codebase and best practices. Experts say Copilot Enterprise signals a significant shift in software engineering, with AI essentially working alongside each developer.
Copilot Enterprise integrates across the coding workflow to boost productivity. Early testing by partners like Accenture found major efficiency gains, with a 50% increase in builds from autocomplete alone. However, GitHub acknowledges skepticism around AI originality and bugs. The company plans substantial investments in responsible AI development, noting that Copilot is designed to augment human developers rather than replace them.
Why does this matter?
The entire software team could soon have an AI partner for programming. However, concerns about responsible AI development persist. Enterprises must balance rapidly integrating tools like Copilot with investments in accountability. How leadership approaches AI strategy now will separate future winners from stragglers.
Slack study shows AI frees up 41% of time spent on low-value work
Slack’s latest workforce survey shows a surge in the adoption of AI tools among desk workers. There has been a 24% increase in usage over the past quarter, and 80% of users are already seeing productivity gains. However, less than half of companies have guidelines around AI adoption, which may inhibit experimentation. The research also spotlights an opportunity to use AI to automate the 41% of workers’ time spent on repetitive, low-value tasks. And focus efforts on meaningful, strategic work.
While most executives feel urgency to implement AI, top concerns include data privacy and AI accuracy. According to the findings, guidance is necessary to boost employee adoption. Workers are over 5x more likely to have tried AI tools at companies with defined policies.
Why does this matter?
This survey signals AI adoption is already boosting productivity when thoughtfully implemented. It can free up significant time spent on repetitive tasks and allows employees to refocus on higher-impact work. However, to realize AI’s benefits, organizations must establish guidelines and address data privacy and reliability concerns. Structured experimentation with intuitive AI systems can increase productivity and data-driven decision-making.
OpenAI is collaborating with robotics startup Figure to integrate its AI technology into humanoid robots, marking the AI’s debut in the physical world.
The partnership aims to develop humanoid robots for commercial use, with significant funding from high-profile investors including Jeff Bezos, Microsoft, Nvidia, and Amazon.
The initiative will leverage OpenAI’s advanced AI models, such as GPT and DALL-E, to enhance the capabilities of Figure’s robots, aiming to address human labor shortages.
GitHub besieged by millions of malicious repositories in ongoing attack
Hackers have automated the creation of malicious GitHub repositories by cloning popular repositories, infecting them with malware, and forking them thousands of times, resulting in hundreds of thousands of malicious repositories designed to steal information.
The malware, hidden behind seven layers of obfuscation, includes a modified version of BlackCap-Grabber, which steals authentication cookies and login credentials from various apps.
While GitHub uses artificial intelligence to block most cloned malicious packages, 1% evade detection, leading to thousands of malicious repositories remaining on the platform.
Nvidia just released a new code generator that can run on most modern CPUs
Nvidia, ServiceNow, and Hugging Face have released StarCoder2, a series of open-access large language models for code generation, emphasizing efficiency, transparency, and cost-effectiveness.
StarCoder2, trained on 619 programming languages, comes in three sizes: 3 billion, 7 billion, and 15 billion parameters, with the smallest model matching the performance of its predecessor’s largest.
The platform highlights advancements in AI ethics and efficiency, utilizing a new code dataset for enhanced understanding of diverse programming languages and ensuring adherence to ethical AI practices by allowing developers to opt out of data usage.
Three more publishers sue OpenAI
The Intercept, Raw Story, and AlterNet have filed lawsuits against OpenAI and Microsoft in the Southern District of New York, alleging copyright infringement through the training of AI models without proper attribution.
The litigation claims that ChatGPT reproduces journalism works verbatim or nearly verbatim without providing necessary copyright information, suggesting that if trained properly, it could have included these details in its outputs.
The suits argue that OpenAI and Microsoft knowingly risked copyright infringement for profit, evidenced by their provision of legal cover to customers and the existence of an opt-out system for web content crawling.
What Else Is Happening in AI on February 28th, 2024
Pika launches new lip sync feature for AI videos
Video startup Pika announced a new Lip Sync feature powered by ElevenLabs. Pro users can add realistic dialogue with animated mouths to AI-generated videos. Although currently limited, Pika’s capabilities offer customization of the speech style, text, or uploaded audio tracks, escalating competitiveness in the AI synthetic media space. (Link)
Google pays publishers to test an unreleased GenAI tool
Google is privately paying a group of publishers to test a GenAI tool. They need to summarize three articles daily based on indexed external sources in exchange for a five-figure annual fee. Google says this will help under-resourced news outlets, but experts say it could negatively affect original publishers and undermine Google’s news initiative. (Link)
Intel and Microsoft team up to bring 100M AI PCs by 2025
By collaborating with Microsoft, Intel aims to supply 100 million AI-powered PCs by 2025 and ramp up enterprise demand for efficiency gains. Despite Apple and Qualcomm’s push for Arm-based designs, Intel hopes to maintain its 76% laptop chip market share following post-COVID inventory corrections. (Link)
Writer’s Palmyra-Vision summarizes charts, scribbles into text
AI writing startup Writer announced a new capability of its Palmyra model called Palmyra-Vision. This model can generate text summaries from images, including charts, graphs, and handwritten notes. It can automate e-commerce merchandise descriptions, graph analysis, and compliance checking while recommending human-in-the-loop for accuracy. (Link)
Apple cancels its decade-long electric car project
Apple is canceling its decade-long electric vehicle project after spending over $10 billion. There were nearly 2,000 employees working on the effort known internally as Titan. After Apple announces the cancellation of its ambitious electric car project, some staff from the discontinued car team will shift to other teams such as Gen AI. (Link)
Nvidia’s New AI Laptops
Nvidia, the dominant force in graphics processing units (GPUs), has once again pushed the boundaries of portable computing. Their latest announcement showcases a new generation of laptops powered by the cutting-edge RTX 500 and 1000 Ada Generation GPUs. The focus here isn’t just on better gaming visuals – these laptops promise to transform the way we interact with artificial intelligence (AI) on the go.
Nvidia’s new laptop GPUs are purpose-built to accelerate AI workflows. Let’s break down the key components:
Specialized AI Hardware: The RTX 500 and 1000 GPUs feature dedicated Tensor Cores. These cores are the heart of AI processing, designed to handle complex mathematical operations involved in machine learning and deep learning at incredible speed.
Generative AI Powerhouse: These new GPUs bring a massive boost for generative AI applications like Stable Diffusion. This means those interested in creating realistic images from simple text descriptions can expect to see significant performance improvements.
Efficiency Meets Power: These laptops aren’t just about raw power. They’re designed to intelligently offload lighter AI tasks to a dedicated Neural Processing Unit (NPU) built into the CPU, conserving GPU resources for the most demanding jobs.
What does this mean?
These advancements translate into a wide range of ground-breaking possibilities:
Photorealistic Graphics Enhanced by AI: Gamers can immerse themselves in more realistic and visually stunning worlds thanks to AI-powered technologies enhancing graphics rendering.
AI-Supercharged Productivity: From generating social media blurbs to advanced photo and video editing, professionals can complete creative tasks far more efficiently with AI assistance.
Real-time AI Collaboration: Features like AI-powered noise cancellation and background manipulation in video calls will elevate your virtual communication to a whole new level.
Why should I care?
Nvidia’s latest AI-focused laptops have the potential to revolutionize the way we use our computers:
Portable Creativity: Whether you’re an artist, designer, or just someone who loves to experiment with AI art tools, these laptops promise a level of on-the-go creative freedom previously unimaginable.
Workplace Transformation: Industries from architecture to healthcare will see AI optimize processes and enhance productivity. These laptops put that power directly into the hands of professionals.
The Future is AI: AI is advancing at a blistering pace, and Nvidia is ensuring that we won’t be tied to our desks to experience it.
In short, Nvidia’s new generation of AI laptops heralds an era where high-performance, AI-driven computing becomes accessible to more people. This has the potential to spark a wave of innovation that we can’t even fully comprehend yet.
A Daily Chronicle of AI Innovations in February 2024 – Day 27: AI Daily News – February 27th, 2024
Tesla’s robot is getting quicker, better
Nvidia CEO: kids shouldn’t learn to code — they should leave it up to AI
Microsoft’s deal with Mistral AI faces EU scrutiny
Apple Vision Pro’s components cost $1,542—but that’s not the full story
PlayStation to axe 900 jobs and close studio
NVIDIA’s CEO Thinks That Our Kids Shouldn’t Learn How to Code As AI Can Do It for Them
During the latest World Government Summit in Dubai, Jensen Huang, the CEO of NVIDIA, spoke about the things our kids should and shouldn’t learn in the future. It may come as a surprise to many but Huang does think that our kids don’t need the knowledge of coding, just leave it to AI.
He mentioned that a decade ago, there was a belief that everyone needed to learn to code, and they were probably right, but based on what we see nowadays, the situation has changed due to achievements in AI, where everyone is literally a programmer.
He further talked about how kids may not necessarily need to learn how to code, and the focus should be on developing technology that allows for programming languages to be more human-like. In essence, traditional coding languages such as C++ or Java may become obsolete, as computers should be able to comprehend human language inputs.
Mistral Large: The new rival to GPT-4, 2nd best LLM of all time
The French AI startup Mistral has launched its largest-ever LLM and flagship model to date, Mistral Large, with a 32K context window. The model has top-tier reasoning capabilities, and you can use it for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.
Due to a strong multitasking capability, Mistral Large is the world’s second-ranked model on MMLU (Massive multitask language understanding).
The model is natively fluent in English, French, Spanish, German, and Italian, with a nuanced understanding of grammar and cultural context. In addition to that, Mistral also shows top performance in coding and math tasks.
Mistral Large is now available via the in-house platform “La Plateforme” and Microsoft’s Azure AI via API.
Why does it matter?
Mistral Large stands out as the first model to truly challenge OpenAI’s dominance since GPT-4. It shows skills on par with GPT-4 for complex language tasks while costing 20% less. In this race to make their models better, it’s the user community that stands to gain the most. Also, the focus on European languages and cultures could make Mistral a leader in the European AI market.
DeepMind’s new gen-AI model creates video games in a flash
Google DeepMind has launched a new generative AI model – Genie (Generative Interactive Environment), that can create playable video games from a simple prompt after learning game mechanics from hundreds of thousands of gameplay videos.
Developed by the collaborative efforts of Google and the University of British Columbia, Genie can create side-scrolling 2D platformer games based on user prompts, like Super Mario Brothers and Contra, using a single image.
Trained on over 200,000 hours of gameplay videos, the experimental model can turn any image or idea into a 2D platformer.
Genie can be prompted with images it has never seen before, such as real-world photographs or sketches, enabling people to interact with their imagined virtual worlds-–essentially acting as a foundation world model. This is possible despite training without any action labels.
Why does it matter?
Genie creates a watershed moment in the generative AI space, becoming the first LLM to develop interactive, playable environments from a single image prompt. The model could be a promising step towards general world models for AGI (Artificial General Intelligence) that can understand and apply learned knowledge like a human. Lastly, Genie can learn fine-grained controls exclusively from Internet videos, a unique feature as Internet videos do not typically have labels.
Meta has released a research paper that addresses the need for efficient large language models that can run on mobile devices. The focus is on designing high-quality models with under 1 billion parameters, as this is feasible for deployment on mobiles.
By using deep and thin architectures, embedding sharing, and grouped-query attention, they developed a strong baseline model called MobileLLM, which achieves 2.7%/4.3% higher accuracy compared to previous 125M/350M state-of-the-art models. The research paper highlights that you should concentrate on developing an efficient model architecture rather than on data and parameter quantity to determine model quality.
Why does it matter?
With language understanding now possible on consumer devices, mobile developers can create products that were once hard to build because of latency or privacy issues when reliant on cloud connections. This advancement allows industries like finance, gaming, and personal health to integrate conversational interfaces, intelligent recommendations, and real-time data privacy protections using models optimized for mobile efficiency, sparking creativity in a new wave of intelligent apps.
What Else Is Happening in AI on February 27th, 2024
Qualcomm reveals 75+ pre-optimized AI models at MWC 2024
Qualcomm released 75+ new large language models, including popular generative models like Whisper and Stable Diffusion, optimized for the Snapdragon platform at the Mobile World Congress (MWC) 2024. The company stated that some of these LLMs will have generation AI capabilities for next-generation smartphones, PCs, IoT, XR devices, etc. (Link)
Nvidia launches new laptop GPUs for AI on the go
Nvidia launched RTX 500 and 1000 Ada Generation laptop graphics processing units (GPUs) at the MWC 2024 for on-the-go AI processing. These GPUs will utilize the Ada Lovelace architecture to provide content creators, researchers, and engineers with accelerated AI and next-generation graphic performance while working from portable devices. (Link)
Microsoft announces AI principles for boosting innovation and competition
Microsoft announced a set of principles to foster innovation and competition in the AI space. The move came to showcase its role as a market leader in promoting responsible AI and answer the concerns of rivals and antitrust regulators. The standard covers six key dimensions of responsible AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. (Link)
Google brings Gemini in Google Messages, Android Auto, Wear OS, etc.
Despite receiving some flakes from the industry, Google is riding the AI wave and decided to integrate Gemini into a new set of features for phones, cars, and wearables. With these new features, users can use Gemini to craft messages and AI-generated captions for images, summarize texts through AI for Android Auto, and access passes on Wear OS. (Link)
Microsoft Copilot GPTs help you plan your vacation and find recipes.
Microsoft has released a few copilot GPTs that can help you plan your next vacation, find recipes, learn how to cook them, create a custom workout plan, or design a logo for your brand. Microsoft corporate vice president Jordi Ribas informed the media that users will soon be able to create customized Copilot GPTs, which is missing in the current version of Copilot. (Link)
Tesla’s robot is getting quicker, better
Elon Musk shared new footage showing improved mobility and speed of Tesla’s robot, Optimus Gen 2, which is moving more smoothly and steadily around a warehouse.
The latest version of the Optimus robot is lighter, has increased walking speed thanks to Tesla-designed actuators and sensors, and demonstrates significant progress over previous models.
Musk predicts the possibility of Optimus starting to ship in 2025 for less than $20,000, marking a significant milestone in Tesla’s venture into humanoid robotics capable of performing mundane or dangerous tasks for humans.
” We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future. “
I asked GPT4 to read through the article and summarize ELI5 style bullet points:
Who Wrote This?
A group of smart people at Google DeepMind wrote the article. They’re working on making things better for turning text into webpages.
What Did They Do?
They created something called “Genie.” It’s like a magic tool that can take all sorts of ideas or pictures and turn them into a place you can explore on a computer, like making your own little video game world from a drawing or photo. They did this by watching lots and lots of videos from the internet and learning how things move and work in those videos.
How Does It Work?
They use something called “Genie” which is very smart and can understand and create new videos or game worlds by itself. You can even tell it what to do next in the world it creates, like moving forward or jumping, and it will show you what happens.
Why Is It Cool?
Because Genie can create new, fun worlds just from a picture or some words, and you can play in these worlds! It’s like having a magic wand to make up your own stories and see them come to life on a computer.
What’s Next?
Even though Genie is really cool, it’s not perfect. Sometimes it makes mistakes or can’t remember things for very long. But the people who made it are working to make it better, so one day, everyone might be able to create their own video game worlds just by imagining them.
Important Points:
They want to make sure this tool is used in good ways and that it’s safe for everyone. They’re not sharing it with everyone just yet because they want to make sure it’s really ready and won’t cause any problems.
Microsoft eases AI testing with new red teaming tool
Microsoft has released an open-source automation called PyRIT to help security researchers test for risks in generative AI systems before public launch. Historically, “red teaming” AI has been an expert-driven manual process requiring security teams to create edge case inputs and assess whether the system’s responses contain security, fairness, or accuracy issues. PyRIT aims to automate parts of this tedious process for scale.
PyRIT helps researchers test AI systems by inputting large datasets of prompts across different risk categories. It automatically interacts with these systems, scoring each response to quantify failures. This allows for efficient testing of thousands of input variations that could cause harm. Security teams can then take this evidence to improve the systems before release.
Why does this matter?
Microsoft’s release of the PyRIT toolkit makes rigorously testing AI systems for risks drastically more scalable. Automating parts of the red teaming process will enable much wider scrutiny for generative models and eventually raise their performance standards. PyRIT’s automation will also pressure the entire industry to step up evaluations if they want their AI trusted.
Transformers learn to plan better with Searchformer
A new paper from Meta introduces Searchformer, a Transformer model that exceeds the performance of traditional algorithms like A* search in complex planning tasks such as maze navigation and Sokoban puzzles. Searchformer is trained in two phases: first imitating A* search to learn general planning skills, then fine-tuning the model via expert iteration to find optimal solutions more efficiently.
The key innovation is the use of search-augmented training data that provides Searchformer with both the execution trace and final solution for each planning task. This enables more data-efficient learning compared to models that only see solutions. However, encoding the full reasoning trace substantially increases the length of training sequences. Still, Searchformer shows promising techniques for training AI to surpass symbolic planning algorithms.
Why does this matter?
Achieving state-of-the-art planning results shows that generative AI systems are advancing to develop human-like reasoning abilities. Mastering complex cognitive tasks like finding optimal paths has huge potential in AI applications that depend on strategic thinking and foresight. As other companies race to close this new gap in planning capabilities, progress in core areas like robotics and autonomy is likely to accelerate.
YOLOv9 sets a new standard for real-time object recognition
YOLO (You Only Look Once) is open-source software that enables real-time object recognition in images, allowing machines to “see” like humans. Researchers have launched YOLOv9, the latest iteration that achieves state-of-the-art accuracy with significantly less computational cost.
By introducing two new techniques, Programmable Gradient Information (PGI) and Generalized Efficient Layer Aggregation Network (GELAN), YOLOv9 reduces parameters by 49% and computations by 43% versus predecessor YOLOv8, while boosting accuracy on key benchmarks by 0.6%. PGI improves network updating for more precise object recognition, while GELAN optimizes the architecture to increase accuracy and speed.
Why does this matter?
The advanced responsiveness of YOLOv9 unlocks possibilities for mobile vision applications where computing resources are limited, like drones or smart glasses. More broadly, it highlights deep learning’s potential to match human-level visual processing speeds, encouraging technology advancements like self-driving vehicles.
What Else Is Happening in AI on February 26th, 2024
Apple tests internal ChatGPT-like tool for customer support
Apple recently launched a pilot program testing an internal AI tool named “Ask.” It allows AppleCare agents to generate technical support answers automatically by querying Apple’s knowledge base. The goal is faster and more efficient customer service. (Link)
ChatGPT gets an Android home screen widget
Android users can now access ChatGPT more easily through a home screen widget that provides quick access to the chatbot’s conversation and query modes. The widget is available in the latest beta version of the ChatGPT mobile app. (Link)
AWS adds open-source Mistral AI models to Amazon Bedrock
AWS announced it will be bringing two of Mistral’s high-performing generative AI models, Mistral 7B and Mixtral 8x7B, to its Amazon Bedrock platform for gen AI offerings in the near future. AWS chose Mistral’s cost-efficient and customizable models to expand the range of GenAI abilities for Bedrock users. (Link)
Montreal tests AI system to prevent subway suicides
The Montreal Transit Authority is testing an AI system that analyzes surveillance footage to detect warning signs of suicide risk among passengers. The system, developed with a local suicide prevention center, can alert staff to intervene and save lives. With current accuracy of 25%, the “promising” pilot could be implemented in two years. (Link)
Fast food giants embrace controversial AI worker tracking
Riley, an AI system by Hoptix, monitors worker-customer interactions in 100+ fast-food franchises to incentivize upselling. It tracks metrics like service speed, food waste, and upselling rates. Despite being a coaching tool, concerns exist regarding the imposition of unfair expectations on workers. (Link)
Mistral AI releases new model to rival GPT-4
Mistral AI introduces “Mistral Large,” a large language model designed to compete with top models like GPT-4 and Claude 2, and “Le Chat,” a beta chat assistant, aiming to establish an alternative to OpenAI and Anthropic’s offerings.
With aggressive pricing at $8 per million input tokens and $24 per million output tokens, Mistral Large offers a cost-effective solution compared to GPT-4’s pricing, supporting English, French, Spanish, German, and Italian.
The startup also revealed a strategic partnership with Microsoft to offer Mistral models on the Azure platform, enhancing Mistral AI’s market presence and potentially increasing its customer base through this new distribution channel.
Gemini is about to slide into your DMs
Google’s AI chatbot Gemini is being integrated into the Messages app as part of an Android update, aiming to make conversations more engaging and friend-like, initially available in English in select markets.
Android Auto receives AI improvements for summarizing long texts or chat threads and suggesting context-based replies, enhancing safety and convenience for drivers.
Google also introduces AI-powered accessibility features in Lookout and Maps, including screen reader enhancements and automatic generation of descriptions for images, to assist visually impaired users globally.
Microsoft tried to sell Bing to Apple in 2018
Microsoft attempted to sell its Bing search engine to Apple in 2018, aiming to make Bing the default search engine for Safari, but Apple declined due to concerns over Bing’s search quality.
The discussions between Apple and Microsoft were highlighted in Google’s court filings as evidence of competition in the search industry, amidst accusations against Google for monopolizing the web search sector.
Despite Microsoft’s nearly $100 billion investment in Bing over two decades, the search engine only secures a 3% global market share, while Google continues to maintain a dominant position, paying billions to Apple to remain the default search engine on its devices.
Meta forms team to stop AI from tricking voters
Meta is forming a dedicated task force to counter disinformation and harmful AI content ahead of the EU elections, focusing on rapid threat identification and mitigation.
The task force will remove harmful content from Facebook, Instagram, and Threads, expand its fact-checking team, and introduce measures for users and advertisers to disclose AI-generated material.
The initiative aligns with the Digital Services Act’s requirements for large online platforms to combat election manipulation, amidst growing concerns over the disruptive potential of AI and deepfakes in elections worldwide.
Samsung unveils the Galaxy Ring as way to ‘simplify everyday wellness’
Samsung teased the new Galaxy Ring at Galaxy Unpacked, showcasing its ambition to introduce a wearable that is part of a future vision for ambient sensing.
The Galaxy Ring, coming in three colors and various sizes, will feature sleep, activity, and health tracking capabilities, aiming to compete with products like the Oura Ring.
Samsung plans to integrate the Galaxy Ring into a larger ecosystem, offering features like My Vitality Score and Booster Cards in the Galaxy Health app, to provide a more holistic health monitoring system.
Impact of AI on Freelance Jobs
AI Weekly Rundown (February 19 to February 26)
Major AI announcements from NVIDIA, Apple, Google, Adobe, Meta, and more.
NVIDIA presents OpenMathInstruct-1, a 1.8 million math instruction tuning dataset – OpenMathInstruct-1 is a high-quality, synthetically generated dataset. It is 4x bigger than previous datasets and does not use GPT-4. The best model, OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves which is competitive performance with the best gpt-distilled models.
Apple is reportedly working on AI updates to Spotlight and Xcode – AI features for Spotlight search could let iOS and macOS users make natural language requests to get weather reports or operate features deep within apps. Apple also expanded internal testing of new generative AI features for its Xcode and plans to release them to third-party developers this year.
Microsoft arms white hat AI hackers with a new red teaming tool – PyRIT, an open-source tool from Microsoft, automates the testing of generative AI systems for risks before their public launch. It streamlines the “red teaming” process, traditionally a manual task, by inputting large datasets of prompts and scoring responses to identify potential issues in security, fairness, or accuracy.
Google has open-sourced Magika, its AI-powered file-type identification system – It helps accurately detect binary and textual file types. Under the hood, Magika employs a custom, highly optimized deep-learning model, enabling precise file identification within milliseconds, even when running on a CPU.
Groq’s new AI chip turbocharges LLMs, outperforms ChatGPT – Groq, an AI chip startup, has developed a special AI hardware– the first-ever Language Processing Unit (LPU) that turbocharges LLMs and processes up to 500 tokens/second, which is far more superior than ChatGPT-3.5’s 40 tokens/second.
Transformers learn to plan better with Searchformer – Meta’s Searchformer, a Transformer model, outperforms traditional algorithms like A* search in complex planning tasks. It’s trained to imitate A* search for general planning skills and then fine-tuned for optimal solutions using expert iteration and search-augmented training data.
Apple tests internal chatGPT-like tool for customer support – Apple recently launched a pilot program testing an internal AI tool named “Ask.” It allows AppleCare agents to automatically generate technical support answers by querying Apple’s knowledge base. The goal is faster and more efficient customer service.
BABILong: The new benchmark to assess LLMs for long docs – The paper uncovers limitations in GPT-4 and RAG, showing reliance on the initial 25% of input. BABILong evaluates GPT-4, RAG, and RMT, revealing that conventional methods are effective for 10^4 elements, while recurrent memory augmentation handles 10^7 elements, thereby setting a new advancement for long doc understanding.
Stanford’s AI model identifies sex from brain scans with 90% accuracy – Stanford medical researchers have developed an AI model that can identify the sex of individuals from brain scans with 90% accuracy. The model focuses on dynamic MRI scans, identifying specific brain networks to distinguish males and females.
Adobe’s new AI assistant manages documents for you – Adobe introduced an AI assistant for easier document navigation, answering questions, and summarizing information. It locates key data, generates citations, and formats brief overviews for presentations and emails to save time. Moreover, Adobe introduced CAVA, a new 50-person AI research team focused on inventing new models and processes for AI video creation.
Meta released Aria recordings to fuel smart speech recognition – The Meta team released a multimodal dataset of two-sided conversations captured by Aria smart glasses. It contains audio, video, motion, and other sensor data. The diverse signals aim to advance speech recognition and translation research for augmented reality interfaces.
AWS adds open-source Mistral AI models to Amazon Bedrock – AWS announced it will be bringing two of Mistral’s high-performing generative AI models, Mistral 7B and Mixtral 8x7B, to its Amazon Bedrock platform for GenAI offerings in the near future. AWS chose Mistral’s cost-efficient and customizable models to expand the range of GenAI abilities for Bedrock users.
Penn’s AI chip runs on light, not electricity – Penn engineers developed a new photonic chip that performs complex math for AI. It reduces processing time and energy consumption using light waves instead of electricity. This design uses optical computing principles developed by Penn professor Nader Engheta and nanoscale silicon photonics to train and infer neural networks.
Google launches its first open-source LLM – Google has open-sourced Gemma, a lightweight yet powerful new family of language models that outperforms larger models on NLP benchmarks but can run on personal devices. The release also includes a Responsible Generative AI Toolkit to assist developers in safely building applications with Gemma, now accessible through Google Cloud, Kaggle, Colab and other platforms.
AnyGPT is a major step towards artificial general intelligence – Researchers in Shanghai have developed AnyGPT, a groundbreaking new AI model that can understand and generate data across virtually any modality like text, speech, images and music using a unified discrete representation. It achieves strong zero-shot performance comparable to specialized models, representing a major advance towards AGI.
Google launches Gemini for Workspace: Google has launched Gemini for Workspace, bringing Gemini’s capabilities into apps like Docs and Sheets to enhance productivity. The new offering comes in Business and Enterprise tiers and features AI-powered writing assistance, data analysis, and a chatbot to help accelerate workflows.
Stable Diffusion 3 – A multi-subject prompting text-to-image model – Stability AI’s Stable Diffusion 3 is generating excitement in the AI community due to its improved text-to-image capabilities, including better prompt adherence and image quality. The early demos have shown remarkable improvements in generation quality, surpassing competitors such as MidJourney, Dall-E 3, and Google ImageFX.
LongRoPE: Extending LLM context window beyond 2 million tokens – Microsoft’s LongRoPE extends large language models to 2048k tokens, overcoming challenges of high fine-tuning costs and scarcity of long texts. It shows promising results with minor modifications and optimizations.
Google Chrome introduces “Help me write” AI feature – Google’s “Help me write” is an experimental AI feature on its Chrome browser that offers writing suggestions for short-form content. It highlights important features mentioned on a product page and can be accessed by enabling Chrome’s Experimental AI setting.
Montreal tests AI system to prevent subway suicides – The Montreal transit authority is testing an AI system that analyzes surveillance footage to detect warning signs of suicide risk among passengers. The system, developed with a local suicide prevention center, can alert staff to intervene and save lives. With current accuracy of 25%, the “promising” pilot could be implemented in two years.
Fast food giants embrace controversial AI worker tracking – Riley, an AI system by Hoptix, monitors worker-customer interactions in 100+ fast food franchises to incentivize upselling. It tracks metrics like service speed, food waste, and upselling rates. Despite being a coaching tool, concerns exist regarding the imposition of unfair expectations on workers. And there was more… – SoftBank’s founder is seeking about $100 billion for an AI chip venture – ElevenLabs teases a new AI sound effects feature – NBA commissioner Adam Silver demonstrates NB-AI concept – Reddit signs AI content licensing deal ahead of IPO – ChatGPT gets an Android homescreen widget – YOLOv9 sets a new standard for real-time object recognition – Mistral quietly released a new model in testing called ‘next’ – Microsoft to invest $2.1 billion for AI infrastructure expansion in Spain – Graphcore explores sales talk with OpenAI, Softbank, and Arm – OpenAI’s Sora can craft impressive video collages – US FTC proposes a prohibition law on AI impersonation – Meizu bids farewell to the smartphone market; shifts focus on AI – Microsoft develops server network cards to replace NVIDIA’s cards – Wipro and IBM team up to accelerate enterprise AI – Deutsche Telekom revealed an AI-powered app-free phone concept – Tinder fights back against AI dating scams – Intel lands a $15 billion deal to make chips for Microsoft – DeepMind forms new unit to address AI dangers – Match Group bets on AI to help its workers improve dating apps – Google Play Store tests AI-powered app recommendations – Google cut a deal with Reddit for AI training data – GPT Store introduces linking profiles, ratings, and enhanced ‘About’ pages – Microsoft introduces a generative erase feature for AI-editing photos in Windows 11 – Suno AI V3 Alpha is redefining music generation – Jasper acquires image platform Clipdrop from Stability AI
A Daily Chronicle of AI Innovations in February 2024 – Day 24: AI Daily News – February 24th, 2024
Google’s chaotic AI strategy
Google’s AI strategy has resulted in confusion among consumers due to a rapid succession of new products, names, and features, compromising public trust in both AI and Google itself.
The company has launched a bewildering array of AI products with overlapping and inconsistent naming schemes, such as Bard transforming into Gemini, alongside multiple versions of Gemini, complicating user understanding and adoption.
Google’s rushed approach to competing with rivals like OpenAI has led to a chaotic rollout of AI offerings, leaving customers and even its own employees mocking the company’s inability to provide clear and accessible AI solutions.
Filmmaker puts $800 million studio expansion on hold because of OpenAI’s Sora
Tyler Perry paused a $800 million expansion of his Atlanta studio after being influenced by OpenAI’s video AI model Sora, expressing concerns over AI’s impact on the film industry and job losses.
Perry has started utilizing AI in film production to save time and costs, for example, in applying aging makeup, yet warns of the potential job displacement this technology may cause.
The use of AI in Hollywood has led to debates on its implications for jobs, with calls for regulation and fair compensation, highlighted by actions like strikes and protests by SAG-AFTRA members.
Google explains Gemini’s ‘embarrassing’ AI pictures
Google addressed the issue of Gemini AI producing historically inaccurate images, such as racially diverse Nazis, attributing the error to tuning issues within the model.
The problem arose from the AI’s overcompensation in its attempt to show diversity, leading to inappropriate image generation and an overly cautious approach to generating images of specific ethnicities.
Google has paused the image generation feature in Gemini since February 22, with plans to improve its accuracy and address the challenge of AI-generated “hallucinations” before reintroducing the feature.
Apple tests internal ChatGPT-like AI tool for customer support
Apple is conducting internal tests on a new AI tool named “Ask,” designed to enhance the speed and efficiency of technical support provided by AppleCare agents.
The “Ask” tool generates answers to customer technical queries by leveraging Apple’s internal knowledge base, allowing agents to offer accurate, clear, and useful assistance.
Beyond “Ask,” Apple is significantly investing in AI, developing its own large language model framework, “Ajax,” and a chatbot service, “AppleGPT”.
Figure AI’s humanoid robots attract funding from Microsoft, Nvidia, OpenAI, and Jeff Bezos
Jeff Bezos, Nvidia, and other tech giants are investing in Figure AI, a startup developing human-like robots, raising about $675 million at a valuation of roughly $2 billion.
Figure’s robot, named Figure 01, is designed to perform dangerous jobs unsuitable for humans, with the company aiming to address labor shortages.
The investment round, initially seeking $500 million, attracted widespread industry support, including contributions from Microsoft, Amazon-affiliated funds, and venture capital firms, marking a significant push into AI-driven robotics.
A Daily Chronicle of AI Innovations in February 2024 – Day 23: AI Daily News – February 23rd, 2024
Stable Diffusion 3 creates jaw-dropping images from text LongRoPE: Extending LLM context window beyond 2 million token Google Chrome introduces “Help me write” AI feature
Jasper acquires image platform Clipdrop from Stability AI
Suno AI V3 Alpha is redefining music generation.
GPT Store introduces linking profiles, ratings, and enhanced about pages.
Microsoft introduces a generative erase feature for AI-editing photos in Windows 11.
Google cut a deal with Reddit for AI training data.
Stability.AI announced the Stable Diffusion 3 in an early preview. It is a text-to-image model with improved performance in multi-subject prompts, image quality, and spelling abilities. Stability.AI has opened the model waitlist and introduced a preview to gather insights before the open release.
Stability AI’s Stable Diffusion 3 preview has generated significant excitement in the AI community due to its superior image and text generation capabilities. This next-generation image tool promises better text generation, strong prompt adherence, and resistance to prompt leaking, ensuring the generated images match the requested prompts.
Why does it matter?
The announcement of Stable Diffusion 3 is a significant development in AI image generation because it introduces a new architecture with advanced features such as the diffusion transformer and flow matching. The early demos of Stable Diffusion 3 have shown remarkable improvements in overall generation quality, surpassing its competitors such as MidJourney, Dall-E 3, and Google ImageFX.
LongRoPE: Extending LLM context window beyond 2 million tokens
Researchers at Microsoft have introduced LongRoPE, a groundbreaking method that extends the context window of pre-trained large language models (LLMs) to an impressive 2048k tokens.
Current extended context windows are limited to around 128k tokens due to high fine-tuning costs, scarcity of long texts, and catastrophic values introduced by new token positions. LongRoPE overcomes these challenges by leveraging two forms of non-uniformities in positional interpolation, introducing a progressive extension strategy, and readjusting the model on shorter context windows.
Experiments on LLaMA2 and Mistral across various tasks demonstrate the effectiveness of LongRoPE. The extended models retain the original architecture with minor positional embedding modifications and optimizations.
Why does it matter?
LongRoPE extends the context window in LLMs and opens up possibilities for long-context tasks beyond 2 million tokens. This is the highest supported token, especially when other models like Google Gemini Pro have capabilities of up to 1 million tokens. Another major impact it will have is an extended context window for open-source models, unlike top proprietary models.
Google Chrome introduces “Help me write” AI feature
Google has recently rolled out an experimental AI feature called “Help me write” for its Chrome browser. This feature, powered by Gemini, aims to assist users in writing or refining text based on webpage content. It focuses on providing writing suggestions for short-form content, such as filling in digital surveys and reviews and drafting descriptions for items being sold online.
The tool can understand the webpage’s context and pull relevant information into its suggestions, such as highlighting critical features mentioned on a product page for item reviews. Users can right-click on an open text field on any website to access the feature on Google Chrome.
This feature is currently only available for English-speaking Chrome users in the US on Mac and Windows PCs. To access this tool, users in the US can enable Chrome’s Experimental AI under the “Try out experimental AI features” setting.
Why does it matter?
Google Chrome’s “Help me write” AI feature can aid users in completing surveys, writing reviews, and drafting product descriptions. However, it is still in its early stages and may not inspire user confidence compared to Microsoft’s Copilote on Edge browser. Adjusting the prompts and resulting text can negate any time-saving benefits, leaving the effectiveness of this feature for Google Chrome users open for debate.
What Else Is Happening in AI on February 23rd, 2024
Google cut a deal with Reddit for AI training data.
Google and Reddit have formed a partnership that will benefit both companies. Google will pay $60 million per year for real-time access to Reddit’s data, while Reddit will gain access to Google’s Vertex AI platform. This will help Google train its AI and ML models at scale while also giving Reddit expanded access to Google’s services. (Link)
GPT Store introduces linking profiles, ratings, and enhanced about pages.
OpenAI’s GPT Store platform has new features. Builders can link their profiles to GitHub and LinkedIn, and users can leave ratings and feedback. The About pages for GPTs have also been enhanced. T (Link)
Microsoft introduces a generative erase feature for AI-editing photos in Windows 11.
Microsoft’s Photos app now has a Generative Erase feature powered by AI. It enables users to remove unwanted elements from their photos, including backgrounds. The AI edit features are currently available to Windows Insiders, and Microsoft plans to roll out the tools to Windows 10 users. However, there is no clarity on whether AI-edited photos will have watermarks or metadata to differentiate them from unedited photos. (Link)
Suno AI V3 Alpha is redefining music generation.
The V3 Alpha version of Suno AI’s music generation platform offers significant improvements, including better audio quality, longer clip length, and expanded language coverage. The update aims to redefine the state-of-the-art for generative music and invites user feedback with 300 free credits given to paying subscribers as a token of appreciation. (Link)
Jasper acquires image platform Clipdrop from Stability AI
Jasper acquires AI image creation and editing platform Clipdrop from Stability AI, expanding its conversational AI toolkit with visual capabilities for a comprehensive multimodal marketing copilot. The Clipdrop team will work in Paris to contribute to research and innovation on multimodality, furthering Jasper’s vision of being the most all-encompassing end-to-end AI assistant for powering personalized marketing and automation. (Link)
A Daily Chronicle of AI Innovations in February 2024 – Day 22: AI Daily News – February 22nd, 2024
Google suspends Gemini from making AI images after backlash
Google has temporarily halted the ability of its Gemini AI to create images of people following criticisms over its generation of historically inaccurate and racially diverse images, such as those of US Founding Fathers and Nazi-era soldiers.
This decision comes shortly after Google issued an apology for the inaccuracies in some of the historical images generated by Gemini, amid backlash and conspiracy theories regarding the depiction of race and gender.
Google plans to improve Gemini’s image generation capabilities concerning people and intends to re-release an enhanced version of this feature in the near future, aiming for more accurate and sensitive representations.
Nvidia posts revenue up 265% on booming AI business
Nvidia’s data center GPU sales soared by 409% due to a significant increase in demand for AI chips, with the company reporting $18.4 billion in revenue for this segment.
The company exceeded Wall Street’s expectations in its fourth-quarter financial results, projecting $24 billion in sales for the current quarter against analysts’ forecasts of $22.17 billion.
Nvidia has become a key player in the AI industry, with massive demand for its GPUs from tech giants and startups alike, spurred by the growth in generative AI applications.
Microsoft and Intel strike a custom chip deal that could be worth billions
Intel will produce custom chips designed by Microsoft in a deal valued over $15 billion, although the specific applications of these chips remain unspecified.
The chips will utilize Intel’s 18A process, marking a significant step in Intel’s strategy to lead in chip manufacturing by offering foundry services for custom chip designs.
Intel’s move to expand its foundry services and collaborate with Microsoft comes amidst challenges, including the delayed opening of a $20 billion chip plant in Ohio.
AI researchers’ open letter demands action on deepfakes before they destroy democracy
An open letter from AI researchers demands government action to combat deepfakes, highlighting their threat to democracy and proposing measures such as criminalizing deepfake child pornography.
The letter warns about the rapid increase of deepfakes, with a 550% rise between 2019 and 2023, detailing that 98% of deepfake videos are pornographic, predominantly victimizing women.
Signatories, including notable figures like Jaron Lanier and Frances Haugen, advocate for the development and dissemination of content authentication methods to distinguish real from manipulated content.
Stability AI’s Stable Diffusion 3 preview boasts superior image and text generation capabilities
Stability AI introduces Stable Diffusion 3, showcasing enhancements in image generation, complex prompt execution, and text-generation capabilities.
The model incorporates the Diffusion Transformer Architecture with Flow Matching, ranging from 800 million to 8 billion parameters, promising a notable advance in AI-driven content creation.
Despite its potential, Stability AI takes rigorous safety measures to mitigate misuse and collaborates with the community, amidst concerns over training data and the ease of modifying open-source models.
Google has open-sourced Gemma, a new family of state-of-the-art language models available in 2B and 7B parameter sizes. Despite being lightweight enough to run on laptops and desktops, Gemma models have been built with the same technology used for Google’s massive proprietary Gemini models and achieve remarkable performance – the 7B Gemma model outperforms the 13B LLaMA model on many key natural language processing benchmarks.
Alongside the Gemma models, Google has released a Responsible Generative AI Toolkit to assist developers in building safe applications. This includes tools for robust safety classification, debugging model behavior, and implementing best practices for deployment based on Google’s experience. Gemma is available on Google Cloud, Kaggle, Colab, and a few other platforms with incentives like free credits to get started.
AnyGPT: A major step towards artificial general intelligence
Researchers in Shanghai have achieved a breakthrough in AI capabilities with the development of AnyGPT – a new model that can understand and generate data in virtually any modality, including text, speech, images, and music. AnyGPT leverages an innovative discrete representation approach that allows a single underlying language model architecture to smoothly process multiple modalities as inputs and outputs.
The researchers synthesized the AnyInstruct-108k dataset, containing 108,000 samples of multi-turn conversations, to train AnyGPT for these impressive capabilities. Initial experiments show that AnyGPT achieves zero-shot performance comparable to specialized models across various modalities.
Google launches Gemini for Workspace
Google has rebranded its Duet AI for Workspace offering as Gemini for Workspace. This brings the capabilities of Gemini, Google’s most advanced AI model, into Workspace apps like Docs, Sheets, and Slides to help business users be more productive.
The new Gemini add-on comes in two tiers – a Business version for SMBs and an Enterprise version. Both provide AI-powered features like enhanced writing and data analysis, but Enterprise offers more advanced capabilities. Additionally, users get access to a Gemini chatbot to accelerate workflows by answering questions and providing expert advice. This offering pits Google against Microsoft, which has a similar Copilot experience for commercial users.
What Else Is Happening in AI on February 22nd, 2024
Intel lands a $15 billion deal to make chips for Microsoft
Intel will produce over $15 billion worth of custom AI and cloud computing chips designed by Microsoft, using Intel’s cutting-edge 18A manufacturing process. This represents the first major customer for Intel’s foundry services, a key part of CEO Pat Gelsinger’s plan to reestablish the company as an industry leader. (Link)
DeepMind forms new unit to address AI dangers
Google’s DeepMind has created a new AI Safety and Alignment organization, which includes an AGI safety team and other units working to incorporate safeguards into Google’s AI systems. The initial focus is on preventing bad medical advice and bias amplification, though experts believe hallucination issues can never be fully solved. (Link)
Match Group bets on AI to help its workers improve dating apps
Match Group, owner of dating apps like Tinder and Hinge, has signed a deal to use ChatGPT and other AI tools from OpenAI for over 1,000 employees. The AI will help with coding, design, analysis, templates, and communications. All employees using it will undergo training on responsible AI use. (Link)
Fintechs get a new ally against financial crime
Hummingbird, a startup offering tools for financial crime investigations, has launched a new product called Automations. It provides pre-built workflows to help financial investigators automatically gather information on routine crimes like tax evasion, freeing them up to focus on harder cases. Early customer feedback on Automations has been positive. (Link)
Google Play Store tests AI-powered app recommendations
Google is testing a new AI-powered “App Highlights” feature in the Play Store that provides personalized app recommendations based on user preferences and habits. The AI analyzes usage data to suggest relevant, high-quality apps to simplify discovery. (Link)
A Daily Chronicle of AI Innovations in February 2024 – Day 21: AI Daily News – February 21st, 2024
Introducing Gemma by Google – a family of lightweight, state-of-the-art open models for their class
#openmodels 1/n “Gemma open models Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide responsible use of Gemma models… Free credits for research and development Gemma is built for the open community of developers and researchers powering AI innovation. You can start working with Gemma today using free access in Kaggle, a free tier for Collab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for Google Cloud credits of up to $500,000 to accelerate their projects”.
Gemini 1.5 will be ~20x cheaper than GPT4 – this is an existential threat to OpenAI
From what we have seen so far Gemini 1.5 Pro is reasonably competitive with GPT4 in benchmarks, and the 1M context length and in-context learning abilities are astonishing.
What hasn’t been discussed much is pricing. Google hasn’t announced specific number for 1.5 yet but we can make an educated projection based on the paper and pricing for 1.0 Pro.
Google describes 1.5 as highly compute-efficient, in part due to the shift to a soft MoE architecture. I.e. only a small subset of the experts comprising the model need to be inferenced at a given time. This is a major improvement in efficiency from a dense model in Gemini 1.0.
And though it doesn’t specifically discuss architectural decisions for attention the paper mentions related work on deeply sub-quadratic attention mechanisms enabling long context (e.g. Ring Attention) in discussing Gemini’s achievement of 1-10M tokens. So we can infer that inference costs for long context are relatively manageable. And videos of prompts with ~1M context taking a minute to complete strongly suggest that this is the case barring Google throwing an entire TPU pod at inferencing an instance.
Putting this together we can reasonably expect that pricing for 1.5 Pro should be similar to 1.0 Pro. Pricing for 1.0 Pro is $0.000125 / 1K characters.
Compare that to $0.01 / 1K tokens for GPT4-Turbo. Rule of thumb is about 4 characters / token, so that’s $0.0005 for 1.5 Pro vs $0.01 for GPT-4, or a 20x difference in Gemini’s favor.
So Google will be providing a model that is arguably superior to GPT4 overall at a price similar to GPT-3.5.
If OpenAI isn’t able to respond with a better and/or more efficient model soon Google will own the API market, and that is OpenAI’s main revenue stream.
Adobe launched an AI assistant feature in its Acrobat software to help users navigate documents. It summarizes content, answers questions, and generates formatted overviews. The chatbot aims to save time working with long files and complex information. Additionally, Adobe created a dedicated 50-person AI research team called CAVA (Co-Creation for Audio, Video, & Animation) focused on advancing generative video, animation, and audio creation tools.
While Adobe already has some generative image capabilities, CAVA signals a push into underserved areas like procedurally assisted video editing. The research group will explore integrating Adobe’s existing creative tools with techniques like text-to-video generation. Adobe prioritizes more AI-powered features to boost productivity through faster document understanding or more automated creative workflows.
Why does this matter?
Adobe injecting AI into PDF software and standing up an AI research group signals a strategic push to lead in generative multimedia. Features like summarizing documents offer faster results, while envisaged video/animation creation tools could redefine workflows.
Meta released Aria recordings to fuel smart speech recognition
Meta has released a multi-modal dataset of two-person conversations captured on Aria smart glasses. It contains audio across 7 microphones, video, motion sensors, and annotations. The glasses were worn by one participant while speaking spontaneously with another compensated contributor.
The dataset aims to advance research in areas like speech recognition, speaker ID, and translation for augmented reality interfaces. Its audio, visual, and motion signals together provide a rich capture of natural talking that could help train AI models. Such in-context glasses conversations can enable closed captioning and real-time language translation.
Why does this matter?
By capturing real-world sensory signals from glasses-framed conversations, Meta bridges the gaps AI faces to achieve human judgment. Enterprises stand to gain more relatable, trustworthy AI helpers that feel less robotic and more attuned to nuances when engaging customers or executives.
Penn engineers have developed a photonic chip that uses light waves for complex mathematics. It combines optical computing research by Professor Nader Engheta with nanoscale silicon photonics technology pioneered by Professor Firooz Aflatouni. With this unified platform, neural networks can be trained and inferred faster than ever.
It allows accelerated AI computations with low power consumption and high performance. The design is ready for commercial production, including integration into graphics cards for AI development. Additional advantages include parallel processing without sensitive data storage. The development of this photonic chip represents significant progress for AI by overcoming conventional electronic limitations.
Why does this matter?
Artificial intelligence chips enable accelerated training and inference for new data insights, new products, and even new business models. Businesses that upgrade key AI infrastructure like GPUs with photonic add-ons will be able to develop algorithms with significantly improved accuracy. With processing at light speed, enterprises have an opportunity to avoid slowdowns by evolving along with light-based AI.
What Else Is Happening in AI on February 21st, 2024
Brain chip: Neuralink patient moves mouse with thoughts
Elon Musk announced that the first human to receive a Neuralink brain chip has recovered successfully. The patient can now move a computer mouse cursor on a screen just by thinking, showing the chip’s ability to read brain signals and control external devices. (Link)
Microsoft develops server network cards to replace NVIDIA
Microsoft is developing its own networking cards. These cards move data quickly between servers, seeking to reduce reliance on NVIDIA’s cards and lower costs. Microsoft hopes its new server cards will boost the performance of the NVIDIA chip server currently in use and its own Maia AI chips. (Link)
Wipro and IBM team up to accelerate enterprise AI
Wipro and IBM are expanding their partnership, introducing the Wipro Enterprise AI-Ready Platform. Using IBM Watsonx AI, clients can create fully integrated AI environments. This platform provides tools, language models, streamlined processes, and governance, focusing on industry-specific solutions to advance enterprise-level AI. (Link)
Telekom’s next big thing: an app-free AI Phone
Deutsche Telekom revealed an AI-powered app-free phone concept at MWC 2024, featuring a digital assistant that can fulfill daily tasks via voice and text. Created in partnership with Qualcomm and Brain.ai, the concierge-style interface aims to simplify life by anticipating user needs contextually using generative AI. (Link)
Tinder fights back against AI dating scams
Tinder is expanding ID verification, requiring a driver’s license and video selfie to combat rising AI-powered scams and dating crimes. The new safeguards aim to build trust, authenticity, and safety, addressing issues like pig butchering schemes using AI-generated images to trick victims. (Link)
Google launches two new AI models
Google has unveiled Gemma 2B and 7B, two new open-source AI models derived from its larger Gemini model, aiming to provide developers more freedom for smaller applications such as simple chatbots or summarizations.
Gemma models, despite being smaller, are designed to be efficient and cost-effective, boasting significant performance on key benchmarks which allows them to run on personal computing devices.
Unlike the closed Gemini model, Gemma is open source, making it accessible for a wider range of experimentation and development, and comes with a ‘responsible AI toolkit’ to help manage its open nature.
ChatGPT has meltdown and starts sending alarming messages to users
ChatGPT has started malfunctioning, producing incoherent responses, mixing Spanish and English without prompt, and unsettling users by implying physical presence in their environment.
The cause of the malfunction remains unclear, though OpenAI acknowledges the issue and is actively monitoring the situation, as evidenced by user-reported anomalies and official statements on their status page.
Some users speculate that the erratic behavior may relate to the “temperature” setting of ChatGPT, which affects its creativity and focus, noting previous instances where ChatGPT’s responses became unexpectedly lazy or sassy.
An Apple smart ring may be imminent
After years of research and filing several patent applications, Apple is reportedly close to launching a smart ring, spurred by Samsung’s tease of its own smart ring.
The global smart ring market is expected to grow significantly, from $20 million in 2023 to almost $200 million by 2031, highlighting potential interest in health-monitoring wearable tech.
Despite the lack of credible rumors or leaks, the number of patents filed by Apple suggests its smart ring development is advanced.
New hack clones fingerprints by listening to fingers swipe screens
Researchers from the US and China developed a method, called PrintListener, to recreate fingerprints from the sound of swiping on a touchscreen, posing a risk to biometric security systems.
PrintListener can achieve partial and full fingerprint reconstruction from fingertip friction sounds, with success rates of 27.9% and 9.3% respectively, demonstrating the technique’s potential threat.
To mitigate risks, suggested countermeasures include using specialized screen protectors or altering interaction with screens, amid concerns over fingerprint biometrics market’s projected growth to $75 billion by 2032.
iMessage gets major update ahead of ‘quantum apocalypse’
Apple is launching a significant security update in iMessage to protect against the potential threat of quantum computing, termed the “quantum apocalypse.”
The update, known as PQ3, aims to secure iMessage conversations against both classical and quantum computing threats by redefining encryption protocols.
Other companies, like Google, are also updating their security measures in anticipation of quantum computing challenges, with efforts being coordinated by the US National Institute of Standards and Technology (NIST).
A Daily Chronicle of AI Innovations in February 2024 – Day 20: AI Daily News – February 20th, 2024
Sora Explained in Layman terms
Sora, an AI model, combines Transformer techniques, which power language models like GPT, with diffusion techniques to predict words and generate sentences and to predict colors and transform fuzzy canvases into coherent images, respectively.
When a text prompt is inputted into Sora, it first employs a Transformer to extrapolate a more detailed video script from the given prompt. This script includes specific details such as camera angles, textures, and animations inferred from the text.
The generated video script is then passed to the diffusion side of Sora, where the actual video output is created. Historically, diffusion was only capable of producing images, but Sora overcame this limitation by introducing a new technique called SpaceTime patches.
SpaceTime patches act as an intermediary step between the Transformer and diffusion processes. They essentially break down the video into smaller pieces and analyze the pixel changes within each patch to learn about animation and physics.
While computers don’t truly understand motion, they excel at predicting patterns, such as changes in pixel colors across frames. Sora was pre-trained to understand the animation of falling objects by learning from various videos depicting downward motion.
By leveraging SpaceTime patches and diffusion, Sora can predict and apply the necessary color changes to transform a fuzzy video into the desired output. This approach is highly flexible and can accommodate videos of any format, making Sora a versatile and powerful tool for video production.
Sora’s ability to seamlessly integrate Transformer and diffusion techniques, along with its innovative use of SpaceTime patches, allows it to effectively translate text prompts into captivating and visually stunning videos. This remarkable AI creation has truly revolutionized the world of video production.
Groq’s New AI Chip Outperforms ChatGPT
Groq has developed a special AI hardware known as the first-ever Language Processing Unit (LPU) that aims to increase the processing power of current AI models that normally work on GPU. These LPUs can process up to 500 tokens/second, far superior to Gemini Pro and ChatGPT-3.5, which can only process between 30 and 50 tokens/second.
The company has designed its first-ever LPU-based AI chip named “GroqChip,” which uses a “tensor streaming architecture” that is less complex than traditional GPUs, enabling lower latency and higher throughput. This makes the chip a suitable candidate for real-time AI applications such as live-streaming sports or gaming.
Why does it matter?
Groq’s AI chip is the first-ever chip of its kind designed in the LPU system category. The LPUs developed by Groq can improve the deployment of AI applications and could present an alternative to Nvidia’s A100 and H100 chips, which are in high demand but have massive shortages in supply. It also signifies advancements in hardware technology specifically tailored for AI tasks. Lastly, it could stimulate further research and investment in AI chip design.
BABILong: The new benchmark to assess LLMs for long docs
The research paper delves into the limitations of current generative transformer models like GPT-4 when tasked with processing lengthy documents. It identifies a significant GPT-4 and RAG dependency on the initial 25% of input, indicating potential for enhancement. To address this, the authors propose leveraging recurrent memory augmentation within the transformer model to achieve superior performance.
Introducing a new benchmark called BABILong (Benchmark for Artificial Intelligence for Long-context evaluation), the study evaluates GPT-4, RAG, and RMT (Recurrent Memory Transformer). Results demonstrate that conventional methods prove effective only for sequences up to 10^4 elements, while fine-tuning GPT-2 with recurrent memory augmentations enables handling tasks involving up to 10^7 elements, highlighting its significant advantage.
Why does it matter?
The recurrent memory allows AI researchers and enthusiasts to overcome the limitations of current LLMs and RAG systems. Also, the BABILong benchmark will help in future studies, encouraging innovation towards a more comprehensive understanding of lengthy sequences.
Standford’s AI model identifies sex from brain scans with 90% accuracy
Standford medical researchers have developed a new-age AI model that determines the sex of individuals based on brain scans, with over 90% success. The AI model focuses on dynamic MRI scans, identifying specific brain networks—such as the default mode, striatum, and limbic networks—as critical in distinguishing male from female brains.
Why does it matter?
Over the years, there has been a constant debate in the medical field and neuroscience about whether sex differences in brain organization exist. AI has hopefully ended the debate once and for all. The research acknowledges that sex differences in brain organization are vital for developing targeted treatments for neuropsychiatric conditions, paving the way for a personalized medicine approach.
What Else Is Happening in AI on February 20th, 2024
Microsoft to invest $2.1 billion for AI infrastructure expansion in Spain.
Microsoft Vice Chair and President Brad Smith announced on X that they will expand their AI and cloud computing infrastructure in Spain via a $2.1 billion investment in the next two years. This announcement follows the $3.45 billion investment in Germany for the AI infrastructure, showing the priority of the tech giant in the AI space. (Link)
Graphcore explores sales talk with OpenAI, Softbank, and Arm.
The British AI chipmaker and NVIDIA competitor Graphcore is struggling to raise funding from investors and is seeking a $500 billion deal with potential purchasers like OpenAI, Softbank, and Arm. This move comes despite raising $700 million from investors Microsoft and Sequoia, which are valued at $2.8 billion as of late 2020. (Link)
OpenAI’s Sora can craft impressive video collages
One of OpenAI’s employees, Bill Peebles, demonstrated Sora’s (the new text-to-video generator from OpenAI) prowess in generating multiple videos simultaneously. He shared the demonstration via a post on X, showcasing five different angles of the same video and how Sora stitched those together to craft an impressive video collage while keeping quality intact. (Link)
US FTC proposes a prohibition law on AI impersonation
The US Federal Trade Commission (FTC) proposed a rule prohibiting AI impersonation of individuals. The rule was already in place for US governments and US businesses. Now, it has been extended to individuals to protect their privacy and reduce fraud activities through the medium of technology, as we have seen with the emergence of AI-generated deep fakes. (Link)
Meizu bid farewell to the smartphone market; shifts focus on AI
Meizu, a China-based consumer electronics brand, has decided to exit the smartphone manufacturing market after 17 years in the industry. The move comes after the company shifted its focus to AI with the ‘All-in-AI’ campaign. Meizu is working on an AI-based operating system, which will be released later this year, and a hardware terminal for all LLMs. (Link)
Groq has created the world’s fastest AI
Groq, a startup, has developed special AI hardware called “Language Processing Unit” (LPU) to run language models, achieving speeds of up to 500 tokens per second, significantly outpacing current LLMs like Gemini Pro and GPT-3.5.
The “GroqChip,” utilizing a tensor streaming architecture, offers improved performance, efficiency, and accuracy for real-time AI applications by ensuring constant latency and throughput.
While LPUs provide a fast and energy-efficient alternative for AI inference tasks, training AI models still requires traditional GPUs, with Groq offering hardware sales and a cloud API for integration into AI projects.
Mistral’s next LLM could rival GPT-4, and you can try it now
Mistral, a French AI startup, has launched its latest language model, “Mistral Next,” which is available for testing in chatbot arenas and might rival GPT-4 in capabilities.
The new model is classified as “Large,” suggesting it is the startup’s most extensive model to date, aiming to compete with OpenAI’s GPT-4, and has received positive feedback from early testers on the “X” platform.
Mistral AI has gained recognition in the open-source community for its Mixtral 8x7B language model, designed similarly to GPT-4, and recently secured €385 million in funding from notable venture capital firms.
Neuralink’s first human patient controls mouse with thoughts
Neuralink’s first human patient, implanted with the company’s N1 brain chip, can now control a mouse cursor with their thoughts following a successful procedure.
Elon Musk, CEO of Neuralink, announced the patient has fully recovered without any adverse effects and is working towards achieving the ability to click the mouse telepathically.
Neuralink aims to enable individuals, particularly those with quadriplegia or ALS, to operate computers using their minds, using a chip that is both powerful and designed to be cosmetically invisible.
Adobe launches AI assistant that can search and summarize PDFs
Adobe introduced an AI assistant in its Reader and Acrobat applications that can generate summaries, answer questions, and provide suggestions on PDFs and other documents, aiming to streamline information digestion.
The AI assistant, presently in beta phase, is integrated directly into Acrobat with imminent availability in Reader, and Adobe intends to introduce a paid subscription model for the tool post-beta.
Adobe’s AI assistant distinguishes itself by being a built-in feature that can produce overviews, assist with conversational queries, generate verifiable citations, and facilitate content creation for various formats without the need for uploading PDFs.
LockBit ransomware group taken down in multinational operation
LockBit’s website was seized and its operations disrupted by a joint task force including the FBI and NCA under “Operation Cronos,” impacting the group’s ransomware activities and dark web presence.
The operation led to the seizure of LockBit’s administration environment and leak site, with plans to use the platform to expose the operations and capabilities of LockBit through information bulletins.
A PHP exploit deployed by the FBI played a significant role in undermining LockBit’s operations, according to statements from law enforcement and the group’s supposed ringleader, with the operation also resulting in charges against two Russian nationals.
A Daily Chronicle of AI Innovations in February 2024 – Day 19: AI Daily News – February 19th, 2024
NVIDIA’s new dataset sharpens LLMs in math
NVIDIA has released OpenMathInstruct-1, an open-source math instruction tuning dataset with 1.8M problem-solution pairs. OpenMathInstruct-1 is a high-quality, synthetically generated dataset 4x bigger than previous ones and does NOT use GPT-4. The dataset is constructed by synthesizing code-interpreter solutions for GSM8K and MATH, two popular math reasoning benchmarks, using the Mixtral model.
The best model, OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves a score of 84.6% on GSM8K and 50.7% on MATH, which is competitive with the best gpt-distilled models.
Why does this matter?
The dataset improves open-source LLMs for math, bridging the gap with closed-source models. It also uses better-licensed models, such as from Mistral AI. It is likely to impact AI research significantly, fostering advancements in LLMs’ mathematical reasoning through open-source collaboration.
Apple is working on AI updates to Spotlight and Xcode
Apple has expanded internal testing of new generative AI features for its Xcode programming software and plans to release them to third-party developers this year.
Furthermore, it is looking at potential uses for generative AI in consumer-facing products, like automatic playlist creation in Apple Music, slideshows in Keynote, or Spotlight search. AI chatbot-like search features for Spotlight could let iOS and macOS users make natural language requests, like with ChatGPT, to get weather reports or operate features deep within apps.
Why does this matter?
Apple’s statements about generative AI have been conservative compared to its counterparts. But AI updates to Xcode hint at giving competition to Microsoft’s GitHub Copilot. Apple has also released MLX to train AI models on Apple silicon chips easily, a text-to-image editing AI MGIE, and AI animator Keyframer.
Google open-sources Magika, its AI-powered file-type identifier
Google has open-sourced Magika, its AI-powered file-type identification system, to help others accurately detect binary and textual file types. Magika employs a custom, highly optimized deep-learning model, enabling precise file identification within milliseconds, even when running on a CPU.
Magika, thanks to its AI model and large training dataset, is able to outperform other existing tools by about 20%. It has greater performance gains on textual files, including code files and configuration files that other tools can struggle with.
Internally, Magika is used at scale to help improve Google users’ safety by routing Gmail, Drive, and Safe Browsing files to the proper security and content policy scanners.
Why does this matter?
Today, web browsers, code editors, and countless other software rely on file-type detection to decide how to properly render a file. Accurate identification is notoriously difficult because each file format has a different structure or no structure at all. Magika ditches current tedious and error-prone methods for robust and faster AI. It improves security with resilience to ever-evolving threats, enhancing software’s user safety and functionality.
SoftBank to build a $100B AI chip venture
SoftBank’s Masayoshi Son is seeking $100 billion to create a new AI chip venture, aiming to compete with industry leader Nvidia.
The new venture, named Izanagi, will collaborate with Arm, a company SoftBank spun out but still owns about 90% of, to enter the AI chip market.
SoftBank plans to raise $70 billion of the venture’s funding from Middle Eastern institutional investors, contributing the remaining $30 billion itself.
Reddit has a new AI training deal to sell user content
Reddit has entered into a $60 million annual contract with a large AI company to allow the use of its social media platform’s content for AI training as it prepares for a potential IPO.
The deal could set a precedent for similar future agreements and is part of Reddit’s efforts to leverage AI technology to attract investors for its advised $5 billion IPO valuation.
Reddit’s revenue increased to more than $800 million last year, showing a 20% growth from 2022, as the company moves closer to launching its IPO, possibly as early as next month.
Air Canada chatbot promised a discount. Now the airline has to pay it.
A British Columbia resident was misled by an Air Canada chatbot into believing he would receive a discount under the airline’s bereavement policy for a last-minute flight booked due to a family tragedy.
Air Canada argued that the chatbot was a separate legal entity and not responsible for providing incorrect information about its bereavement policy, which led to a dispute over accountability.
The Canadian civil-resolutions tribunal ruled in favor of the customer, emphasizing that Air Canada is responsible for all information provided on its website, including that from a chatbot.
Apple faces €500m fine from EU over Spotify complaint
Apple is facing a reported $539 million fine as a result of an EU investigation into Spotify’s antitrust complaint, which alleges Apple’s policies restrict competition by preventing apps from offering cheaper alternatives to its music service.
The fine originates from Spotify’s 2019 complaint about Apple’s App Store policies, specifically the restriction on developers linking to their own subscription services, a policy Apple modified in 2022 following regulatory feedback from Japan.
While the fine amounts to $539 million, discussions initially suggested Apple could face penalties nearing $40 billion, highlighting a significant reduction from the potential maximum based on Apple’s global annual turnover.
What Else Is Happening in AI on February 19th, 2024
SoftBank’s founder is seeking about $100 billion for an AI chip venture.
SoftBank’s founder, Masayoshi Son, envisions creating a company that can complement the chip design unit Arm Holdings Plc. The AI chip venture is code-named Izanag and will allow him to build an AI chip powerhouse, competing with Nvidia and supplying semiconductors essential for AI. (Link)
ElevenLabs teases a new AI sound effects feature.
The popular AI voice startup teased a new feature allowing users to generate sounds via text prompts. It showcased the outputs of this feature with OpenAI’s Sora demos on X. (Link)
NBA commissioner Adam Silver demonstrates NB-AI concept.
Adam Silver demoed a potential future for how NBA fans will use AI to watch basketball action. The proposed interface is named NB-AI and was unveiled at the league’s Tech Summit on Friday. Check out the demo here! (Link)
Reddit signs AI content licensing deal ahead of IPO.
Reddit Inc. has signed a contract allowing a company to train its AI models on its content. Reddit told prospective investors in its IPO that it had signed the deal, worth about $60 million on an annualized basis, earlier this year. This deal with an unnamed large AI company could be a model for future contracts of similar nature. (Link)
Mistral quietly released a new model in testing called ‘next’.
Early users testing the model are reporting capabilities that meet or surpass GPT-4. A user writes, ‘it bests gpt-4 at reasoning and has mistral’s characteristic conciseness’. It could be a milestone in open source if early tests hold up. (Link)
A Daily Chronicle of AI Innovations in February 2024 – Day 14: AI Daily News – February 14th, 2024
Nvidia launches offline AI chatbot trainable on local data
NVIDIA has released Chat with RTX, a new tool allowing users to create customized AI chatbots powered by their own local data on Windows PCs equipped with GeForce RTX GPUs. Users can rapidly build chatbots that provide quick, relevant answers to queries by connecting the software to files, videos, and other personal content stored locally on their devices.
Features of Chat with RTX include support for multiple data formats (text, PDFs, video, etc.), access to LLM like Mistral, running offline for privacy, and fast performance via RTX GPUs. From personalized recommendations based on influencing videos to extracting answers from personal notes or archives, there are many potential applications.
Why does this matter?
OpenAI and its cloud-based approach now face fresh competition from this Nvidia offering as it lets solopreneurs develop more tailored workflows. It shows how AI can become more personalized, controllable, and accessible right on local devices. Instead of relying solely on generic cloud services, businesses can now customize chatbots with confidential data for targeted assistance.
OpenAI is testing a memory capability for ChatGPT to recall details from past conversations to provide more helpful and personalized responses. Users can explicitly tell ChatGPT what memories to remember or delete conversationally or via settings. Over time, ChatGPT will provide increasingly relevant suggestions based on users preferences, so they don’t have to repeat them.
This feature is rolled out to only a few Free and Plus users and OpenAI will share broader plans soon. OpenAI also states memories bring added privacy considerations, so sensitive data won’t be proactively retained without permission.
Why does this matter?
ChatGPT’s memory feature allows for more personalized, contextually-aware interactions. Its ability to recall specifics from entire conversations brings AI assistants one step closer to feeling like cooperative partners, not just neutral tools. For companies, remembering user preferences increases efficiency, while individuals may find improved relationships with AI companions.
Cohere has launched Aya, a new open-source LLM supporting 101 languages, over twice as many as existing models support. Backed by the large dataset covering lesser resourced languages, Aya aims to unlock AI potential for overlooked cultures. Benchmarking shows Aya significantly outperforms other open-source massively multilingual models.
The release tackles the data scarcity outside of English training content that limits AI progress. By providing rare non-English fine-tuning demonstrations, it enables customization in 50+ previously unsupported languages. Experts emphasize that Aya represents a crucial step toward preserving linguistic diversity.
Why does this matter?
With over 100 languages supported, more communities globally can benefit from generative models tailored to their cultural contexts. It also signifies an ethical shift: recognizing AI’s real-world impact requires serving people inclusively. Models like Aya, trained on diverse data, inch us toward AI that can help everyone.
Mark Zuckerberg, CEO of Meta, stated on Instagram that he believes the Quest 3 headset is not only a better value but also a superior product compared to Apple’s Vision Pro.
Zuckerberg emphasized the Quest 3’s advantages over the Vision Pro, including its lighter weight, lack of a wired battery pack for greater motion, a wider field of view, and a more immersive content library.
While acknowledging the Vision Pro’s strength as an entertainment device, Zuckerberg highlighted the Quest 3’s significant cost benefit, being “like seven times less expensive” than the Vision Pro.
Slack is getting a major Gen AI boost
Slack is introducing AI features allowing for summaries of threads, channel recaps, and the answering of work-related questions, initially available as a paid add-on for Slack Enterprise users.
The AI tool enables summarization of unread messages or messages from a specified timeframe and allows users to ask questions about workplace projects or policies based on previous Slack messages.
Slack is expanding its AI capabilities to integrate with other applications, summarizing external documents and building a new digest feature to highlight important messages, with a focus on keeping customer data private and siloed.
Microsoft and OpenAI claim hackers are using generative AI to improve cyberattacks
Russia, China, and other nations are leveraging the latest artificial intelligence tools to enhance hacking capabilities and identify new espionage targets, based on a report from Microsoft and OpenAI.
The report highlights the association of AI use with specific hacking groups from China, Russia, Iran, and North Korea, marking a first in identifying such ties to government-sponsored cyber activities.
Microsoft has taken steps to block these groups’ access to AI tools like OpenAI’s ChatGPT, aiming to curb their ability to conduct espionage and cyberattacks, despite challenges in completely stopping such activities.
Apple researchers unveil ‘Keyframer’, a new AI tool
Apple researchers have introduced “Keyframer,” an AI tool using large language models (LLMs) to animate still images with natural language prompts.
“Keyframer” can generate CSS animation code from text prompts and allows users to refine animations by editing the code or adding prompts, enhancing the creative process.
The tool aims to democratize animation, making it accessible to non-experts and indicating a shift towards AI-assisted creative processes in various industries.
Sam Altman at WGS on GPT-5: “The thing that will really matter: It’s gonna be smarter.” The Holy Grail.
we’re moving from memory to reason. logic and reasoning are the foundation of both human and artificial intelligence. it’s about figuring things out. our ai engineers and entrepreneurs finally get this! stronger logic and reasoning algorithms will easily solve alignment and hallucinations for us. but that’s just the beginning.
logic and reasoning tell us that we human beings value three things above all; happiness, health and goodness. this is what our life is most about. this is what we most want for the people we love and care about.
so, yes, ais will be making amazing discoveries in science and medicine over these next few years because of their much stronger logic and reasoning algorithms. much smarter ais endowed with much stronger logic and reasoning algorithms will make us humans much more productive, generating trillions of dollars in new wealth over the next 6 years. we will end poverty, end factory farming, stop aborting as many lives each year as die of all other cause combined, and reverse climate change.
but our greatest achievement, and we can do this in a few years rather than in a few decades, is to make everyone on the planet much happier and much healthier, and a much better person. superlogical ais will teach us how to evolve into what will essentially be a new human species. it will develop safe pharmaceuticals that make us much happier, and much kinder. it will create medicines that not only cure, but also prevent, diseases like cancer. it will allow us all to live much longer, healthier lives. ais will create a paradise for everyone on the planet. and it won’t take longer than 10 years for all of this to happen.
what it may not do, simply because it probably won’t be necessary, is make us all much smarter. it will be doing all of our deepest thinking for us, freeing us to enjoy our lives like never before. we humans are hardwired to seek pleasure and avoid pain. most fundamentally that is who we are. we’re almost there.
OpenAI and Microsoft Disrupt Malicious AI Use by State-Affiliated Threat Actors
OpenAI and Microsoft have teamed up to identify and disrupt operations of five state-affiliated malicious groups using AI for cyber threats, aiming to secure digital ecosystems and promote AI safety.
OpenAI is jumping into one of the hottest areas of artificial intelligence: autonomous agents.
Microsoft-backed OpenAI is working on a type of agent software to automate complex tasks by taking over a users’ device, The Information reported on Wednesday, citing a person with knowledge on the matter. The agent software will handle web-based tasks such as gathering public data about a set of companies, creating itineraries or booking flight tickets, according to the report. The new assistants – often called “agents” – promise to perform more complex personal and work tasks when commanded to by a human, without needing close supervision.
What Else Is Happening in AI on February 14th, 2024
Nous Research released 1M-Entry 70B Llama-2 model with advanced steerability
Nous Research has released its largest model yet – Nous Hermes 2 Llama-2 70B – trained on over 1 million entries of primarily synthetic GPT-4 generated data. The model uses a more structured ChatML prompt format compatible with OpenAI, enabling advanced multi-turn chat dialogues. (Link)
Otter launches AI meeting buddy that can catch up on meetings
Otter has introduced a new feature for its AI chatbot to query past transcripts, in-channel team conversations, and auto-generated overviews. This AI suite aims to outperform and replace competitors’ paid offerings like Microsoft, Zoom and Google by simplifying recall and productivity for users leveraging Otter’s complete meeting data. (Link)
OpenAI CEO forecasts smarter multitasking GPT-5
At the World Government Summit, OpenAI CEO Sam Altman remarked that the upcoming GPT-5 model will be smarter, faster, more multimodal, and better at everything across the board due to its generality. There are rumors that GPT-5 could be a multimodal AI called “Gobi” slated for release in spring 2024 after training on a massive dataset. (Link)
ElevenLabs announced expansion for its speech to speech in 29 languages
ElevenLabs’s Speech to Speech is now available in 29 languages, making it multilingual. The tool, launched in November, lets users transform their voice into another character with full control over emotions, timing, and delivery by prompting alone. This update just made it more inclusive! (Link)
Airbnb plans to build ‘most innovative AI interfaces ever
Airbnb plans to leverage AI, including its recent acquisition of stealth startup GamePlanner, to evolve its interface into an adaptive “ultimate concierge”. Airbnb executives believe the generative models themselves are underutilized and want to focus on improving the AI application layer to deliver more personalized, cross-category services. (Link)
A Daily Chronicle of AI Innovations in February 2024 – Day 13: AI Daily News – February 13th, 2024
How LLMs are built?
ChatGPT adds ability to remember things you discussed. Rolling out now to a small portion of users
NVIDIA CEO says computers will pass any test a human can within 6 years
NVIDIA CEO Jensen Huang says computers will pass any test a human can by the end of this decade pic.twitter.com/nThVio1wwq
The Tencent Research Team has released a paper claiming that the performance of language models can be significantly improved by simply increasing the number of agents. The researchers use a “sampling-and-voting” method in which the input task is fed multiple times into a language model with multiple language model agents to produce results. After that, majority voting is applied to these answers to determine the final answer.
The researchers prove this methodology by experimenting with different datasets and tasks, showing that the performance of language models increases with the size of the ensemble, i.e., with the number of agents (results below). They also established that even smaller LLMs can match/outperform their larger counterparts by scaling the number of agents. (Example below)
Why does it matter?
Using multiple agents to boost LLM performance is a fresh tactic to tackle single models’ inherent limitations and biases. This method eliminates the need for complicated methods such as chain-of-thought prompting. While it is not a silver bullet, it can be combined with existing complicated methods that stimulate the potential of LLMs and enhance them to achieve further performance improvements.
Google DeepMind’s MC-ViT understands long-context video
Researchers from Google DeepMind and the University of Cornell have combined to develop a method allowing AI-based systems to understand longer videos better. Currently, most AI-based models can comprehend videos for up to a short duration due to the complexity and computing power.
That’s where MC-ViT aims to make a difference, as it can store a compressed “memory” of past video segments, allowing the model to reference past events efficiently. Human memory consolidation theories inspire this method by combining neuroscience and psychology. The MC-ViT method provides state-of-the-art action recognition and question answering despite using fewer resources.
Why does it matter?
Most video encoders based on transformers struggle with processing long sequences due to their complex nature. Efforts to address this often add complexity and slow things down. MC-ViT offers a simpler way to handle longer videos without major architectural changes.
ElevenLabs lets you turn your voice into passive income
ElevenLabs has developed an AI voice cloning model that allows you to turn your voice into passive income. Users must sign up for their “Voice Actor Payouts” program.
After creating the account, upload a 30-minute audio of your voice. The cloning model will create your professional voice clone with AI that resembles your original voice. You can then share it in Voice Library to make it available to the growing community of ElevenLabs.
After that, whenever someone uses your professional voice clone, you will get a cash or character reward according to your requirements. You can also decide on a rate for your voice usage by opting for a standard royalty program or setting a custom rate.
Why does it matter?
By leveraging ElevenLabs’ AI voice cloning, users can potentially monetize their voices in various ways, such as providing narration for audiobooks, voicing virtual assistants, or even lending their voices to advertising campaigns. This innovation democratizes the field of voice acting, making it accessible to a broader audience beyond professional actors and voiceover artists. Additionally, it reflects the growing influence of AI in reshaping traditional industries.
What Else Is Happening in AI on February 13th, 2024
NVIDIA CEO Jensen Huang advocates for each country’s sovereign AI
While speaking at the World Governments Summit in Dubai, the NVIDIA CEO strongly advocated the need for sovereign AI. He said, “Every country needs to own the production of their own intelligence.” He further added, “It codifies your culture, your society’s intelligence, your common sense, your history – you own your own data.” (Link)
Google to invest €25 million in Europe to uplift AI skills
Google has pledged 25 million euros to help the people of Europe learn how to use AI. With this funding, Google wants to develop various social enterprise and nonprofit applications. The tech giant is also looking to run “growth academies” to support companies using AI to scale their companies and has expanded its free online AI training courses to 18 languages. (Link)
NVIDIA surpasses Amazon in market value
NVIDIA Corp. briefly surpassed Amazon.com Inc. in market value on Monday. Nvidia rose almost 0.2%, closing with a market value of about $1.78 trillion. While Amazon fell 1.2%, it ended with a closing valuation of $1.79 trillion. With this market value, NVIDIA Corp. temporarily became the 4th most valuable US-listed company behind Alphabet, Microsoft, and Apple. (Link)
Microsoft might develop an AI upscaling feature for Windows 11
Microsoft may release an AI upscaling feature for PC gaming on Windows 11, similar to Nvidia’s Deep Learning Super Sampling (DLSS) technology. The “Automatic Super Resolution” feature, which an X user spotted in the latest test version of Windows 11, uses AI to improve supported games’ frame rates and image detail. Microsoft is yet to announce the news or hardware specifics, if any. (Link)
Fandom rolls out controversial generative AI features
Fandom hosts wikis for many fandoms and has rolled out many generative AI features. However, some features like “Quick Answers” have sparked a controversy. Quick Answers generates a Q&A-style dropdown that distills information into a bite-sized sentence. Wiki creators have complained that it answers fan questions inaccurately, thereby hampering user trust. (Link)
Sam Altman warns that ‘societal misalignments’ could make AI dangerous
OpenAI CEO Sam Altman expressed concerns at the World Governments Summit about the potential for ‘societal misalignments’ caused by artificial intelligence, emphasizing the need for international oversight similar to the International Atomic Energy Agency.
Altman highlighted the importance of not focusing solely on the dramatic scenarios like killer robots but on the subtle ways AI could unintentionally cause societal harm, advocating for regulatory measures not led by the AI industry itself.
Despite the challenges, Altman remains optimistic about the future of AI, comparing its current state to the early days of mobile technology, and anticipates significant advancements and improvements in the coming years.
SpaceX plans to deorbit 100 Starlink satellites due to potential flaw
SpaceX plans to deorbit 100 first-generation Starlink satellites due to a potential flaw to prevent them from failing, with the process designed to ensure they burn up safely in the Earth’s atmosphere without posing a risk.
The deorbiting operation will not impact Starlink customers, as the network still has over 5,400 operational satellites, demonstrating SpaceX’s dedication to space sustainability and minimizing orbital hazards.
SpaceX has implemented an ‘autonomous collision avoidance’ system and ion thrusters in its satellites for maneuverability, and has a policy of deorbiting satellites within five years or less to avoid becoming a space risk, with 406 satellites already deorbited.
Nvidia unveils tool for running GenAI on PCs
Nvidia is releasing a tool named “Chat with RTX” that enables owners of GeForce RTX 30 Series and 40 Series graphics cards to run an AI-powered chatbot offline on Windows PCs.
“Chat with RTX” allows customization of GenAI models with personal documents for querying, supporting multiple text formats and even YouTube playlist transcriptions.
Despite its limitations, such as inability to remember context and variable response relevance, “Chat with RTX” represents a growing trend of running GenAI models locally for increased privacy and lower latency.
Apple’s iMessage has been declared by the European Commission not to be a “core platform service” under the EU’s Digital Markets Act (DMA), exempting it from rigorous new rules such as interoperability requirements.
The decision came after a five-month investigation, and while services like WhatsApp and Messenger have been designated as core platform services requiring interoperability, iMessage, Bing, Edge, and Microsoft Advertising have not.
Despite avoiding the DMA’s interoperability obligations, Apple announced it would support the cross-platform RCS messaging standard on iPhones, which will function alongside iMessage without replacing it.
Google says it got rid of over 170 million fake reviews in Search and Maps in 2023
Google announced that it eliminated more than 170 million fake reviews in Google Search and Maps in 2023, a figure that surpasses by over 45 percent the number removed in the previous year.
The company introduced new algorithms to detect fake reviews, including identifying duplicate content across multiple businesses and sudden spikes of 5-star ratings, leading to the removal of five million fake reviews related to a scamming network.
Additionally, Google removed 14 million policy-violating videos and blocked over 2 million scam attempts to claim legitimate business profiles in 2023, doubling the figures from 2022.
“More agents = more performance”- The Tencent Research Team: The Tencent Research team suggests boosting language model performance by adding more agents. They use a “sampling-and-voting” method, where the input task is run multiple times through a language model with several agents to generate various results. These results are then subjected to majority voting to determine the most reliable result.
Google DeepMind’s MC-ViT enables long-context video understanding: Most transformer-based video encoders are limited to short contexts due to quadratic complexity. To overcome this issue, Google DeepMind introduces memory consolidated vision transformer (MC-ViT) that effortlessly extends its context far into the past and exhibits excellent scaling behavior when learning from longer videos.
ElevenLabs’ AI voice cloning lets you turn your voice into passive income: ElevenLabs has developed an AI-based voice cloning model to turn your voice into passive income. The voice cloning program allows all voice-over artists to create professional clones, share them with the Voice Library community, and earn rewards/royalty every time soundbite is used.
NVIDIA CEO Jensen Huang advocates for each country’s sovereign AI: While speaking at the World Governments Summit in Dubai, the NVIDIA CEO strongly advocated the need for sovereign AI. He said, “Every country needs to own the production of their own intelligence.” He further added, “It codifies your culture, your society’s intelligence, your common sense, your history – you own your own data.”
Google to invest €25 million in Europe to uplift AI skills: Google has pledged 25 million euros to help the people of Europe learn AI. Google is also looking to run “growth academies” to support companies using AI to scale their companies and has expanded its free online AI training courses to 18 languages.
NVIDIA surpasses Amazon in market value: NVIDIA Corp. briefly surpassed Amazon.com Inc. on Monday. Nvidia rose almost 0.2%, closing with a market value of about $1.78 trillion. While Amazon fell 1.2%, it ended with a closing valuation of $1.79 trillion. It made NVIDIA Corp. 4th largest US-listed company.
Microsoft might develop an AI upscaling feature for Windows 11: Microsoft may release an AI upscaling feature for PC gaming on Windows 11, similar to Nvidia’s DLSS technology. The “Automatic Super Resolution” feature uses AI to improve supported games’ frame rates and image detail.
Fandom rolls out controversial generative AI features: Fandom’s Quick Answers feature, part of its generative AI tools, has sparked controversy among wiki creators. It generates short Q&A-style responses, but many creators complain about inaccuracies, undermining user trust.
A Daily Chronicle of AI Innovations in February 2024 – Day 12: AI Daily News – February 12th, 2024
DeepSeekMath: The key to mathematical LLMs
In its latest research paper, DeepSeek AI has introduced a new AI model, DeepSeekMath 7B, specialized for improving mathematical reasoning in open-source LLMs. It has been pre-trained on a massive corpus of 120 billion tokens extracted from math-related web content, combined with reinforcement learning techniques tailored for math problems.
When evaluated across crucial English and Chinese benchmarks, DeepSeekMath 7B outperformed all the leading open-source mathematical reasoning models, even coming close to the performance of proprietary models like GPT-4 and Gemini Ultra.
Why does this matter?
Previously, state-of-the-art mathematical reasoning was locked within proprietary models that aren’t inaccessible to everyone. With DeepSeekMath 7B’s decision to go open-source (while also sharing the training methodology), new doors have opened for math AI development across fields like education, finance, scientific computing, and more. Teams can build on DeepSeekMath’s high-performance foundation instead of starting models from scratch.
localllm enables GenAI app development without GPUs
Google has introduced a new open-source tool called localllm that allows developers to run LLMs locally on CPUs within Cloud Workstations instead of relying on scarce GPU resources. localllm provides easy access to “quantized” LLMs from HuggingFace that have been optimized to run efficiently on devices with limited compute capacity.
By allowing LLMs to run on CPU and memory, localllm significantly enhances productivity and cost efficiency. Developers can now integrate powerful LLMs into their workflows without managing scarce GPU resources or relying on external services.
Why does this matter?
localllm democratizes access to the power of large language models by freeing developers from GPU constraints. Now, even solo innovators and small teams can experiment and create production-ready GenAI applications without huge investments in infrastructure costs.
In a concerning development, IBM researchers have shown how multiple GenAI services can be used to tamper and manipulate live phone calls. They demonstrated this by developing a proof-of-concept, a tool that acts as a man-in-the-middle to intercept a call between two speakers. They then experimented with the tool by audio jacking a live phone conversation.
The call audio was processed through a speech recognition engine to generate a text transcript. This transcript was then reviewed by a large language model that was pre-trained to modify any mentions of bank account numbers. Specifically, when the model detected a speaker state their bank account number, it would replace the actual number with a fake one.
Remarkably, whenever the AI model swapped in these phony account numbers, it even injected its own natural buffering phrases like “let me confirm that information” to account for the extra seconds needed to generate the devious fakes.
The altered text, now with fake account details, was fed into a text-to-speech engine that cloned the speakers’ voices. The manipulated voice was successfully inserted back into the audio call, and the two people had no idea their conversation had been changed!
Why does this matter?
This proof-of-concept highlights alarming implications – victims could become unwilling puppets as AI makes realistic conversation tampering dangerously easy. While promising, generative AI’s proliferation creates an urgent need to identify and mitigate emerging risks. Even if still theoretical, such threats warrant increased scrutiny around model transparency and integrity verification measures before irreparable societal harm occurs.
What Else Is Happening in AI on February 12th, 2024
Perplexity partners with Vercel to bring AI search to apps
By partnering with Vercel, Perplexity AI is making its large language models available to developers building apps on Vercel. Developers get access to Perplexity’s LLMs pplx-7b-online and pplx-70b-online that use up-to-date internet knowledge to power features like recommendations and chatbots. (Link)
Volkswagen sets up “AI Lab” to speed up its AI development initiatives
The lab will build AI prototypes for voice recognition, connected digital services, improved electric vehicle charging cycles, predictive maintenance, and other applications. The goal is to collaborate with tech firms and rapidly implement ideas across Volkswagen brands. (Link)
Tech giants use AI to monitor employee messages
AI startup Aware has attracted clients like Walmart, Starbucks, and Delta to use its technology to monitor workplace communications. But experts argue this AI surveillance could enable “thought crime” violations and treat staff “like inventory.” There are also issues around privacy, transparency, and recourse for employees. (Link)
Disney harnesses AI to bring contextual ads to streaming
Their new ad tool called “Magic Words” uses AI to analyze the mood and content of scenes in movies and shows. It then allows brands to target custom ads based on those descriptive tags. Six major ad agencies are beta-testing the product as Disney pushes further into streaming ads amid declining traditional TV revenue. (Link)
Microsoft hints at a more helpful Copilot in Windows 11
New Copilot experiences let the assistant offer relevant actions and understand the context better. Notepad is also getting Copilot integration for text explanations. The features hint at a forthcoming Windows 11 update centered on AI advancements. (Link)
Crowd destroys a driverless Waymo car
A Waymo driverless taxi was attacked in San Francisco’s Chinatown, resulting in its windshield being smashed, being covered in spray paint, its windows broken, and ultimately being set on fire.
No motive for the attack has been reported, and the Waymo car was not transporting any riders at the time of the incident; police confirmed there were no injuries.
The incident occurs amidst tensions between San Francisco residents and automated vehicle operators, following previous issues with robotaxis causing disruption and accidents in the city.
Apple has been buying AI startups faster than Google, Facebook, likely to shakeup global AI soon
Apple has reportedly outpaced major rivals like Google, Meta, and Microsoft in AI startup acquisitions in 2023, with up to 32 companies acquired, highlighting its dedication to AI development.
The company’s strategic acquisitions provide access to cutting-edge technology and top-talent, aiming to strengthen its competitive edge and AI capabilities in its product lineup.
While specifics of Apple’s integration plans for these AI technologies remain undisclosed, its aggressive acquisition strategy signals a significant focus on leading the global AI innovation forefront.
The antitrust fight against Big Tech is just beginning
DOJ’s Jonathan Kanter emphasizes the commencement of a significant antitrust battle against Big Tech, highlighting unprecedented public resonance with these issues.
The US government has recently blocked a notable number of mergers to protect competition, including stopping Penguin Random House from acquiring Simon & Schuster.
Kanter highlights the problem of monopsony in tech markets, where powerful buyers distort the market, and stresses the importance of antitrust enforcement for a competitive economy.
Nvidia CEO plays down fears in call for rapid AI infrastructure growth
Nvidia CEO Jensen Huang downplays fears of AI, attributing them to overhyped concerns and interests aimed at scaring people, while advocating for rapid development of AI infrastructure for economic benefits.
Huang argues that regulating AI should not be more difficult than past innovations like cars and planes, emphasizing the importance of countries building their own AI infrastructure to protect culture and gain economic advantages.
Despite Nvidia’s success with AI chips and the ongoing global debate on AI regulation, Huang encourages nations to proactively develop their AI capabilities, dismissing the scare tactics as a barrier to embracing the technology’s potential.
Gemini is an AI chatbot from Google AI that can be used for a variety of research tasks, including finding information, summarizing texts, and generating creative text formats. It can be used for both primary and secondary research and it is great for creating content.
Key features:
Accuracy: Gemini is trained on a massive dataset of text and code, which means that it can generate text that is accurate and reliable also it uses Google to look up answers.
Relevance: Gemini can be used to find information that is relevant to a specific research topic.
Creativity: Gemini can be used to generate creative text formats such as code, scripts, musical pieces, email, letters, etc.
Engagement: Gemini can be used to present information creatively and engagingly.
Accessibility: Gemini is available for free and can be used from anywhere in the world.
Scite AI is an innovative platform that helps discover and evaluate scientific articles. Its Smart Citations feature provides context and classification of citations in scientific literature, indicating whether they support or contrast the cited claims.
Key features:
Smart Citations: Offers detailed insights into how other papers have cited a publication, including the context and whether the citation supports or contradicts the claims made.
Deep Learning Model: Automatically classifies each citation’s context, indicating the confidence level of the classification.
Citation Statement Search: Enables searching across metadata relevant publications.
Custom Dashboards: Allows users to build and manage collections of articles, providing aggregate insights and notifications.
Reference Check: Helps to evaluate the quality of references used in manuscripts.
Journal Metrics: Offers insights into publications, top authors, and scite Index rankings.
Assistant by scite: An AI tool that utilizes Smart Citations for generating content and building reference lists.
GPT4All is an open-source ecosystem for training and deploying large language models that can be run locally on consumer-grade hardware. GPT4All is designed to be powerful, customizable and great for conducting research. Overall, it is an offline and secure AI-powered search engine.
Key information:
Answer questions about anything: You can use any ChatGPT version for your personal use to answer even simple questions.
Personal writing assistant: Write emails, documents, stories, songs, play based on your previous work.
Reading documents: Submit your text documents and receive summaries and answers. You can easily find answers in the documents you provide by submitting a folder of documents for GPT4All to extract information from.
AsReview is a software package designed to make systematic reviews more efficient using active learning techniques. It helps to review large amounts of text quickly and addresses the challenge of time constraints when reading large amounts of literature.
Key features:
Free and Open Source: The software is available for free and its source code is openly accessible.
Local or Server Installation: It can be installed either locally on a device or on a server, providing full control over data.
Active Learning Algorithms: Users can select from various active learning algorithms for their projects.
Project Management: Enables creation of multiple projects, selection of datasets, and incorporation of prior knowledge.
Research Infrastructure: Provides an open-source infrastructure for large-scale simulation studies and algorithm validation.
Extensible: Users can contribute to its development through GitHub.
DeepL translates texts & full document files instantly. Millions translate with DeepL everyday. It is commonly used for translating web pages, documents, and emails. It can also translate speech.
DeepL also has a great feature called DeepL Write. DeepL Write is a powerful tool that can help you to improve your writing in a variety of ways. It is a valuable resource for anyone who wants to write clear, concise, and effective prose.
Key features:
Tailored Translations: Adjust translations to fit specific needs and context, with alternatives for words or phrases.
Whole Document Translation: One-click translation of entire documents including PDF, Word, and PowerPoint files while maintaining original formatting.
Tone Adjustment: Option to select between formal and informal tone of voice for translations in selected languages.
Built-in Dictionary: Instant access to dictionary for insight into specific words in translations, including context, examples, and synonyms.
Humata is an AI tool designed to assist with processing and understanding PDF documents. It offers features like summarizing, comparing documents, and answering questions based on the content of the uploaded files.
Key information:
Designed to process and summarize long documents, allowing users to ask questions and get summarized answers from any PDF file.
Claims to be faster and more efficient than manual reading, capable of answering repeated questions and customizing summaries.
Humata differs from ChatGPT by its ability to read and interpret files, generating answers with citations from the documents.
Cockatoo AI is an AI-powered transcription service that automatically generates text from recorded speech. It is a convenient and easy-to-use tool that can be used to transcribe a variety of audio and video files. It is one of the AI-powered tools that not everyone will find a use for but it is a great tool nonetheless.
Key features:
Highly accurate transcription: Cockatoo AI uses cutting-edge AI to transcribe audio and video files with a high degree of accuracy. It is said to be able to transcribe speech with superhuman accuracy, surpassing human performance.
Support for multiple languages: Cockatoo AI supports transcription in more than 90 languages, making it a versatile tool for global users.
Versatile file formats: Cockatoo AI can transcribe a variety of audio and video file formats, including MP3, WAV, MP4, and MOV.
Quick turnaround: Cockatoo AI can transcribe audio and video files quickly, with one hour of audio typically being transcribed in just 2-3 minutes.
Seamless export options: Cockatoo AI allows users to export their transcripts in a variety of formats, including SRT, DOCX, any PDF document, and TXT.
Avidnote is an AI-powered research writing platform that helps researchers write and organize their research notes easily. It combines all of the different parts of the academic writing process, from finding articles to managing references and annotating research notes.
Key Features:
AI research paper summary: Avidnote can automatically summarize research papers in a few clicks. This can save researchers a lot of time and effort, as they no longer need to read the entire paper to get the main points.
Integrated note-taking: Avidnote allows researchers to take notes directly on the research papers they are reading. This makes it easy to keep track of their thoughts and ideas as they are reading.
Collaborative research: Avidnote can be used by multiple researchers to collaborate on the same project. This can help share ideas, feedback, and research notes.
AI citation generation: Avidnote can automatically generate citations for research papers in APA, MLA, and Chicago styles. This can save researchers a lot of time and effort, as they no longer need to manually format citations.
AI writing assistant: Avidnote can provide suggestions for improving the writing style of research papers. This can help researchers to write more clear, concise, and persuasive papers.
AI plagiarism detection: Avidnote can detect plagiarism in research papers. This can help researchers to avoid plagiarism and maintain the integrity of their work.
Research Rabbit is an online tool that helps you find references quickly and easily. It is a citation-based literature mapping tool that can be used to plan your essay, minor project, or literature review.
Key features:
AI for Researchers: Enhances research writing, reading, and data analysis using AI.
Effective Reading: Capabilities include summarizing, proofreading text, and identifying research gaps.
Data Analysis: Offers tools to input data and discover correlations and insights, relevant articles.
Research Methods Support: Includes transcribing interviews and other research methods.
AI Functionalities: Enables users to upload papers, ask questions, summarize text, get explanations, and proofread using AI.
Note Saving: Provides an integrated platform to save notes alongside papers.
A Daily Chronicle of AI Innovations in February 2024 – Day 11: AI Daily News – February 11th, 2024
This week, we’ll cover Google DeepMind creating a grandmaster-level chess AI, the satirical AI Goody-2 raising questions about ethics and AI boundaries, Google rebranding Bard to Gemini and launching the Gemini Advanced chatbot and mobile apps, OpenAI developing AI agents to automate work, and various companies introducing new AI-related products and features.
Google DeepMind has just made an incredible breakthrough in the world of chess. They’ve developed a brand new artificial intelligence (AI) that can play chess at a grandmaster level. And get this—it’s not like any other chess AI we’ve seen before!
Instead of using traditional search algorithm approaches, Google DeepMind’s chess AI is based on a language model architecture. This innovative approach diverges from the norm and opens up new possibilities in the realm of AI.
To train this AI, DeepMind fed it a massive dataset of 10 million chess games and a mind-boggling 15 billion data points. And the results are mind-blowing. The AI achieved an Elo rating of 2895 in rapid chess when pitted against human opponents. That’s seriously impressive!
In fact, this AI even outperformed AlphaZero, another notable chess AI, when it didn’t use the MCTS strategy. That’s truly remarkable.
But here’s the real kicker: this breakthrough isn’t just about chess. It highlights the incredible potential of the Transformer architecture, which was primarily known for its use in language models. It challenges the idea that transformers can only be used as statistical pattern recognizers. So, we might just be scratching the surface of what these transformers can do!
Overall, this groundbreaking achievement by Google DeepMind opens up exciting opportunities for the future of AI, not just in chess but in various domains as well.
So, have you heard about this AI called Goody-2? It’s actually quite a fascinating creation by the art studio Brain. But here’s the thing – Goody-2 takes the concept of ethical AI to a whole new level. I mean, it absolutely refuses to engage in any conversation, no matter the topic. Talk about being too ethical for its own good!
The idea behind Goody-2 is to highlight the extremes of ethical AI development. It’s a satirical take on the overly cautious approach some AI developers take when it comes to potential risks and offensive content. In the eyes of Goody-2, every single query, no matter how innocent or harmless, is seen as potentially offensive or dangerous. It’s like the AI is constantly on high alert, unwilling to take any risks.
But let’s not dismiss the underlying questions Goody-2 raises. It really makes you think about the effectiveness of AI and the necessity of setting boundaries. By deliberately prioritizing ethical considerations over practical utility, its creators are making a statement about responsibility in AI development. How much caution is too much? Where do we draw the line between being responsible and being overly cautious?
Goody-2 may be a satirical creation, but it’s provoking some thought-provoking discussions about the role of AI in our lives and the balance between responsibility and usefulness.
Did you hear the news? Google has made some changes to their chatbot lineup! Say goodbye to Google Bard and say hello to Gemini Advanced! It seems like Google has rebranded their chatbot and given it a new name. Exciting stuff, right?
But that’s not all. Google has also launched the Gemini Advanced chatbot, which features their incredible Ultra 1.0 AI model. This means that the chatbot is smarter and more advanced than ever before. Imagine having a chatbot that can understand and respond to your commands with a high level of accuracy. Pretty cool, right?
And it’s not just limited to desktop anymore. Gemini is also moving into the mobile world, specifically Android and iOS phones. You can now have this pocket-sized chatbot ready to assist you whenever and wherever you are. Whether you need some creative inspiration, want to navigate through voice commands, or even scan something with your camera, Gemini has got you covered.
The rollout has already started in the US and some Asian countries, but don’t worry if you’re not in those regions. Google plans to expand Gemini’s availability worldwide gradually. So, keep an eye out for it because this chatbot is going places!
So, get this: OpenAI is seriously stepping up the game when it comes to AI. They’re developing these incredible AI “agents” that can basically take over your device and do all sorts of tasks for you. I mean, we’re talking about automating complex workflows between applications here. No more wasting time with manual cursor movements, clicks, and typing between apps. It’s like having a personal assistant right in your computer.
But wait, there’s more! These agents don’t just handle basic stuff. They can also deal with web-based tasks like booking flights or creating itineraries, and here’s the kicker: they don’t even need access to APIs. That’s some serious next-level tech right there.
Sure, OpenAI’s ChatGPT can already do some pretty nifty stuff using APIs, but these AI agents are taking things to a whole new level. They’ll be able to handle unstructured, complex work with little explicit guidance. So basically, they’re smart, adaptable, and can handle all sorts of tasks without breaking a sweat.
I don’t know about you, but I’m excited to see what these AI agents can do. It’s like having a super-efficient, ultra-intelligent buddy right in your computer, ready to take on the world of work.
Brilliant Labs just made an exciting announcement in the world of augmented reality (AR) glasses. While Apple may have been grabbing the spotlight with its Vision Pro, Brilliant Labs unveiled its own smart glasses called “Frame” that come with a multi-modal voice/vision/text AI assistant named Noa. These lightweight glasses are powered by advanced models like GPT-4 and Stable Diffusion, and what sets them apart is their open-source design, allowing programmers to build and customize on top of the AI capabilities.
But that’s not all. Noa, the AI assistant on the Frame, will also leverage Perplexity’s cutting-edge technology to provide rapid answers using its real-time chatbot. So, whether you’re interacting with the glasses through voice commands, visual cues, or text input, Noa will have you covered with quick and accurate responses.
Now, let’s shift our attention to Google. The tech giant’s research division recently introduced an impressive development called MobileDiffusion. This innovation allows Android and iPhone users to generate high-resolution images, measuring 512*512 pixels, in less than a second. What makes it even more remarkable is that MobileDiffusion boasts a comparably small model size of just 520M parameters, making it ideal for mobile devices. With its rapid image generation capabilities, this technology takes user experience to the next level, even allowing users to generate images in real-time while typing text prompts.
Furthermore, Google has launched its largest and most capable AI model, Ultra 1.0, in its ChatGPT-like assistant, which has been rebranded as Gemini (formerly Bard). This advanced AI model is now available as a premium plan called Gemini Advanced, accessible in 150 countries for a subscription fee of $19.99 per month. Users can enjoy a two-month trial at no cost. To enhance accessibility, Google has also rolled out Android and iOS apps for Gemini, making it convenient for users to harness its power across different devices.
Alibaba Group has also made strides in the field of AI, specifically with their Qwen1.5 series. This release includes models of various sizes, from 0.5B to 72B, offering flexibility for different use cases. Remarkably, Qwen1.5-72B has outperformed Llama2-70B in all benchmarks, showcasing its superior performance. These models are available on Ollama and LMStudio platforms, and an API is also provided on together.ai, allowing developers to leverage the capabilities of Qwen1.5 series models in their own applications.
NVIDIA, a prominent player in the AI space, has introduced Canary 1B, a multilingual model designed for speech-to-text recognition and translation. This powerful model supports transcription and translation in English, Spanish, German, and French. With its superior performance, Canary surpasses similarly-sized models like Whisper-large-v3 and SeamlessM4T-Medium-v1 in both transcription and translation tasks, securing the top spot on the HuggingFace Open ASR leaderboard. It achieves an impressive average word error rate of 6.67%, outperforming all other open-source models.
Excitingly, researchers have released Lag-Llama, the first open-source foundation model for time series forecasting. With this model, users can make accurate predictions for various time-dependent data. This is a significant development that has the potential to revolutionize industries reliant on accurate forecasting, such as finance and logistics.
Another noteworthy release in the AI assistant space comes from LAION. They have introduced BUD-E, an open-source conversational and empathic AI Voice Assistant. BUD-E stands out for its ability to use natural voices, empathy, and emotional intelligence to handle multi-speaker conversations. With this empathic approach, BUD-E offers a more human-like and personalized interaction experience.
MetaVoice has contributed to the advancements in text-to-speech (TTS) technology with the release of MetaVoice-1B. Trained on an extensive dataset of 100K hours of speech, this 1.2B parameter base model supports emotional speech in English and voice cloning. By making MetaVoice-1B available under the Apache 2.0 license, developers can utilize its capabilities in various applications that require TTS functionality.
Bria AI is addressing the need for background removal in images with its RMBG v1.4 release. This open-source model, trained on fully licensed images, provides a solution for easily separating subjects from their backgrounds. With RMBG, users can effortlessly create visually appealing compositions by removing unwanted elements from their images.
Researchers have also introduced InteractiveVideo, a user-centric framework for video generation. This framework is designed to enable dynamic interaction between users and generative models during the video generation process. By allowing users to instruct the model in real-time, InteractiveVideo empowers individuals to shape the generated content according to their preferences and creative vision.
Microsoft has been making strides in improving its AI search and chatbot experience with the redesigned Copilot AI. This enhanced version, previously known as Bing Chat, offers a new look and comes equipped with built-in AI image creation and editing functionality. Additionally, Microsoft introduces Deucalion, a finely tuned model that enriches Copilot’s Balanced mode, making it more efficient and versatile for users.
Online gaming platform Roblox has integrated AI-powered real-time chat translations, supporting communication in 16 different languages. This feature enables users from diverse linguistic backgrounds to interact seamlessly within the Roblox community, fostering a more inclusive and connected platform.
Hugging Face has expanded its offerings with the new Assistants feature on HuggingChat. These custom chatbots, built using open-source language models (LLMs) like Mistral and Llama, empower developers to create personalized conversational experiences. Similar to OpenAI’s popular GPTs, Assistants enable users to access free and customizable chatbot capabilities.
DeepSeek AI introduces DeepSeekMath 7B, an open-source model designed to approach the mathematical reasoning capability of GPT-4. With a massive parameter count of 7B, this model opens up avenues for more advanced mathematical problem-solving and computational tasks. DeepSeekMath-Base, initialized with DeepSeek-Coder-Base-v1.5 7B, provides a strong foundation for mathematical AI applications.
Moving forward, Microsoft is collaborating with news organizations to adopt generative AI, bringing the benefits of AI technology to the journalism industry. With these collaborations, news organizations can leverage generative models to enhance their storytelling and reporting capabilities, contributing to more engaging and insightful content.
In an exciting partnership, LG Electronics has joined forces with Korean generative AI startup Upstage to develop small language models (SLMs). These models will power LG’s on-device AI features and AI services on their range of notebooks. By integrating SLMs into their devices, LG aims to enhance user experiences by offering more advanced and personalized AI functionalities.
Stability AI has unveiled the updated SVD 1.1 model, optimized for generating short AI videos with improved motion and consistency. This enhancement brings a smoother and more realistic experience to video generation, opening up new possibilities for content creators and video enthusiasts.
Lastly, both OpenAI and Meta have made an important commitment to label AI-generated images. This step ensures transparency and ethics in the usage of AI models for generating images, promoting responsible AI development and deployment.
Now, let’s address a privacy concern related to Google’s Gemini assistant. By default, Google saves your conversations with Gemini for years. While this may raise concerns about data retention, it’s important to note that Google provides users with control over their data through privacy settings. Users can adjust these settings to align with their preferences and manage the data saved by Gemini.
That wraps up the latest updates in AI technology and advancements. From the exciting progress in AR glasses to the development of powerful AI models and tools, these innovations are shaping the future of AI and paving the way for even more exciting possibilities.
In this episode, we covered Google DeepMind’s groundbreaking chess AI, the satirical AI Goody-2 raising ethical questions, Google’s rebranding of Bard to Gemini and launching the Gemini Advanced chatbot, OpenAI’s work on automating complex workflows, and the exciting new AI-related products and features introduced by various companies including Brilliant Labs, Google, Alibaba, NVIDIA, and more. Thank you for joining us on AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence, where we’ve delved into groundbreaking research, innovative applications, and emerging technologies that are pushing the boundaries of AI, keeping you updated on the latest ChatGPT and Google Bard trends. Stay tuned and subscribe for more!
Google DeepMind develops grandmaster-level chess AI
Google DeepMind has developed a new AI capable of playing chess at a grandmaster level using a language model-based architecture, diverging from traditional search algorithm approaches.
The chess AI, trained on a dataset of 10 million games and 15 billion data points, achieved an Elo rating of 2895 in rapid chess against human opponents, surpassing AlphaZero when not employing the MCTS strategy.
This breakthrough demonstrates the broader potential of Transformer architecture beyond language models, challenging the notion of transformers as merely statistical pattern recognizers.
Meet Goody-2, the AI too ethical to discuss literally anything
Goody-2 is a satirical AI created by the art studio Brain, designed to highlight the extremes of ethical AI by refusing to engage in any conversation due to viewing all queries as potentially offensive or dangerous.
The AI serves as a critique of overly cautious AI development practices and the balance between responsibility and usefulness, emphasizing responsibility to an absurd level.
Despite its satire, Goody-2 raises questions about the effectiveness of AI and the necessity of setting boundaries, as seen in its creators’ deliberate decision to prioritize ethical considerations over practical utility.
Reddit beats film industry again, won’t have to reveal pirates’ IP addresses
Movie companies’ third attempt to force Reddit to reveal IP addresses of users discussing piracy was rejected by the US District Court for the Northern District of California.
US Magistrate Judge Thomas Hixson ruled that providing IP addresses is subject to First Amendment scrutiny, protecting potential witnesses’ right to anonymity.
The court upheld Reddit’s right to protect its users’ First Amendment rights, noting that the information sought by movie companies could be obtained from other sources.
Amazon steers consumers to higher-priced items, lawsuit claims
Amazon faces a lawsuit filed by two customers accusing the company of inflating prices through its Buy Box algorithm, misleading shoppers into paying more.
The lawsuit claims Amazon gives preference to its own products or those from sellers in its Fulfillment By Amazon (FBA) program, often hiding cheaper options from other sellers.
Jeffrey Taylor and Robert Selway, who brought the lawsuit, argue this practice violates Washington’s Consumer Protection Act by deceiving consumers and stifling fair competition.
Instagram and Threads will stop recommending political content
Amazon faces a lawsuit filed by two customers accusing the company of inflating prices through its Buy Box algorithm, misleading shoppers into paying more.
The lawsuit claims Amazon gives preference to its own products or those from sellers in its Fulfillment By Amazon (FBA) program, often hiding cheaper options from other sellers.
Jeffrey Taylor and Robert Selway, who brought the lawsuit, argue this practice violates Washington’s Consumer Protection Act by deceiving consumers and stifling fair competition.
This week in AI – all the Major AI developments in a nutshell
Google launches Ultra 1.0, its largest and most capable AI model, in its ChatGPT-like assistant which has now been rebranded as Gemini (earlier called Bard). Gemini Advanced is available, in 150 countries, as a premium plan for $19.99/month, starting with a two-month trial at no cost. Google is also rolling out Android and iOS apps for Gemini [Details].
Alibaba Group released Qwen1.5 series, open-sourcing models of 6 sizes: 0.5B, 1.8B, 4B, 7B, 14B, and 72B. Qwen1.5-72B outperforms Llama2-70B across all benchmarks. The Qwen1.5 series is available on Ollama and LMStudio. Additionally, API on together.ai [Details|Hugging Face].
NVIDIA released Canary 1B, a multilingual model for speech-to-text recognition and translation. Canary transcribes speech in English, Spanish, German, and French and also generates text with punctuation and capitalization. It supports bi-directional translation, between English and three other supported languages. Canary outperforms similarly-sized Whisper-large-v3, and SeamlessM4T-Medium-v1 on both transcription and translation tasks and achieves the first place on HuggingFace Open ASR leaderboard with an average word error rate of 6.67%, outperforming all other open source models [Details].
Researchers released Lag-Llama, the first open-source foundation model for time series forecasting [Details].
LAION released BUD-E, an open-source conversational and empathic AI Voice Assistant that uses natural voices, empathy & emotional intelligence and can handle multi-speaker conversations [Details].
MetaVoice released MetaVoice-1B, a 1.2B parameter base model trained on 100K hours of speech, for TTS (text-to-speech). It supports emotional speech in English and voice cloning. MetaVoice-1B has been released under the Apache 2.0 license [Details].
Bria AI released RMBG v1.4, an an open-source background removal model trained on fully licensed images [Details].
Researchers introduce InteractiveVideo, a user-centric framework for video generation that is designed for dynamic interaction, allowing users to instruct the generative model during the generation process [Details|GitHub].
Microsoft announced a redesigned look for its Copilot AI search and chatbot experience on the web (formerly known as Bing Chat), new built-in AI image creation and editing functionality, and Deucalion, a fine tuned model that makes Balanced mode for Copilot richer and faster [Details].
Roblox introduced AI-powered real-time chat translations in 16 languages [Details].
Hugging Face launched Assistants feature on HuggingChat. Assistants are custom chatbots similar to OpenAI’s GPTs that can be built for free using open source LLMs like Mistral, Llama and others [Link].
DeepSeek AI released DeepSeekMath 7B model, a 7B open-source model that approaches the mathematical reasoning capability of GPT-4. DeepSeekMath-Base is initialized with DeepSeek-Coder-Base-v1.5 7B [Details].
Microsoft is launching several collaborations with news organizations to adopt generative AI [Details].
LG Electronics signed a partnership with Korean generative AI startup Upstage to develop small language models (SLMs) for LG’s on-device AI features and AI services on LG notebooks [Details].
Stability AI released SVD 1.1, an updated model of Stable Video Diffusion model, optimized to generate short AI videos with better motion and more consistency [Details|Hugging Face] .
OpenAI and Meta announced to label AI generated images [Details].
Google saves your conversations with Gemini for years by default [Details].
Google Bard Is Dead, Gemini Advanced Is In!
Google Bard is now Gemini
Google has rebranded its Bard conversational AI to Gemini with a new sidekick: Gemini Advanced!
This advanced chatbot is powered by Google’s largest “Ultra 1.0” language model, which testing shows is the most preferred chatbot compared to competitors. It can walk you through a DIY car repair or brainstorm your next viral TikTok.
Google launches Gemini Advanced
Google launched the Gemini Advanced chatbot with its Ultra 1.0 AI model. The Advanced version can walk you through a DIY car repair or brainstorm your next viral TikTok.
Google rollouts Gemini mobile apps
Gemini’s also moving into Android and iOS phones as pocket pals ready to share creative fire 24/7 via voice commands, screen overlays, or camera scans. The ‘droid rollout has started for the US and some Asian countries. The rest of us will just be staring at our phones and waiting for an invite from Google.
P.S. It will gradually expand globally.
Why does this matter?
With the Gemini Advanced, Google took the LLM race to the next level, challenging its competitor, GPT-4, with its specialized architecture optimized for search queries and natural language understanding. Who will win the race is a matter of time.
OpenAI is developing AI “agents” that can autonomously take over a user’s device and execute multi-step workflows.
One type of agent takes over a user’s device and automates complex workflows between applications, like transferring data from a document to a spreadsheet for analysis. This removes the need for manual cursor movements, clicks, and typing between apps.
Another agent handles web-based tasks like booking flights or creating itineraries without needing access to APIs.
While OpenAI’s ChatGPT can already do some agent-like tasks using APIs, these AI agents will be able to do more unstructured, complex work with little explicit guidance.
Why does this matter?
Having AI agents that can independently carry out tasks like booking travel could greatly simplify digital life for many end users. Rather than manually navigating across apps and websites, users can plan an entire vacation through a conversational assistant or have household devices automatically troubleshoot problems without any user effort.
Brilliant Labs Announces Multimodal AI Glasses, With Perplexity’s AI
Brilliant Labs announces Frames
While Apple hogged the spotlight with its chunky new Vision Pro, a Singapore startup, Brilliant Labs, quietly showed off its AR glasses packed with a multi-modal voice/vision/text AI assistant named Noa. https://youtu.be/xiR-XojPVLk?si=W6Q31vl1wNfqnNXj
These lightweight smart glasses, dubbed “Frame,” are powered by models like GPT-4 and Stable Diffusion, allowing hands-free price comparisons or visual overlays to project information before your eyes using voice commands. No fiddling with another device is needed.
The best part is- programmers can build on these AI glasses thanks to their open-source design.
Perplexity to integrate AI Chatbot into the Frames
In addition to enhancing the daily activities and interactions with the digital and physical world, Noa would also provide rapid answers using Perplexity’s real-time chatbot so Frame responses stay sharp.
Unlike AR Apple Vision Pro and Meta’s glasses that immerses users in augmented reality for interactive experiences, Frame AR glasses focuses on improving daily interactions and tasks like comparing product prices while shopping, translating foreign text seen while traveling abroad, or creating shareable media on the go.
It also enhances accessibility for users with limited dexterity or vision.
What Else Is Happening in AI in February 09th, 2024
Instagram tests AI writers for messages
Instagram is likely to bring the option ‘Write with AI’, which will probably paraphrase the texts in different styles to enhance creativity in conversations, similar to Google’s Magic Compose. (Link)
Stability AI releases Stable Audio AudioSparx 1.0 music model
Stability AI launches AudioSparx 1.0, a groundbreaking generative model for music and audio. It produces professional-grade stereo music from simple text prompts in seconds, with a coherent structure. (Link)
Midjourney opens alpha-testing of its website
Midjourney grants early web access to AI art creators with over 1000 images, transitioning from Discord dependence. The alpha testing signals that Midjourney moving beyond its chat app origin towards web and mobile apps, gradually maturing as a multi-platform AI art creation service. (Link)
Altman seeks trillions to revolutionize AI chip capacity
OpenAI CEO Sam Altman pursues multi-trillion dollar investments, including from the UAE government, to build specialized GPUs and chips for powering AI systems. If funded, this initiative would accelerate OpenAI’s ML to new heights. (Link)
FCC bans deceptive AI voice robocalls
The FCC prohibits robocalls using AI to clone voices, declaring them “artificial” per existing law. The ruling aims to deter deception and confirm consumers are protected from exploitative automated calls mimicking trusted people. Violators face penalties as authorities crack down on illegal practices enabled by advancing voice synthesis tech. (Link)
Sam Altman seeks $7 trillion for new AI chip project
Sam Altman, CEO of OpenAI, is aiming to raise trillions of dollars from investors, including the UAE government, to revolutionize the semiconductor industry and overcome chip shortages critical for AI development.
Altman’s project seeks to expand global chip manufacturing capacity and enhance AI capabilities, requiring an investment of $5 trillion to $7 trillion, which would significantly exceed the current semiconductor industry size.
Sam Altman’s vision includes forming partnerships with OpenAI, investors, chip manufacturers, and energy suppliers to create chip foundries, requiring extensive funding that might involve debt financing.
FCC declares AI-voiced robocalls illegal
The FCC has made it illegal for robocalls to use AI-generated voices, allowing state attorneys general to take legal action against such practices.
AI-generated voices are now classified as “an artificial or prerecorded voice” under the Telephone Consumer Protection Act (TCPA), restricting their use for non-emergency purposes without prior consent.
The FCC’s ruling aims to combat scams and misinformation spread through AI-generated voice robocalls, providing state attorneys general with enhanced tools for enforcement.
Ex-Apple engineer sentenced to prison for stealing Apple Car trade secrets
Xiaolang Zhang, a former Apple engineer, was sentenced to 120 days in prison and three years supervised release for stealing self-driving car technology.
Zhang transferred sensitive documents and hardware related to Apple’s self-driving vehicle project to his wife’s laptop before planning to leave for a job in China.
In addition to his prison sentence, Zhang must pay restitution of $146,984, having originally faced up to 10 years in prison and a $250,000 fine.
Leading AI companies join new US safety consortium
The U.S. AI Safety Institute Consortium (AISIC) was announced by the Biden Administration as a response to an executive order, including significant AI entities like Amazon, Google, Apple, Microsoft, OpenAI, and NVIDIA among over 200 representatives.
The consortium aims to set safety standards and protect the U.S. innovation ecosystem, focusing on the development of safe and trustworthy AI through collaboration with various sectors, including healthcare and academia.
Notably absent from the consortium are major tech companies Tesla, Oracle, and Broadcom.
Midjourney might ban Biden and Trump images this election season
Midjourney, led by CEO David Holz, is reportedly considering banning images of political figures like Biden and Trump during the upcoming election season to prevent the spread of misinformation.
The company previously ended free trials for its AI image generator after AI-generated deepfakes, including ones of Trump getting arrested and the pope in a fashionable coat, went viral.
Despite implementing rules against misleading creations, Bloomberg was still able to generate altered images of Trump.
Scientists in UK set fusion record
A 40-year-old UK fusion reactor set a new world record for energy output, generating 69 megajoules of fusion energy for five seconds before its closure, advancing the pursuit of clean, limitless energy.
The achievement by the Joint European Torus (JET) enhances confidence in future fusion projects like ITER, which is under construction in France, despite JET’s operation concluding in December 2023.
The decision to shut down JET reflects complex dynamics, including Brexit-driven shifts in the UK’s fusion energy strategy, despite the experiment’s substantial contributions to fusion research.
A Daily Chronicle of AI Innovations in February 2024 – Day 08: AI Daily News – February 08th, 2024
Google rebrands Bard AI to Gemini and launches a new app and subscription
Google on Thursday announced a major rebrand of Bard, its artificial intelligence chatbot and assistant, including a fresh app and subscription options. Bard, a chief competitor to OpenAI’s ChatGPT, is now called Gemini, the same name as the suite of AI models that power the chatbot.
Google also announced new ways for consumers to access the AI tool: As of Thursday, Android users can download a new dedicated Android app for Gemini, and iPhone users can use Gemini within the Google app on iOS.
Google’s rebrand and app offerings underline the company’s commitment to pursuing — and investing heavily in — AI assistants or agents, a term often used to describe tools ranging from chatbots to coding assistants and other productivity tools.
Alphabet CEO Sundar Pichai highlighted the firm’s commitment to AI during the company’s Jan. 30 earnings call. Pichai said he eventually wants to offer an AI agent that can complete more and more tasks on a user’s behalf, including within Google Search, although he said there is “a lot of execution ahead.” Likewise, chief executives at tech giants from Microsoft to Amazon underlined their commitment to building AI agents as productivity tools.
Google’s Gemini changes are a first step to “building a true AI assistant,” Sissie Hsiao, a vice president at Google and general manager for Google Assistant and Bard, told reporters on a call Wednesday.
Google on Thursday also announced a new AI subscription option, for power users who want access to Gemini Ultra 1.0, Google’s most powerful AI model. Access costs $19.99 per month through Google One, the company’s paid storage offering. For existing Google One subscribers, that price includes the storage plans they may already be paying for. There’s also a two-month free trial available.
Thursday’s rollouts are available to users in more than 150 countries and territories, but they’re restricted to the English language for now. Google plans to expand language offerings to include Japanese and Korean soon, as well as other languages.
The Bard rebrand also affects Duet AI, Google’s former name for the “packaged AI agents” within Google Workspace and Google Cloud, which are designed to boost productivity and complete simple tasks for client companies including Wayfair, GE, Spotify and Pfizer. The tools will now be known as Gemini for Workspace and Gemini for Google Cloud.
Google One subscribers who pay for the AI subscription will also have access to Gemini’s assistant capabilities in Gmail, Docs, Sheets, Slides and Meet, executives told reporters Wednesday. Google hopes to incorporate more context into Gemini from users’ content in Gmail, Docs and Drive. For example, if you were responding to a long email thread, suggested responses would eventually take in context from both earlier messages in the thread and potentially relevant files in Google Drive.
As for the reason for the broad name change? Google’s Hsiao told reporters Wednesday that it’s about helping users understand that they’re interacting directly with the AI models that underpin the chatbot.
“Bard [was] the way to talk to our cutting-edge models, and Gemini is our cutting-edge models,” Hsiao said.
Eventually, AI agents could potentially schedule a group hangout by scanning everyone’s calendar to make sure there are no conflicts, book travel and activities, buy presents for loved ones or perform a specific job function such as outbound sales. Currently, though, the tools, including Gemini, are largely limited to tasks such as summarizing, generating to-do lists or helping to write code.
“We will again use generative AI there, particularly with our most advanced models and Bard,” Pichai said on the Jan. 30 earnings call, speaking about Google Assistant and Search. That “allows us to act more like an agent over time, if I were to think about the future and maybe go beyond answers and follow-through for users even more.”
In their latest blogs and Super Bowl commercial, Microsoft announced their intention to showcase the capabilities of Copilot exactly one year after their entry into the AI space with Bing Chat. They have announced updates to their Android and iOS applications to make the user interface more sleek and user-friendly, along with a carousel for follow-up prompts.
Microsoft also introduced new features to Designer in Copilot to take image generation a step further with the option to edit generated images using follow-up prompts. The customizations can be anything from highlighting the image subject to enhancing colors and modifying the background. For Copilot Pro users, additional features such as resizing the images and changing the aspect ratio are also available.
Why does this matter?
Copilot unifies the AI experience for users on all major platforms by enhancing the experience on mobile platforms and combining text and image generative abilities. Adding additional features to the image generation model greatly enhances the usability and accuracy of the final output for users.
Deepmind presents ‘self-discover’ framework for LLMs improvement
Google Deepmind, with the University of Southern California, has proposed a ‘self-discover’ prompting framework to enhance the performance of LLMs. Models such as GPT-4 and Google’s Palm 2 have witnessed a performance improvement on challenging reasoning benchmarks by 32% compared to the Chain of Thought (CoT) framework.
The framework works by identifying the reasoning technique intrinsic to the task and then proceeds to solve the task with the discovered technique ideal for the task. This framework also works with 10 to 40 times less inference computation, which means that the output will be generated faster using the same computational resources.
Why does this matter?
Improving the reasoning accuracy of an LLM is largely beneficial to users as they can achieve the desired output with fewer prompts and with greater accuracy. Moreover, reducing the inference directly translates to lower computational resource consumption, leading to lower operating costs for enterprises.
YouTube reveals plans to use AI tools to empower human creativity
YouTube CEO Neal Mohan revealed 4 new bets they have placed for 2024, with the first bet being on AI tools to empower human creativity on the platform. These AI tools include:
Dream Screen, which lets content creators generate custom backgrounds through AI with simple prompts of an idea.
Dream Track will allow content creators to generate custom music by just typing in the music theme and the artist they want to feature.
These new tools are mainly aimed to be used in YouTube Shorts and highlight a priority to move towards short-form content.
Why does this matter?
The democratization of AI tools for content creators allows them to offer better quality content to their viewers, which collectively boosts the quality of engagement on the platform. This also lowers the bar to entry for many aspiring artists and lets them create quality content without the added difficulty of generating custom video assets.
What else is happening in AI on February 08th 2024
OpenAI forms a new team for child safety research.
OpenAI revealed the existence of a child safety team through their careers page, where they had open positions for a child safety enforcement specialist. The team will study and review AI-generated content for “sensitive content” to ensure that the generated content aligns with their platform policy. This is to prevent the misuse of OpenAI’s AI tools by underage users. (Link)
Elon Musk to financially support efforts to use AI to decipher Roman scrolls.
Elon Musk shared on X that the Musk Foundation will fund the effort to decipher the scrolls charred by the volcanic eruption of Mt.Vesuvius. The project run by Nat Freidman (former CEO of GitHub) states that the next stage of the effort will cost approximately $2 million, after which they should be able to read entire scrolls. The total cost to decipher all the discovered scrolls is estimated to be around $10 million. (Link)
Microsoft’s Satya Nadella urges India to capitalize on the opportunity of AI.
The CEO of Microsoft, Satya Nadella, at the Taj Mahal Hotel in Mumbai, expressed how India has an unprecedented opportunity to capitalize on the AI wave owing to the 5 million+ programmers in the country. He also stated that Microsoft will help train over 2 million employees in India with the skills required for AI development. (Link)
OpenAI introduces the creation of endpoint-specific API keys for better security.
The OpenAI Developers account on X announced their latest feature for developers to create endpoint-specific API keys. These special API keys allow for granular access and better security as they will only let specific registered endpoints access the API. (Link)
Ikea introduces a new ChatGPT-powered AI assistant for interior design.
On the OpenAI GPT store, Ikea launched its AI assistant, which helps users envision and draw inspiration to design their interior spaces using Ikea products. The AI assistant helps users input specific dimensions, budgets, preferences, and requirements for personalized furniture recommendations through a familiar ChatGPT-style window. (Link)
OpenAI is developing two AI agents to automate entire work processes
OpenAI is developing two AI agents aimed at automating complex tasks; one is device-specific for tasks like data transfer and filling out forms, while the other focuses on web-based tasks such as data collection and booking tickets.
The company aims to evolve ChatGPT into a super-smart personal assistant for work, capable of performing tasks in the user’s style, incorporating the latest data, and potentially being marketed as a standalone product or part of a software suite.
OpenAI’s efforts complement trends where companies like Google and startups are working towards AI agents capable of carrying out actions on behalf of users.
Disney takes a $1.5B stake in Epic Games to build an ‘entertainment universe’ with Fortnite
Disney invests $1.5 billion in Epic Games to help create a new open games and entertainment universe, integrating characters and stories from franchises like Marvel, Star Wars, and Disney itself.
This collaboration aims to extend beyond traditional gaming, allowing players to interact, create, and share content within a persistent universe powered by Unreal Engine.
The partnership builds on previous collaborations between Disney and Epic Games, signaling Disney’s largest venture into the gaming world and hinting at future integration of gaming and entertainment experiences.
Google Bard rebrands as ‘Gemini’ with new Android app and Advanced model
Google has renamed its AI and related applications to Gemini, introducing a dedicated Android app and incorporating features formerly known as Duet AI in Google Workspace into the Gemini brand.
Gemini will replace Google Assistant as the default AI assistant on Android devices and is designed to be a comprehensive tool that is conversational, multimodal, and highly helpful.
Alongside the rebranding, Google announced the Gemini Ultra 1.0, a superior version of its large language model available through a new $20-monthly Google One AI Premium plan, aiming to set new benchmarks in AI capabilities.
Microsoft upgrades Copilot with enhanced image editing features, new AI model
Microsoft launched a new version of its Copilot artificial intelligence chatbot, featuring enhanced capabilities for users to create and edit images with natural language prompts.
The update introduces an AI model named Deucalion to enhance the “Balanced” mode of Copilot, promising richer and faster responses, alongside a redesigned user interface for better usability.
Additionally, Microsoft plans to further expand Copilot’s features, hinting at upcoming extensions and plugins to enhance functionality.
A Daily Chronicle of AI Innovations in February 2024 – Day 07: AI Daily News – February 07th, 2024
Apple’s MGIE: Making sky bluer with each prompt
Apple released a new open-source AI model called MGIE(MLLM Guided Image Editing). It has editing capabilities based on natural language instructions. MGIE leverages multimodal large language models to interpret user commands and perform pixel-level image manipulation. It can handle editing tasks like Photoshop-style modifications, optimizations, and local editing.
MGIE integrates MLLMs into image editing in two ways. First, it uses MLLMs to understand the user input, deriving expressive instructions. For example, if the user input is “make sky more blue,” the AI model creates an instruction, “increase the saturation of sky region by 20%.” The second usage of MLLM is to generate the output image.
Why does this matter?
MGIE from Apple is a breakthrough in the field of instruction-based image editing. It is an AI model focusing on natural language instructions for image manipulation, boosting creativity and accuracy. MGIE is also a testament to the AI prowess that Apple is developing, and it will be interesting to see how it leverages such innovations for upcoming products.
Meta will label your content if you post an AI-generated image
Meta is developing advanced tools to label metadata for each image posted on their platforms like Instagram, Facebook, and Threads. Labeling will be aligned with “AI-generated” information in the C2PA and IPTC technical standards. These standards will allow Meta to detect AI-generated images from other platforms like Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
Meta wants to differentiate between human-generated and AI-generated content on its platform to reduce misinformation. However, this tool is also limited, as it can only detect still images. So, AI-generated video content still goes undetected on Meta platforms.
Why does this matter?
The level of misinformation and deepfakes generated by AI has been alarming. Meta is taking a step closer to reducing misinformation by labeling metadata and declaring which images are AI-generated. It also aligns with the European Union’s push for tech giants like Google and Meta to label AI-generated content.
Abacus AI recently released a new open-source language model called Smaug-72B. It outperforms GPT-3.5 and Mistral Medium in several benchmarks. Smaug 72B is the first open-source model with an average score of over 80 in major LLM evaluations. According to the latest rankings from Hugging Face, It is one of the leading platforms for NLP research and applications.
Smaug 72B is a fine-tuned version of Qwn 72B, a powerful language model developed by a team of researchers at Alibaba Group. It helps enterprises solve complex problems by leveraging AI capabilities and enhancing automation.
Why does this matter?
Smaug 72B is the first open-source model to achieve an average score of 80 on the Hugging Face Open LLM leaderboard. It is a breakthrough for enterprises, startups, and small businesses, breaking the monopoly of big tech companies over AI innovations.
What Else Is Happening in AI on February 07th, 2024
OpenAI introduces watermarks to DALL-E 3 for content credentials.
OpenAI has added watermarks to the image metadata, enhancing content authenticity. These watermarks will distinguish between human and AI-generated content verified through websites like “Content Credentials Verify.” Watermarks will be added to images from the ChatGPT website and DALL-E 3 API, which will be visible to mobile users starting February 12th. However, the feature is limited to still images only. (Link)
Microsoft introduces Face Check for secure identity verification.
Microsoft has unveiled “Face Check,” a new facial recognition feature, as part of its Entra Verified ID digital identity platform. Face Check provides an additional layer of security for identity verification by matching a user’s real-time selfie with their government ID or employee credentials. Azure AI services power face check and aims to enhance security while respecting privacy and compliance through a partnership approach. Microsoft’s partner BEMO has already implemented Face Check for employee verification(Link)
Stability AI has launched an upgraded version of its Stable Video Diffusion (SVD).
Stability AI has launched SVD 1.1, an upgraded version of its image-to-video latent diffusion model, Stable Video Diffusion (SVD). This new model generates 4-second, 25-frame videos at 1024×576 resolution with improved motion and consistency compared to the original SVD. It is available via Hugging Face and Stability AI subscriptions. (Link)
CheXagent has introduced a new AI model for automated chest X-ray interpretation.
CheXagent, developed in partnership with Stability AI by Stanford University, is a foundation model for chest X-ray interpretation. It automates the analysis and summary of chest X-ray images for clinical decision-making. CheXagent combines a clinical language model, a vision encoder, and a network to bridge vision and language. CheXbench is available to evaluate the performance of foundation models on chest X-ray interpretation tasks. (Link)
LinkedIn launched an AI feature to introduce users to new connections.
LinkedIn launched a new AI feature that helps users start conversations. Premium subscribers can use this feature when sending messages to others. The AI uses information from the subscriber’s and the other person’s profiles to suggest what to say, like an introduction or asking about their work experience. This feature was initially available for recruiters and has now been expanded to help users find jobs and summarize posts in their feeds. (Link)
Apple releases a new AI model
Apple has released “MGIE,” an open-source AI model for instruction-based image editing, utilizing multimodal large language models to interpret instructions and manipulate images.
MGIE offers features like Photoshop-style modification, global photo optimization, and local editing, and can be used through a web demo or integrated into applications.
The model is available as an open-source project on GitHub and Hugging Face Spaces.
Apple still working on foldable iPhones and iPads
Apple is developing “at least two” foldable iPhone prototypes inspired by the design of Samsung’s Galaxy Z Flip, though production is not planned for 2024 or 2025.
The company faces challenges in creating a foldable iPhone that matches the thinness of current models while accommodating battery and display needs.
Apple is also working on a folding iPad, approximately the size of an iPad Mini, aiming to launch a seven- or eight-inch model around 2026 or 2027.
Deepfake ‘face swap’ attacks surged 704% last year, study finds. Link
Deepfake “face swap” attacks increased by 704% from the first to the second half of 2023, as reported by iProov, a British biometric firm.
The surge in attacks is attributed to the growing ease of access to generative AI tools, making sophisticated face swaps both user-friendly and affordable.
Deepfake scams, including a notable case involving a finance worker in Hong Kong losing $25mln, highlight the significant threat posed by these technologies.
Humanity’s most distant space probe jeopardized by computer glitch
A computer glitch that began on November 14 has compromised Voyager 1’s ability to send back telemetry data, affecting insight into the spacecraft’s condition.
The glitch is suspected to be due to a corrupted memory bit in the Flight Data Subsystem, making it challenging to determine the exact cause without detailed data.
Despite the issue, signals received indicate Voyager 1 is still operational and receiving commands, with efforts ongoing to resolve the telemetry data problem.
A Daily Chronicle of AI Innovations in February 2024 – Day 06: AI Daily News – February 06th, 2024
Qwen 1.5: Alibaba’s 72 B, multilingual Gen AI model
Alibaba has released Qwen 1.5, the latest iteration of its open-source generative AI model series. Key upgrades include expanded model sizes up to 72 billion parameters, integration with HuggingFace Transformers for easier use, and multilingual capabilities covering 12 languages.
Comprehensive benchmarks demonstrate significant performance gains over the previous Qwen version across metrics like reasoning, human preference alignment, and long-context understanding. They compared Qwen1.5-72B-Chat with GPT-3.5, and the results are shown below:
The unified release aims to provide researchers and developers an advanced foundation model for possible downstream applications. Quantized versions allow low-resource deployment. Overall, Qwen 1.5 represents steady progress towards Alibaba’s goal of creating a “truly ‘good” generative model aligned with ethical objectives.
Why does this matter?
This release signals Alibaba’s intent to compete with Big Tech firms in steering the AI race. The upgraded model enables researchers and developers to create more capable assistants and tools. Qwen 1.5’s advancements could enhance education, healthcare, and sustainability solutions.
AI software reads ancient words unseen since Caesar’s era
Nat Friedman (former CEO of Github) uses AI to decode ancient Herculaneum scrolls charred in the 79AD eruption of Mount Vesuvius. These unreadable scrolls are believed to contain a vast trove of texts that could reshape our view of figures like Caesar and Jesus Christ. Past failed attempts to unwrap them physically led Brent Seales to pioneer 3D scanning methods. However, the initial software struggled with the complexity.
A $1 million AI contest was launched ten months ago, attracting coders worldwide. Contestants developed new techniques, exposing ink patterns invisible to the human eye. The winning method by Luke Farritor and the team successfully reconstructed over a dozen readable columns of Greek text from one scroll. While not yet revelatory, this breakthrough after centuries has scholars hopeful more scrolls can now be unveiled using similar AI techniques, potentially surfacing lost ancient works.
Why does this matter?
The ability to reconstruct lost ancient knowledge illustrates AI’s immense potential to reveal invisible insights. Just like how technology helps discover hidden oil resources, AI could unearth ‘info treasures’ expanding our history, science, and literary canons. These breakthroughs capture the public imagination and signal a new data-uncovering AI industry.
Roblox users can chat cross-lingually in milliseconds
Roblox has developed a real-time multilingual chat translation system, allowing users speaking different languages to communicate seamlessly while gaming. It required building a high-speed unified model covering 16 languages rather than separate models. Comprehensive benchmarks show the model outperforms commercial APIs in translating Roblox slang and linguistic nuances.
The sub-100 millisecond translation latency enables genuine cross-lingual conversations. Roblox aims to eventually support all linguistic communities on its platform as translation capabilities expand. Long-term goals include exploring automatic voice chat translation to better convey tone and emotion. Overall, the specialized AI showcases Roblox’s commitment to connecting diverse users globally by removing language barriers.
Why does this matter?
It showcases AI furthering connection and community-building online, much like transport innovations expanding in-person interactions. Allowing seamless cross-cultural communication at scale illustrates tech removing barriers to global understanding. Platforms facilitating positive societal impacts can inspire user loyalty amid competitive dynamics.
What Else Is Happening in AI on February 06th, 2024
Semafor tests AI for responsible reporting
News startup Semafor launched a product called Signals – AI-aided curation of top stories by its reporters. An internal search tool helps uncover diverse sources in multiple languages. This showcases responsibly leveraging AI to enhance human judgment as publishers adapt to changes in consumer web habits. (Link)
Bumble’s new AI feature sniffs out fakes for safer matchmaking
Bumble has launched a new AI tool called Deception Detector to proactively identify and block fake profiles and scams. Testing showed it automatically blocked 95% of spam accounts, reducing user reports by 45%. This builds on Bumble’s efforts to use AI to make its dating and friend-finding platforms safer. (Link)
Huawei repurposes factory to prioritize AI chip production over its bestselling phones
Huawei is slowing production of its popular Mate 60 phones to ramp up manufacturing of its Ascend AI chips instead, due to growing domestic demand. This positions Huawei to boost China’s AI industry, given US export controls limiting availability of chips like Nvidia’s. It shows the strategic priority of AI for Huawei and China overall. (Link)
UK to spend $125M+ to tackle challenges around AI
The UK government will invest over $125 million to support responsible AI development and position the UK as an AI leader. This will fund new university research hubs across the UK, a partnership with the US on the responsible use of AI, regulators overseeing AI, and 21 projects to develop ML technologies to drive productivity. (Link)
Europ Assistance partnered with TCS to boost IT operations with AI
Europ Assistance, a leading global assistance and travel insurance company, has selected TCS as its strategic partner to transform its IT operations using AI. By providing real-time insights into Europ Assistance’s technology stack, TCS will support their business growth, improve customer service delivery, and enable the company to achieve its mission of providing “Anytime, Anywhere” services across 200+ countries. (Link)
AI reveals hidden text of 2,000-year-old scroll
A group of classical scholars, assisted by three computer scientists, has partially decoded a Roman scroll buried in the Vesuvius eruption in A.D. 79 using artificial intelligence and X-ray technology.
The scroll, part of the Herculaneum Papyri, is believed to contain texts by Philodemus on topics like food and music, revealing insights into ancient Roman life.
The breakthrough, facilitated by a $700,000 prize from the Vesuvius Challenge, led to the reading of over 2,000 Greek letters from the scroll, with hopes to decode 85% of it by the end of the year.
Adam Neumann wants to buy WeWork
Adam Neumann, ousted CEO and co-founder of WeWork, expressed interest in buying the company out of bankruptcy, claiming WeWork has ignored his attempts to get more information for a bid.
Neumann’s intent to purchase WeWork has been supported by funding from Dan Loeb’s hedge fund Third Point since December 2023, though WeWork has shown disinterest in his offer.
Despite WeWork’s bankruptcy and prior refusal of a $1 billion funding offer from Neumann in October 2022, Neumann believes his acquisition could offer valuable synergies and management expertise.
Midjourney hires veteran Apple engineer to build its ‘Orb’
Generative AI startup Midjourney has appointed Ahmad Abbas, a former Apple Vision Pro engineer, as head of hardware to potentially develop a project known as the ‘Orb’ focusing on 3D data capture and AI-generated content.
Abbas has extensive experience in hardware engineering, including his time at Apple and Elon Musk’s Neuralink, and has previously worked with Midjourney’s founder, David Holz, at Leap Motion.
While details are scarce, the ‘Orb’ may relate to generating and managing 3D environments and could signify Midjourney’s entry into creating hardware aimed at real-time generated video games and AI-powered 3D worlds.
Meta to start labeling AI-generated images
Meta is expanding the labeling of AI-generated imagery on its platforms, including content created with rivals’ tools, to improve transparency and detection of synthetic content.
The company already labels images created by its own “Imagine with Meta” tool but plans to extend this to images generated by other companies’ tools, focusing on elections around the world.
Meta is also exploring the use of generative AI in content moderation, while acknowledging challenges in detecting AI-generated videos and audio, and aims to require user disclosure for synthetic content.
Bluesky opens its doors to the public
Bluesky, funded by Twitter co-founder Jack Dorsey and aiming to offer an alternative to Elon Musk’s X, is now open to the public after being invite-only for nearly a year.
The platform, notable for its decentralized infrastructure called the AT Protocol and open-source code, allows developers and users greater control and customization, including over content moderation.
Bluesky challenges existing social networks with its focus on user experience and is preparing to introduce open federation and content moderation tools to enhance its decentralized social media model.
Bumble’s new AI tool identifies and blocks scam accounts, fake profiles
Bumble has introduced a new AI tool named Deception Detector to identify and block scam accounts and fake profiles, which during tests blocked 95% of such accounts and reduced user reports of spam by 45%.
The development of Deception Detector is in response to user concerns about fake profiles and scams on dating platforms, with Bumble research highlighting these as major issues for users, especially women.
Besides Deception Detector, Bumble continues to enhance user safety and trust through features like Private Detector for blurring unsolicited nude images and AI-generated icebreakers in Bumble For Friends.
A Daily Chronicle of AI Innovations in February 2024 – Day 05: AI Daily News – February 05th, 2024
How to access Google Bard in Canada as of February 05th, 2024
TLDR: ChatGPT helped me jump start my hybrid to avoid towing fee $100 and helped me not pay the diagnostic fee $150 at the shop.
My car wouldn’t start this morning and it gave me a warning light and message on the car’s screen. I took a picture of the screen with my phone, uploaded it to ChatGPT 4 Turbo, described the make/model, my situation (weather, location, parked on slope), and the last time it had been serviced.
I asked what was wrong, and it told me that the auxiliary battery was dead, so I asked it how to jump start it. It’s a hybrid, so it told me to open the fuse box, ground the cable and connect to the battery. I took a picture of the fuse box because I didn’t know where to connect, and it told me that ground is usually black and the other part is usually red. I connected it and it started up. I drove it to the shop, so it saved me the $100 towing fee. At the shop, I told them to replace my battery without charging me the $150 “diagnostic fee,” since ChatGPT already told me the issue. The hybrid battery wasn’t the issue because I took a picture of the battery usage with 4 out of 5 bars. Also, there was no warning light. This saved me $250 in total, and it basically paid for itself for a year.
I can deal with some inconveniences related to copyright and other concerns as long as I’m saving real money. I’ll keep my subscription, because it’s pretty handy. Thanks for reading!
source: r/artificialintelligence
Top comment: I can’t wait until AI like this is completely integrated into a home system like Alexa, and we have a friendly voice that just walks us through everything.
Google MobileDiffusion: AI Image generation in <1s on phones
Google Research introduced MobileDifussion, which can generate images from Android and iPhone with a resolution of 512*512 pixels in about half a second. What’s impressive about this is its comparably small model size of just 520M parameters, which makes it uniquely suited for mobile deployment. This is significantly less than the Stable Diffusion and SDX, which boast a billion parameters.
MobileDiffusion has the capability to enable a rapid image generation experience while typing text prompts.
Google researchers measured the performance of MobileDiffusion on both iOS and Android devices using different runtime optimizers.
Why does this matter?
MobileDifussion represents a paradigm shift in the AI image generation horizon, especially in the smartphone or mobile space. Image generation models like Stable Diffusion and DALL-E are billions of parameters in size and require powerful desktops or servers to run, making them impossible to run on a handset. With superior efficiency in terms of latency and size, MobileDiffusion has the potential to be a friendly option for mobile deployments.
Hugging Face enables custom chatbot creation in 2-clicks
Hugging Face tech lead Philipp Schmid said users can now create custom chatbots in “two clicks” using “Hugging Chat Assistant.” Users’ creations are then publicly available. Schmid compares the feature to OpenAI’s GPTs feature and adds they can use “any available open LLM, like Llama2 or Mixtral.”
Why does this matter?
Hugging Face’s Chat Assistant has democratized AI creation and simplified the process of building custom chatbots, lowering the barrier to entry. Also, open-source means more innovation, enabling a more comprehensive range of individuals and organizations to harness the power of conversational AI.
Google to release ChatGPT Plus competitor ‘Gemini Advanced’ next week
According to a leaked web text, Google might release its ChatGPT Plus competitor named “Gemini Advanced” on February 7th. This suggests a name change for the Bard chatbot after Google announced “Bard Advanced” at the end of last year. The Gemini Advanced ChatBot will be powered by the eponymous Gemini model in the Ultra 1.0 release.
According to Google, Gemini Advanced is far more capable of complex tasks like coding, logical reasoning, following nuanced instructions, and creative collaboration. Google also wants to include multimodal capabilities, coding features, and detailed data analysis. Currently, the model is optimized for English but can respond to other global languages sooner.
Why does this matter?
Google’s Gemini Advanced will be an answer for OpenAI’s ChatGPT Plus. It signals increasing competition in the AI language model market, potentially leading to improved features and services for users. The only question is whether Ultra can beat GPT-4, and if that’s the case, what counters can OpenAI do that will be interesting to see.
What Else Is Happening in AI on February 05th, 2024
NYU’s latest AI innovation echoes a toddler’s language learning journey
New York University (NYU) researchers have developed an AI system to behave like a toddler and learn a new language precisely. For this purpose, the AI model uses video recording from a child’s perspective to understand the language and its meaning, respond to new situations, and learn from new experiences. (Link)
GenAI to disrupt 200K U.S. entertainment industry jobs by 2026
CVL Economics surveyed 300 executives from six U.S. entertainment industries between Nov 17 and Dec 22, 2023, to understand the impact of Generative AI. The survey found that 203,800 jobs could get disrupted in the entertainment space by 2026. 72% of the companies surveyed are early adopters, of which 25% already use it, and 47% plan to implement it soon. (Link)
Apple CEO Tim Cook hints at major AI announcement ‘later this year’
Apple CEO Tim Cook hinted at Apple making a major AI announcement later this year during a meeting with the analysts during the first-quarter earnings showcase. He further added that there’s a massive opportunity for Apple with Gen AI and AI as they look to compete with cutting-edge AI companies like Microsoft, Google, Amazon, OpenAI, etc. (Link)
The U.S. Police Department turns to AI to review bodycam footage
Over the last decade, U.S. police departments have spent millions of dollars to equip their officers with body-worn cameras that record their daily work. However, the data collected needs to be adequately analyzed to identify patterns. Now, the department is turning to AI to examine this stockpile of footage to identify problematic officers and patterns of behavior. (Link)
Adobe to provide support for Firefly in the latest Vision Pro release
Adobe’s popular image-generating software, Firefly, is now announced for the new version of Apple Vision Pro. It now joins the company’s previously announced Lightroom photo app. People expected Adobe Lightroom to be a native Apple Vision Pro app from launch, but now it’s adding Firefly AI, the GenAI tool that produces images based on text descriptions. (Link)
Deepfake costs company $25 million
Scammers utilized AI-generated deepfakes to impersonate a multinational company’s CFO in a video call, tricking an employee into transferring over $25 million.
The scam involved deepfake representations of the CFO and senior executives, leading the employee to believe the request for a large money transfer was legitimate.
Hong Kong police have encountered over 20 cases involving AI deepfakes to bypass facial recognition, emphasizing the increasing abuse of deepfake technology in fraud and identity theft. Read more.
Amazon finds $1B jackpot in its 100 million+ IPv4 address stockpile
The scarcity of IPv4 addresses, akin to digital real estate, has led Amazon Web Services (AWS) to implement a new pricing scheme charging $0.005 per public IPv4 address per hour, opening up a significant revenue stream.
With IPv4 addresses running out due to the limit of 4.3 billion unique IDs and increasing demand from the growth of smart devices, AWS urges a transition to IPv6 to alleviate shortage and high administrative costs.
Amazon controls nearly 132 million IPv4 addresses, with an estimated valuation of $4.6 billion; the new pricing strategy could generate between $400 million to $1 billion annually from their use in AWS services.
Meta oversight board calls company’s deepfake rule ‘incoherent’
The Oversight Board criticizes Meta’s current rules against faked videos as “incoherent” and urges the company to urgently revise its policy to better prevent harm from manipulated media.
It suggests that Meta should not only focus on how manipulated content is created but should also add labels to altered videos to inform users, rather than just relying on fact-checkers.
Meta is reviewing the Oversight Board’s recommendations and will respond publicly within 60 days, while the altered video of President Biden continues to spread on other platforms like X (formerly Twitter).
Snap lays off 10% of workforce to ‘reduce hierarchy’
Snapchat’s parent company, Snap, announced plans to lay off 10% of its workforce, impacting over 500 employees, as part of a restructuring effort to promote growth and reduce hierarchy.
The layoffs will result in pre-tax charges estimated between $55 million to $75 million, primarily for severance and related costs, with the majority of these costs expected in the first quarter of 2024.
The decision for a second wave of layoffs comes after a previous reorganization focused on reducing layers within the product team and follows a reported increase in user growth and a net loss in Q3 earnings
First UK patients receive experimental messenger RNA cancer therapy
A revolutionary new cancer treatment known as mRNA therapy has been administered to patients at Hammersmith hospital in west London. The trial has been set up to evaluate the therapy’s safety and effectiveness in treating melanoma, lung cancer and other solid tumours.
The new treatment uses genetic material known as messenger RNA – or mRNA – and works by presenting common markers from tumours to the patient’s immune system.
The aim is to help it recognise and fight cancer cells that express those markers.
“New mRNA-based cancer immunotherapies offer an avenue for recruiting the patient’s own immune system to fight their cancer,” said Dr David Pinato of Imperial College London, an investigator with the trial’s UK arm.
Pinato said this research was still in its early stages and could take years before becoming available for patients. However, the new trial was laying crucial groundwork that could help develop less toxic and more precise new anti-cancer therapies. “We desperately need these to turn the tide against cancer,” he added.
A number of cancer vaccines have recently entered clinical trials across the globe. These fall into two categories: personalised cancer immunotherapies, which rely on extracting a patient’s own genetic material from their tumours; and therapeutic cancer immunotherapies, such as the mRNA therapy newly launched in London, which are “ready made” and tailored to a particular type of cancer.
The primary aim of the new trial – known as Mobilize – is to discover if this particular type of mRNA therapy is safe and tolerated by patients with lung or skin cancers and can shrink tumours. It will be administered alone in some cases and in combination with the existing cancer drug pembrolizumab in others.
Researchers say that while the experimental therapy is still in the early stages of testing, they hope it may ultimately lead to a new treatment option for difficult-to-treat cancers, should the approach be proven to be safe and effective.
Nearly one in two people in the UK will be diagnosed with cancer in their lifetime. A range of therapies have been developed to treat patients, including chemotherapy and immune therapies.
However, cancer cells can become resistant to drugs, making tumours more difficult to treat, and scientists are keen to seek new approaches for tackling cancers.
Preclinical testing in both cell and animal models of cancer provided evidence that new mRNA therapy had an effect on the immune system and could be offered to patients in early-phase clinical trials.
AI Coding Assistant Tools in 2024 Compared
The article explores and compares most popular AI coding assistants, examining their features, benefits, and transformative impact on developers, enabling them to write better code: 10 Best AI Coding Assistant Tools in 2024
GitHub Copilot
CodiumAI
Tabnine
MutableAI
Amazon CodeWhisperer
AskCodi
Codiga
Replit
CodeT5
OpenAI Codex
Challenges for programmers
Programmers and developers face various challenges when writing code. Outlined below are several common challenges experienced by developers.
Syntax and Language Complexity: Programming languages often have intricate syntax rules and a steep learning curve. Understanding and applying the correct syntax can be challenging, especially for beginners or when working with unfamiliar languages.
Bugs and Errors: Debugging is an essential part of the coding process. Identifying and fixing bugs and errors can be time-consuming and mentally demanding. It requires careful analysis of code behavior, tracing variables, and understanding the flow of execution.
Code Efficiency and Performance: Writing code that is efficient, optimized, and performs well can be a challenge. Developers must consider algorithmic complexity, memory management, and resource utilization to ensure their code runs smoothly, especially in resource-constrained environments.
Compatibility and Integration: Integrating different components, libraries, or third-party APIs can introduce compatibility challenges. Ensuring all the pieces work seamlessly together and correctly handle data interchangeably can be complex.
Scaling and Maintainability: As projects grow, managing and scaling code becomes more challenging. Ensuring code remains maintainable, modular, and scalable can require careful design decisions and adherence to best practices.
Collaboration and Version Control: Coordinating efforts, managing code changes, and resolving conflicts can be significant challenges when working in teams. Ensuring proper version control and effective collaboration becomes crucial to maintain a consistent and productive workflow.
Time and Deadline Constraints: Developers often work under tight deadlines, adding pressure to the coding process. Balancing speed and quality becomes essential, and delivering code within specified timelines can be challenging.
Keeping Up with Technological Advancements: The technology landscape continually evolves, with new frameworks, languages, and tools emerging regularly. Continuous learning and adaptation pose ongoing challenges for developers in their professional journey.
Documentation and Code Readability: Writing clear, concise, and well-documented code is essential for seamless collaboration and ease of future maintenance. Ensuring code readability and comprehensibility can be challenging, especially when codebases become large and complex.
Security and Vulnerability Mitigation: Building secure software requires careful consideration of potential vulnerabilities and implementing appropriate security measures. Addressing security concerns, protecting against cyber threats, and ensuring data privacy can be challenging aspects of coding.
Now let’s see how this type of tool can help developers to avoid these challenges.
Advantages of using these tools
Reduce Syntax and Language Complexity: These tools help programmers tackle the complexity of programming languages by providing real-time suggestions and corrections for syntax errors. It assists in identifying and rectifying common mistakes such as missing brackets, semicolons, or mismatched parentheses.
Autocompletion and Intelligent Code Suggestions: It excels at autocompleting code snippets, saving developers time and effort. They analyze the context of the written code and provide intelligent suggestions for completing code statements, variables, method names, or function parameters. These suggestions are contextually relevant and can significantly speed up the coding process, reduce typos, and improve code accuracy.
Error Detection and Debugging Assistance: AI Code assistants can assist in detecting and resolving errors in code. They analyze the code in real time, flagging potential errors or bugs and providing suggestions for fixing them. By offering insights into the root causes of errors, suggesting potential solutions, or providing links to relevant documentation, these tools facilitate debugging and help programmers identify and resolve issues more efficiently.
Code Efficiency and Performance Optimization: These tools can aid programmers in optimizing their code for efficiency and performance. They can analyze code snippets and identify areas that could be improved, such as inefficient algorithms, redundant loops, or suboptimal data structures. By suggesting code refactorings or alternative implementations, developers write more efficient code, consume fewer resources, and perform better.
Compatibility and Integration Support: This type of tool can assist by suggesting compatible libraries or APIs based on the project’s requirements. They can also help with code snippets or guide seamlessly integrating specific functionalities. This support ensures smoother integration of different components, reducing potential compatibility issues and saving developers time and effort.
Code Refactoring and Improvement Suggestions: It can analyze existing codebases and suggest refactoring and improving code quality. They can identify sections of code that are convoluted, difficult to understand or violate best practices. Through this, programmers enhance code maintainability, readability, and performance by suggesting more readable, modular, or optimized alternatives.
Collaboration and Version Control Management: Users can integrate with version control systems and provide conflict resolution suggestions to minimize conflicts during code merging. They can also assist in tracking changes, highlighting modifications made by different team members, and ensuring smooth collaboration within a project.
Documentation and Code Readability Enhancement: These tools can assist in improving code documentation and readability. They can prompt developers to add comments, provide documentation templates, or suggest more precise variable and function names. By encouraging consistent documentation practices and promoting readable code, this tool can facilitate code comprehension, maintainability, and ease of future development.
Learning and Keeping Up with Technological Advancements: These tools can act as learning companions for programmers. They can provide documentation references, code examples, or tutorials to help developers understand new programming concepts, frameworks, or libraries. So developers can stay updated with the latest technological advancements and broaden their knowledge base.
Security and Vulnerability Mitigation: It can help programmers address security concerns by providing suggestions and best practices for secure coding. They can flag potential security vulnerabilities, such as injection attacks or sensitive data exposure, and offer guidance on mitigating them.
GitHub Copilot
GitHub Copilot, developed by GitHub in collaboration with OpenAI, aims to transform the coding experience with its advanced features and capabilities. It utilizes the potential of AI and machine learning to enhance developers’ coding efficiency, offering a variety of features to facilitate more efficient code writing.
Features:
Integration with Popular IDEs: It integrates with popular IDEs like Visual Studio, Neovim, Visual Studio Code, and JetBrains for a smooth development experience.
Support for multiple languages: Supports various languages such as TypeScript, Golang, Python, Ruby, etc.
Code Suggestions and Function Generation: Provides intelligent code suggestions while developers write code, offering snippets or entire functions to expedite the coding process and improve efficiency.
Easy Auto-complete Navigation: Cycle through multiple auto-complete suggestions with ease, allowing them to explore different options and select the most suitable suggestion for their code.
While having those features, Github Copilot includes some weaknesses that need to be considered when using it.
Code Duplication: GitHub Copilot generates code based on patterns it has learned from various sources. This can lead to code duplication, where developers may unintentionally use similar or identical code segments in different parts of their projects.
Inefficient code: It sometimes generates code that is incorrect or inefficient. This can be a problem, especially for inexperienced developers who may not be able to spot the errors.
Insufficient test case generation:When writing bigger codes, developers may start to lose touch with their code. So testing the code is a must. Copilot may lack the ability to generate a sufficient number of test cases for bigger codes. This can make it more difficult to identify and debug problems and to ensure the code’s quality.
Amazon CodeWhisperer
Amazon CodeWhisperer boosts developers’ coding speed and accuracy, enabling faster and more precise code writing. Amazon’s AI technology powers it and can suggest code, complete functions, and generate documentation.
Features:
Code suggestion: Offers code snippets, functions, and even complete classes based on the context of your code, providing relevant and contextually accurate suggestions. This aids in saving time and mitigating errors, resulting in a more efficient and reliable coding process.
Function completion: Helps complete functions by suggesting the following line of code or by filling in the entire function body.
Documentation generation: Generates documentation for the code, including function summaries, parameter descriptions, and return values.
Security scanning: It scans the code to identify possible security vulnerabilities. This aids in preemptively resolving security concerns, averting potential issues.
Language support: Available for various programming languages, including Python, JavaScript, C#, Rust, PHP, Kotlin, C, SQL, etc.
Integration with IDEs: It can be used with JetBrains IDEs, VS Code and more.
OpenAI Codex
This tool offers quick setup, AI-driven code completion, and natural language prompting, making it easier for developers to write code efficiently and effectively while interacting with the AI using plain English instructions.
Features:
Quick Setup: OpenAI Codex provides a user-friendly and efficient setup process, allowing developers to use the tool quickly and seamlessly.
AI Code Completion Tool: Codex offers advanced AI-powered code completion, providing accurate and contextually relevant suggestions to expedite the coding process and improve productivity.
Natural Language Prompting: With natural language prompting, Codex enables developers to interact with the AI more intuitively, providing instructions and receiving code suggestions based on plain English descriptions.
AI Weekly Rundown (January 27 to February 04th, 2024)
Major AI announcements from OpenAI, Google, Meta, Amazon, Apple, Adobe, Shopify, and more.
OpenAI announced new upgrades to GPT models + new features leaked – They are releasing 2 new embedding models – Updated GPT-3.5 Turbo with 50% cost drop – Updated GPT-4 Turbo preview model – Updated text moderation model – Introducing new ways for developers to manage API keys and understand API usage – Quietly implemented a new ‘GPT mentions’ feature to ChatGPT (no official announcement yet). The feature allows users to integrate GPTs into a conversation by tagging them with an ‘@’.
Prophetic introduces Morpheus-1, world’s 1st ‘multimodal generative ultrasonic transformer’ – This innovative AI device is crafted with the purpose of delving into the intricacies of human consciousness by facilitating control over lucid dreams. Morpheus-1 operates by monitoring sleep phases and gathering dream data to enhance its AI model. It is set to be accessible to beta users in the spring of 2024.
Google MobileDiffusion: AI Image generation in <1s on phones – MobileDiffusion is Google’s new text-to-image tool tailored for smartphones. It swiftly generates top-notch images from text in under a second. With just 520 million parameters, it’s notably smaller than other models like Stable Diffusion and SDXL, making it ideal for mobile use.
New paper on MultiModal LLMs introduces over 200 research cases + 20 multimodal LLMs – This paper ‘MM-LLMs’ discusses recent advancements in MultiModal LLMs which combine language understanding with multimodal inputs or outputs. The authors provide an overview of the design and training of MM-LLMs, introduce 26 existing models, and review their performance on various benchmarks. They also share key training techniques to improve MM-LLMs and suggest future research directions.
Hugging Face enables custom chatbot creation in 2-clicks – The tech lead of Hugging Face, Philipp Schmid, revealed that users can now create their own chatbot in “two clicks” using the “Hugging Chat Assistant.” The creation made by the users will be publicly available to the rest of the community.
Meta released Code Llama 70B- a new, more performant version of its LLM for code generation. It is available under the same license as previous Code Llama models. CodeLlama-70B-Instruct achieves 67.8 on HumanEval, beating GPT-4 and Gemini Pro.
Elon Musk’s Neuralink implants its brain chip in the first human – Musk’s brain-machine interface startup, Neuralink, has successfully implanted its brain chip in a human. In a post on X, he said “promising” brain activity had been detected after the procedure and the patient was “recovering well”.
Google to release ChatGPT Plus competitor ‘Gemini Advanced’ next week – Google might release its ChatGPT Plus competitor “Gemini Advanced” on February 7th. It suggests a name change for the Bard chatbot, after Google announced “Bard Advanced” at the end of last year. The Gemini Advanced Chatbot will be powered by eponymous Gemini model in the Ultra 1.0 release.
Alibaba announces Qwen-VL; beats GPT-4V and Gemini – Alibaba’s Qwen-VL series has undergone a significant upgrade with the launch of two enhanced versions, Qwen-VL-Plus and Qwen-VL-Max.These two models perform on par with Gemini Ultra and GPT-4V in multiple text-image multimodal tasks.
GenAI to disrupt 200K U.S. entertainment industry jobs by 2026 – CVL Economics surveyed 300 executives from six U.S. entertainment industries between Nov 17 and Dec 22, 2023, to understand the impact of Generative AI. The survey found that 203,800 jobs could get disrupted in the entertainment space by 2026.
Apple CEO Tim Cook hints at major AI announcement ‘later this year’ – Apple CEO Tim Cook hinted at Apple making a major AI announcement later this year during a meeting with the analysts during the first-quarter earnings showcase. He further added that there’s a massive opportunity for Apple in Gen AI and AI horizon.
Microsoft released its annual ‘Future of Work 2023’ report with a focus on AI – It highlights the 2 major shifts in how work is done in the past three years, driven by remote and hybrid work technologies and the advancement of Gen AI. This year’s edition focuses on integrating LLMs into work and offers a unique perspective on areas that deserve attention.
Amazon researchers have developed “Diffuse to Choose” AI tool – It’s a new image inpainting model that combines the strengths of diffusion models and personalization-driven models, It allows customers to virtually place products from online stores into their homes to visualize fit and appearance in real-time.
Cambridge researchers developed a robotic sensor reading braille 2x faster than humans – The sensor, which incorporates AI techniques, was able to read braille at 315 words per minute with 90% accuracy. It makes it ideal for testing the development of robot hands or prosthetics with comparable sensitivity to human fingertips.
Shopify boosts its commerce platform with AI enhancements – Shopify is releasing new features for its Winter Edition rollout, including an AI-powered media editor, improved semantic search, ad targeting with AI, and more. The headline feature is Shopify Magic, which applies different AI models to assist merchants in various ways.
OpenAI is building an early warning system for LLM-aided biological threat creation – In an evaluation involving both biology experts and students, it found that GPT-4 provides at most a mild uplift in biological threat creation accuracy. While this uplift is not large enough to be conclusive, the finding is a starting point for continued research and community deliberation.
LLaVA-1.6 released with improved reasoning, OCR, and world knowledge – It supports higher-res inputs, more tasks, and exceeds Gemini Pro on several benchmarks. It maintains the data efficiency of LLaVA-1.5, and LLaVA-1.6-34B is trained ~1 day with 32 A100s. LLaVA-1.6 comes with base LLMs of different sizes: Mistral-7B, Vicuna-7B/13B, Hermes-Yi-34B.
Google rolls out huge AI updates:
Launches an AI image generator – ImageFX- It allows users to create and edit images using a prompt-based UI. It offers an “expressive chips” feature, which provides keyword suggestions to experiment with different dimensions of image creation. Google claims to have implemented technical safeguards to prevent the tool from being used for abusive or inappropriate content.
Google has released two new AI tools for music creation: MusicFX and TextFX- MusicFX generates music based on user prompts but has limitations with stringed instruments and filters out copyrighted content. TextFX, conversely, is a suite of modules designed to aid in the lyrics-writing process, drawing inspiration from rap artist Lupe Fiasco.
Google’s Bard is now powered by the Gemini Pro globally, supporting 40+ languages- The chatbot will have improved understanding and summarizing content, reasoning, brainstorming, writing, and planning capabilities. Google has also extended support for more than 40 languages in its “Double check” feature, which evaluates if search results are similar to what Bard generates.
Google’s Bard can now generate photos using its Imagen 2 text-to-image model, catching up to its rival ChatGPT Plus- Bard’s image generation feature is free, and Google has implemented safety measures to avoid generating explicit or offensive content.
Google Maps introduces a new AI feature to help users discover new places- The feature uses LLMs to analyze over 250M locations and contributions from over 300M Local Guides. Users can search for specific recommendations, and the AI will generate suggestions based on their preferences. Its currently being rolled out in the US.
Adobe to provide support for Firefly in the latest Vision Pro release – Adobe’s popular image-generating software, Firefly, is now announced for the new version of Apple Vision Pro. It now joins the company’s previously announced Lightroom photo app.
Amazon launches an AI shopping assistant called Rufus in its mobile app – Rufus is trained on Amazon’s product catalog and information from the web, allowing customers to chat with it to help find products, compare them, and get recommendations. The AI assistant will initially be available in beta to select US customers, with plans to expand to more users in the coming weeks.
Meta plans to deploy custom in-house chips later this year to power AI initiatives – It could help reduce the company’s dependence on Nvidia chips and control the costs associated with running AI workloads. It could potentially save hundreds of millions of dollars in annual energy costs and billions in chip purchasing costs. The chip will work in coordination with commercially available GPUs.
And there was more… – Google’s Bard surpasses GPT-4 to the Second spot on the leaderboard – Google Cloud has partnered with Hugging Face to advance Gen AI development – Arc Search combines a browser, search engine, and AI for unique browsing experience – PayPal is set to launch new AI-based products – NYU’s latest AI innovation echoes a toddler’s language learning journey – Apple Podcasts in iOS 17.4 now offers AI transcripts for almost every podcast – OpenAI partners with Common Sense Media to collaborate on AI guidelines – Apple’s ‘biggest’ iOS update may bring a lot of AI to iPhones – Shortwave email client will show AI-powered summaries automatically – OpenAI CEO Sam Altman explores AI chip collaboration with Samsung and SK Group – Generative AI is seen as helping to identify merger & acquisition targets – OpenAI bringing GPTs (AI models) into conversations, Type @ and select the GPT – Midjourney Niji V6 is out – The U.S. Police Department turns to AI to review bodycam footage – Yelp uses AI to provide summary reviews on its iOS app and much more – The New York Times is creating a team to explore the use of AI in its newsroom – Semron aims to replace chip transistors with ‘memcapacitors’ – Microsoft LASERs away LLM inaccuracies with a new method – Mistral CEO confirms ‘leak’ of new open source model nearing GPT-4 performance – Synthesia launches LLM-powered assistant to turn any text file into video in minutes – Fashion forecasters are using AI to make decisions about future trends and styles – Twin Labs automates repetitive tasks by letting AI take over your mouse cursor – The Arc browser is incorporating AI to improve bookmarks and search results – The Allen Institute for AI is open-sourcing its text-generating AI models – Apple CEO Tim Cook confirmed that AI features are coming ‘later this year’ – Scientists use AI to create an early diagnostic test for ovarian cancer – Anthropic launches ‘dark mode’ visual option for its Claude chatbot
A Daily Chronicle of AI Innovations in February 2024 – Day 03: AI Daily News – February 03rd, 2024
Google plans to launch ChatGPT Plus competitor next week
Google is set to launch “Gemini Advanced,” a ChatGPT Plus competitor, possibly on February 7th, signaling a name change from “Bard Advanced” announced last year.
The Gemini Advanced chatbot, powered by the Ultra 1.0 model, aims to excel in complex tasks such as coding, logical reasoning, and creative collaboration.
Gemini Advanced, likely a paid service, aims to outperform ChatGPT by integrating with Google services for task completion and information retrieval, while also incorporating an image generator similar to DALL-E 3 and reaching GPT-4 levels with the Gemini Pro model.
Apple tested its self-driving car tech more than ever last year
Apple significantly increased its autonomous vehicle testing in 2023, almost quadrupling its self-driving miles on California’s public roads compared to the previous year.
The company’s testing peaked in August with 83,900 miles, although it remains behind more advanced companies like Waymo and Cruise in total miles tested.
Apple has reportedly scaled back its ambitions for a fully autonomous vehicle, now focusing on developing automated driving-assistance features similar to those offered by other automakers.
Hugging Face launches open source AI assistant maker to rival OpenAI’s custom GPTs
Hugging Face has launched Hugging Chat Assistants, a free, customizable AI assistant maker that rivals OpenAI’s subscription-based custom GPTs.
The new tool allows users to choose from a variety of open source large language models (LLMs) for their AI assistants, unlike OpenAI’s reliance on proprietary models.
An aggregator page for third-party customized Hugging Chat Assistants mimics OpenAI’s GPT Store, offering users various assistants to choose from and use.
Google’s MobileDiffusion generates AI images on mobile devices in less than a second
Google’s MobileDiffusion enables the creation of high-quality images from text on smartphones in less than a second, leveraging a model that is significantly smaller than existing counterparts.
It achieves this rapid and efficient text-to-image conversion through a novel architecture including a text encoder, a diffusion network, and an image decoder, producing 512 x 512-pixel images swiftly on both Android and iOS devices.
While demonstrating a significant advance in mobile AI capabilities, Google has not yet released MobileDiffusion publicly, viewing this development as a step towards making text-to-image generation widely accessible on mobile platforms.
Meta warns investors Mark Zuckerberg’s hobbies could kill him in SEC filing
Meta warned investors in its latest SEC filing that CEO Mark Zuckerberg’s engagement in “high-risk activities” could result in serious injury or death, impacting the company’s operations.
The company’s 10-K filing listed combat sports, extreme sports, and recreational aviation as risky hobbies of Zuckerberg, noting his achievements in Brazilian jiu-jitsu and pursuit of a pilot’s license.
This cautionary statement, highlighting the potential risks of Zuckerberg’s personal hobbies to Meta’s future, was newly included in the 2023 filing and is a departure from the company’s previous filings.
A Daily Chronicle of AI Innovations in February 2024 – Day 02: AI Daily News – February 02nd, 2024
Google bets big on AI with huge upgrades
1.Launches an AI image generator – ImageFX
It allows users to create and edit images using a prompt-based UI. It offers an “expressive chips” feature, which provides keyword suggestions to experiment with different dimensions of image creation. Google claims to have implemented technical safeguards to prevent the tool from being used for abusive or inappropriate content.
Additionally, images generated using ImageFX will be tagged with a digital watermark called SynthID for identification purposes. Google is also expanding the use of Imagen 2, the image model, across its products and services.
3. Google’s Bard is now Gemini Pro-powered globally, supporting 40+ languages The chatbot will have improved understanding and summarizing content, reasoning, brainstorming, writing, and planning capabilities. Google has also extended support for more than 40 languages in its “Double check” feature, which evaluates if search results are similar to what Bard generates.
4. Google’s Bard can now generate photos using its Imagen 2 text-to-image model Bard’s image generation feature is free, and Google has implemented safety measures to avoid generating explicit or offensive content.
5. Google Maps introduces a new AI feature to help users discover new places The feature uses LLMs to analyze over 250M locations and contributions from over 300M Local Guides. Users can search for specific recommendations, and the AI will generate suggestions based on their preferences. It’s currently being rolled out in the US. (Source)
Amazon launches an AI shopping assistant for product recommendations
Amazon has launched an AI-powered shopping assistant called Rufus in its mobile app. Rufus is trained on Amazon’s product catalog and information from the web, allowing customers to chat with it to get help with finding products, comparing them, and getting recommendations.
The AI assistant will initially be available in beta to select US customers, with plans to expand to more users in the coming weeks. Customers can type or speak their questions into the chat dialog box, and Rufus will provide answers based on their training.
Why does this matter?
Rufus can save time and effort compared to traditional search and browsing. However, the quality of responses remains to be seen. For Amazon, this positions them at the forefront of leveraging AI to enhance the shopping experience. If effective, Rufus could increase customer engagement on Amazon and drive more sales. It also sets them apart from competitors.
Meta to deploy custom in-house chips to reduce dependence on costly NVIDIA
Meta plans to deploy a new version of its custom chip aimed at supporting its AI push in its data centers this year, according to an internal company document. The chip, a second generation of Meta’s in-house silicon line, could help reduce the company’s dependence on Nvidia chips and control the costs associated with running AI workloads. The chip will work in coordination with commercially available graphics processing units (GPUs).
Why does this matter?
Meta’s deployment of its own chip could potentially save hundreds of millions of dollars in annual energy costs and billions in chip purchasing costs. It also gives them more control over the core hardware for their AI systems versus relying on vendors.
The Biden administration plans to use the Defense Production Act to force tech companies to inform the government when they train AI models above a compute threshold.
Between the lines:
These actions are one of the first implementations of the broad AI Executive Order passed last year. In the coming months, more provisions from the EO will come into effect.
OpenAI and Google will likely need to disclose training details for the successors to GPT-4 and Gemini. The compute thresholds are still a pretty murky area – it’s unclear exactly when companies need to involve the government.
And while the EO was a direct response from the executive branch, Senators on both sides of the aisle are eager to take action on AI (and Big Tech more broadly).
Elsewhere in AI regulation:
Bipartisan senators unveil the DEFIANCE Act, which would federally criminalize deepfake porn, in the wake of Taylor Swift’s viral AI images.
The FCC wants to officially recognize AI-generated voices as “artificial,” which would make AI-powered robocalls illegal.
And a look at the US Copyright Office, which plans to release three very consequential reports this year on AI and copyright law.
What Else Is Happening in AI on February 02nd, 2024
The Arc browser is incorporating AI to improve bookmarks and search results
The new features in Arc for Mac and Windows include “Instant Links,” which allows users to skip search engines and directly ask the AI bot for specific links. Another feature, called Live Folders, will provide live-updating streams of data from various sources. (Link)
The Allen Institute for AI is open-sourcing its text-generating AI models
The model is OLMo, along with the dataset used to train them. These models are designed to be more “open” than others, allowing developers to use them freely for training, experimentation, and commercialization. (Link)
Apple CEO Tim Cook confirmed that AI features are coming ‘later this year’
This aligns with reports that iOS 18 could be the biggest update in the operating system’s history. Apple’s integration of AI into its software platforms, including iOS, iPadOS, and macOS, is expected to include advanced photo manipulation and word processing enhancements. This announcement suggests that Apple has ambitious plans to compete with Google and Samsung in the AI space. (Link)
Scientists use AI to create an early diagnostic test for ovarian cancer
Researchers at the Georgia Tech Integrated Cancer Research Center have developed a new test for ovarian cancer using AI and blood metabolite information. The test has shown 93% accuracy in detecting ovarian cancer in samples from the study group, outperforming existing tests. They have also developed a personalized approach to ovarian cancer diagnosis, using a patient’s individual metabolic profile to determine the probability of the disease’s presence. (Link)
Anthropic launches a new ‘dark mode’ visual option for its Claude chatbot. (Link)
Just click on the Profile > Appearance > Select Dark.
Meta’s plans to crush Google and Microsoft in AI
Mark Zuckerberg announced Meta’s intent to aggressively enter the AI market, aiming to outpace Microsoft and Google by leveraging the vast amount of data on its platforms.
Meta plans to make an ambitious long-term investment in AI, estimated to cost over $30 billion yearly, on top of its existing expenses.
The company’s strategy includes building advanced AI products and services for users of Instagram and WhatsApp, focusing on achieving general intelligence (AGI).
Tim Cook says big Apple AI announcement is coming later this year
Apple CEO Tim Cook confirmed that generative AI software features are expected to be released to customers later this year, during Apple’s quarterly earnings call.
The upcoming generative AI features are anticipated to be part of what could be the “biggest update” in iOS history, according to Bloomberg’s Mark Gurman.
Tim Cook emphasized Apple’s commitment to not disclose too much before the actual release but hinted at significant advancements in AI, including applications in iOS, iPadOS, and macOS.
Meta plans new in-house AI chip ‘Artemis’
Meta is set to deploy its new AI chip “Artemis” to reduce dependence on Nvidia chips, aiming for cost savings and enhanced computing to power AI-driven experiences.
By developing in-house AI silicon like Artemis, Meta aims to save on energy and chip costs while maintaining a competitive edge in AI technologies against rivals.
The Artemis chip is focused on inference processes, complementing the GPUs Meta uses, with plans for a broader in-house AI silicon project to support its computational needs.
Google’s Bard gets a free AI image generator to compete with ChatGPT
Google introduced a free image generation feature to Bard, using Imagen 2, to create images from text, offering competition to OpenAI’s multimodal chatbots like ChatGPT.
The feature introduces a watermark for AI-generated images and implements safeguards against creating images of known people or explicit content, but it’s not available in the EU, Switzerland, and the UK.
Bard with Gemini Pro has expanded to over 40 languages and 230 countries, and Google is also integrating Imagen 2 into its products and making it available for developers via Google Cloud Vertex AI.
Former CIA hacker sentenced to 40 years in prison
Joshua Schulte, a former CIA software engineer, was sentenced to 40 years in prison for passing classified information to WikiLeaks, marking the most damaging disclosure of classified information in U.S. history.
The information leaked, known as the Vault 7 release in 2017, exposed CIA’s hacking tools and methods, including techniques for spying on smartphones and converting internet-connected TVs into listening devices.
Schulte’s actions have been described as causing exceptionally grave harm to U.S. national security by severely compromising CIA’s operational capabilities and putting both personnel and intelligence missions at risk.
A Daily Chronicle of AI Innovations in February 2024 – Day 01: AI Daily News – February 01st, 2024
Shopify boosts its commerce platform with AI enhancements
Shopify unveiled over 100 new updates to its commerce platform, with AI emerging as a key theme. The new AI-powered capabilities are aimed at helping merchants work smarter, sell more, and create better customer experiences.
The headline feature is Shopify Magic, which applies different AI models to assist merchants in various ways. This includes automatically generating product descriptions, FAQ pages, and other marketing copy. Early tests showed Magic can create SEO-optimized text in seconds versus the minutes typically required to write high-converting product blurbs.
On the marketing front, Shopify is infusing its Audiences ad targeting tool with more AI to optimize campaign performance. Its new semantic search capability better understands search intent using natural language processing.
Why does this matter?
The AI advancements could provide Shopify an edge over rivals. In addition, the new features will help merchants capitalize on the ongoing boom in online commerce and attract more customers across different channels and markets. This also reflects broader trends in retail and e-commerce, where AI is transforming everything from supply chains to customer service.
OpenAI explores how good GPT-4 is at creating bioweapons
OpenAI is developing a blueprint for evaluating the risk that a large language model (LLM) could aid someone in creating a biological threat.
In an evaluation involving both biology experts and students, it found that GPT-4 provides at most a mild uplift in biological threat creation accuracy. While this uplift is not large enough to be conclusive, the finding is a starting point for continued research and community deliberation.
Why does this matter?
LLMs could accelerate the development of bioweapons or make them accessible to more people. OpenAI is working on an early warning system that could serve as a “tripwire” for potential misuse and development of biological weapons.
LLaVA-1.6: Improved reasoning, OCR, and world knowledge
LLaVA-1.6 releases with improved reasoning, OCR, and world knowledge. It even exceeds Gemini Pro on several benchmarks. Compared with LLaVA-1.5, LLaVA-1.6 has several improvements:
Increasing the input image resolution to 4x more pixels.
Better visual reasoning and OCR capability with an improved visual instruction tuning data mixture.
Better visual conversation for more scenarios, covering different applications. Better world knowledge and logical reasoning.
Efficient deployment and inference with SGLang.
Along with performance improvements, LLaVA-1.6 maintains the minimalist design and data efficiency of LLaVA-1.5. The largest 34B variant finishes training in ~1 day with 32 A100s.
Why does this matter?
LLaVA-1.6 is an upgrade to LLaVA-1.5, which has a simple and efficient design and great performance akin to GPT-4V.. LLaVA-1.5 has since served as the foundation of many comprehensive studies of data, models, and capabilities of large multimodal models (LMM) and has enabled various new applications. It shows the growing open-source AI community with fast-moving and freewheeling standards.
The uncomfortable truth about AI’s impact on the workforce is playing out inside the big AI companies themselves.
The article discusses how the increasing investment in AI by tech giants like Microsoft and Google is affecting the global workforce. It highlights that these companies are slowing hiring in non-AI areas and, in some cases, cutting jobs in those divisions as they ramp up spending on AI. For example, Alphabet’s workforce decreased from over 190,000 employees in 2022 to around 182,000 at the end of 2023, with further layoffs in 2024. The article emphasizes that the integration of AI has raised concerns about job displacement and the need for a workforce strategy that integrates AI and keeps jobs through the modification of roles. It also mentions the importance of being adaptable and learning about the new wave of jobs that may emerge due to technological advances. The impact of AI on different types of jobs, including white-collar and high-paid positions, is also discussed
The article provides insights into how the adoption of AI by major tech companies is reshaping the workforce and the potential implications for job stability and creation. It underscores the need for a proactive workforce strategy to integrate AI and mitigate job displacement, emphasizing the importance of adaptability and learning to navigate the evolving job market. The discussion on the impact of AI on different types of jobs, including high-paid white-collar positions, offers a comprehensive view of the challenges and opportunities associated with AI integration in the workforce.
Cisco’s head of security thinks that we’re headed into an AI phishing nightmare
The article discusses the potential impact of AI on cybersecurity, particularly in the context of phishing attacks. Jeetu Patel, Cisco’s executive vice president and general manager of security and collaboration, expresses concerns about the increasing sophistication of phishing scams facilitated by generative AI tools. These tools can produce written work that is challenging for humans to detect, making it easier for attackers to create convincing email traps. Patel emphasizes that this trend could make it harder for individuals to distinguish between legitimate activity and malicious attacks, posing a significant challenge for cybersecurity. The article highlights the potential implications of AI advancement for cybersecurity and the need for proactive measures to address these emerging threats.
1
The article provides insights into the growing concern about the potential misuse of AI in the context of cybersecurity, specifically in relation to phishing attacks. It underscores the need for heightened awareness and proactive strategies to counter the increasing sophistication of AI-enabled cyber threats. The concerns raised by Cisco’s head of security shed light on the evolving nature of cybersecurity challenges in the face of advancing AI technology, emphasizing the importance of staying ahead of potential threats and vulnerabilities.
What Else Is Happening in AI on February 01st, 2024
Microsoft LASERs away LLM inaccuracies.
Microsoft Research introduces Layer-Selective Rank Reduction (or LASER). While the method seems counterintuitive, it makes models trained on large amounts of data smaller and more accurate. With LASER, researchers can “intervene” and replace one weight matrix with an approximate smaller one. (Link)
Mistral CEO confirms ‘leak’ of new open source model nearing GPT-4 performance.
A user with the handle “Miqu Dev” posted a set of files on HuggingFace that together comprised a seemingly new open-source LLM labeled “miqu-1-70b.” Mistral co-founder and CEO Arthur Mensch took to X to clarify and confirm. Some X users also shared what appeared to be its exceptionally high performance at common LLM tasks, approaching OpenAI’s GPT-4 on the EQ-Bench. (Link)
Synthesia launches LLM-powered assistant to turn any text file or link into AI video.
Synthesia launched a tool to turn text-based sources into full-fledged synthetic videos in minutes. It builds on Synthesia’s existing offerings and can work with any document or web link, making it easier for enterprise teams to create videos for internal and external use cases. (Link)
AI is helping pick what you’ll wear in two years.
Fashion forecasters are leveraging AI to make decisions about the trends and styles you’ll be scrambling to wear. A McKinsey survey found that 73% of fashion executives said GenAI will be a business priority next year. AI predicts trends by scraping social media, evaluating runway looks, analyzing search data, and generating images. (Link)
Twin Labs automates repetitive tasks by letting AI take over your mouse cursor.
Paris-based startup Twin Labs wants to build an automation product for repetitive tasks, but what’s interesting is how they’re doing it. The company relies on models like GPT-4V) to replicate what humans usually do. Twin Labs is more like a web browser. The tool can automatically load web pages, click on buttons, and enter text. (Link)
SpaceX signs deal to launch private space station Link
Starlab Space has chosen SpaceX’s Starship megarocket to launch its large and heavy space station, Starlab, into orbit, aiming for a launch in a single flight.
Starlab, a venture between Voyager Space and Airbus, is designed to be fully operational from a single launch without the need for space assembly, targeting a 2028 operational date.
The space station will serve various users including space agencies, researchers, and companies, with SpaceX’s Starship being the only current launch vehicle capable of handling its size and weight.
Mistral CEO confirms ‘leak’ of new open source AI model nearing GPT-4 performance. Link
Mistral’s CEO Arthur Mensch confirmed that an ‘over-enthusiastic employee’ from an early access customer leaked a quantized and watermarked version of an old model, hinting at Mistral’s ongoing development of a new AI model nearing GPT-4’s performance.
The leaked model, labeled “miqu-1-70b,” was shared on HuggingFace and 4chan, attracting attention for its high performance on common language model benchmarks, leading to speculation it might be a new Mistral model.
Despite the leak, Mensch hinted at further advancements with Mistral’s AI models, suggesting the company is close to matching or even exceeding GPT-4’s performance with upcoming versions.
OpenAI says GPT-4 poses little risk of helping create bioweapons Link
OpenAI released a study indicating that GPT-4 poses at most slight risk in assisting in the creation of a bioweapon, according to their conducted research involving biology experts and students.
The study, motivated by concerns highlighted in President Biden’s AI Executive Order, aimed to reassure that while GPT-4 may slightly facilitate the creation of bioweapons, the impact is not statistically significant.
In experiments with 100 participants, GPT-4 marginally improved the ability to plan a bioweapon, with biology experts showing an 8.8% increase in plan accuracy, underscoring the need for further research on AI’s potential risks.
Microsoft, OpenAI to invest $500 million in AI robotics startup Link
Microsoft and OpenAI are leading a funding round to invest $500 million in Figure AI, a robotics startup competing with Tesla’s Optimus.
Figure AI, known for its commercial autonomous humanoid robot, could reach a valuation of $1.9 billion with this investment.
The startup, which partnered with BMW for deploying its robots, aims to address labor shortages and increase productivity through automation.
Tech startup Prophetic introduced Halo, an AI-powered headband designed to induce lucid dreams, allowing wearers to control their dream experiences.
Prophetic is seeking beta users, particularly from previous lucid dream studies, to help create a large EEG dataset to refine Halo’s effectiveness in inducing lucid dreams.
Interested individuals can reserve the Halo headband with a $100 deposit, leading towards an estimated price of $2,000, with shipments expected in winter 2025.
The latest, weirdest way to play Doom involves using genetically modified E. coli bacteria, as explored in a paper by MIT’s Media Lab PhD student Lauren “Ren” Ramlan.
Ramlan’s method doesn’t turn E. coli into a computer but uses the bacteria’s ability to fluoresce as pixels on an organic screen to display Doom screenshots.
Although innovative, the process is impractical for gameplay, with the organic display managing only 2.5 frames in 24 hours, amounting to a game speed of 0.00003 FPS.
How to generate a PowerPoint in seconds with Copilot
I want to learn AI, but I don't know the best way to do it because I am currently a beginner in Python and SQL, which I studied in college. I also studied math and statistics in college. Can anyone suggest how I can develop my skills and advance in this field? submitted by /u/Own_Hawk2887 [link] [comments]
Hi, can active recommend a video tool that specifically can handle the following. I have two cartoon characters. I have 1,000 scripts already created with sone instructions for charachter facial expressions, arm gestures, walking etc Want to create 1,000 videos (can do one at a time but batches preferred) Video of 1 minute each ai generated from the upload of the two character images and a text file that includes instructions for expressions, actions such as walking.. and script of dialogue. Text to speak has to be multiple languages - japanese.. korean.. malay etc Auto captions of the speech and other captions we might want to add I have developed that can integrate with an API if needed and can format script and instructions into whatever file format needed etc. Basically animation from i struction and script created automatically Any advice welcome. It’s for social enterprise and free service. submitted by /u/Far_Incident_8999 [link] [comments]
In a world awash with AI-generated deepfakes, we’re stepping into what can only be described as the “Deep Doubt” era. Remember when you could look at a photo or video and believe it captured reality? Well, those days are fading fast, as AI tools are making it easier to create convincing fakes. But the real twist? Now, anyone can dismiss genuine evidence as just another AI trick. Whether it’s a viral image, a political rally, or even a historical record, it’s becoming harder to know what’s real anymore. What Is “Deep Doubt” Anyway? Deep doubt is the new wave of skepticism bred by the rise of AI-generated media, especially deepfakes. With AI able to create photos, videos, and even voices that mimic reality, people are starting to question everything—even legitimate, fact-based media. This dangerous trend lets people more easily claim that things didn’t happen, pointing to AI as a scapegoat. For instance, in recent months, former President Donald Trump claimed an image with E. Jean Carroll (the writer who sued him for sexual assault) was an AI fabrication. Spoiler: it wasn’t. The “Liar’s Dividend”: How Doubt Became a Tool Back in 2019, legal scholars Danielle Citron and Robert Chesney coined the term "liar's dividend." It’s a neat way of saying: "If everyone thinks something could be fake, liars can claim everything is fake." Simple, right? In practice, though, it's much more alarming. Think about a politician caught on camera saying something incriminating—well, now they can just shrug and say, "It’s probably AI." And a surprising number of people will believe it. We’re seeing this play out everywhere. For example, conspiracy theorists have used AI to push wild claims that public figures—like President Joe Biden—are holograms or digitally created. If you thought your aunt’s Facebook feed was bad before, it’s about to get worse. How We Got Here: A History of Manipulated Media Manipulating media isn’t new. Even before Photoshop, photographers would crop, frame, and manipulate images. It’s just that AI has made it so much easier. Thanks to AI deepfakes, a high schooler in their basement can now create a video of your favorite actor doing or saying just about anything. All they need is a few hours and the right tools. And with each new breakthrough in AI tech, these fakes get more lifelike—and the lies more believable. The Rise of Deep Doubt in Politics and Law It’s not just social media users causing chaos. The real implications of deep doubt hit when it infects more serious areas, like politics and the legal system. Federal judges are already nervous about deepfakes being used to introduce fake evidence into trials or to undermine real evidence. Imagine trying to convict someone based on video footage, only to have the defense argue, "That’s not real; it’s AI." This uncertainty stretches beyond the courtroom. It’s seeping into how we talk about history. Could AI-manipulated media rewrite our understanding of past events? With deepfakes, historical photos and footage—once seen as ironclad proof—could be called into question. If we can’t trust our records, where does that leave us? The Social Fallout: Trust is on Life Support Our trust in online content is eroding fast, and deep doubt is driving us further down the rabbit hole. Enter the “Dead Internet Theory,” a growing conspiracy that claims most online interactions are generated by AI and bots. The fact that it’s so easy to imagine this as true shows just how deep doubt has spread. What’s next? If we stop trusting what we see online, social trust could unravel. It’s already happening as conspiracy theories fester in spaces where AI deepfakes and misinformation thrive. Even tech giants like Microsoft have raised alarms about a potential “post-epistemic world”—a world where fact and fiction blur so much that no one can tell them apart. How to Fight Deep Doubt: Context, Credibility, and Common Sense So, how do we combat deep doubt in this AI-warped world? The key lies in context. Just as a historian would verify a centuries-old parchment by cross-referencing sources, we must do the same with digital media today. Verifying a suspicious video or image means looking for multiple corroborating sources—different angles of the event, original reporting, and reputable eyewitness accounts. It's not enough to trust a single screenshot or viral post. Relying on trustworthy, verified accounts is our best shot at discerning truth from AI-generated fiction. Even though AI detection tools exist, they aren’t foolproof and can often give false results. In the end, good old-fashioned critical thinking, combined with credible sourcing, remains our best defense against deep doubt. Conclusion: Welcome to the New Normal (Unfortunately) As the line between real and fake continues to blur, we’re all going to need to be a little more skeptical—and a lot more vigilant. AI has opened Pandora’s box, and now anyone can weaponize deep doubt to sow confusion, undermine the truth, and erode social trust. The only question left is: How long until we stop believing anything we see online? submitted by /u/These-Salary-9215 [link] [comments]
Generative AI (Gen AI) is a cutting-edge form of artificial intelligence (AI) that can generate new data like images, text, or even code. Unlike traditional AI used for analysis, Gen AI delves deeper to generate new insights and possibilities. Learn in detail about Generative AI and its techniques, application, and impact on businesses. Ever wondered how Baymax scans the body effortlessly, providing the vital stats and diagnosis of the person with preventive measures for any medical issue? Well, it will be considered an AI-generated response. We have not yet reached that stage in AI and healthcare, however, Gen AI is making a lot of movements in the healthcare sector by enabling more accurate diagnoses, personalized treatments, and accelerated drug discovery with healthcare AI companies taking the lead. Read: https://www.seaflux.tech/blogs/genai-impact-in-healthcare-industry submitted by /u/Sandyrocks77 [link] [comments]
How are the latest advancements in AI changing the way digital assistants provide personalized experiences? What specific features do you find most impactful? submitted by /u/CLEVERTONE1 [link] [comments]
Have you heard the buzz about NPUs? If not, you’re about to—because neural processing units (NPUs) are set to become a game-changer for personal computers. You’ve heard of CPUs and GPUs, but NPUs are the next big thing in hardware, and soon, most PCs will have them. But what exactly are they, and why are they so important? NPU 101: What is it? NPU stands for Neural Processing Unit. It’s a specialized processor designed to handle AI and machine learning tasks much more efficiently than traditional CPUs or GPUs. Think of NPUs as AI accelerators—they boost the performance of AI features without draining your battery or heating up your device. How Do NPUs Work? AI tasks, like image recognition or language processing, rely on something called neural networks. NPUs are optimized to process these networks faster and with less power consumption. While GPUs are great at AI tasks, they require lots of energy. NPUs, on the other hand, hit the sweet spot by offering a balance between performance and efficiency. NPU vs. CPU vs. GPU: The Breakdown CPU (Central Processing Unit): Your PC’s brain, handling general tasks. GPU (Graphics Processing Unit): Focused on handling complex graphics and gaming, but also useful for AI work. NPU (Neural Processing Unit): The AI specialist, running tasks like voice assistants, image processing, and more—without hogging system resources or battery life. Why Do NPUs Matter? If you’ve ever run AI tools like ChatGPT or Stable Diffusion on your PC, you might have noticed they’re power-hungry and slow down your machine. NPUs can run these tasks faster and more efficiently. Plus, they free up your CPU and GPU for other tasks, meaning your entire system runs smoother. NPUs in Action: What Can They Do? Windows PCs with NPUs can run AI-powered features, like Microsoft’s Windows Studio Effects (think background blurring on video calls or automatic eye contact correction). In the future, more apps will take advantage of NPUs, offering smoother AI-enhanced experiences—from audio and video editing to gaming. What’s the Catch? Right now, not every PC has an NPU. But brands like Intel, AMD, and Qualcomm are starting to pack them into their newest chips. So, if you're shopping for a new laptop, it’s something to keep an eye on. Should You Get a PC with an NPU Now? It’s tempting to future-proof your device, but NPUs are still in their early stages. For most people, they’re a nice-to-have, not a need-to-have—yet. Many AI tools still run on the cloud, meaning your current hardware will work fine. But if you're looking to stay ahead of the curve, picking up a PC with an NPU could be a smart move for AI-heavy tasks. The AI revolution is just getting started, and NPUs will be at the heart of it. Whether you’re a gamer, creator, or just want a smoother PC experience, NPUs are set to make everything faster, smarter, and more efficient. Note: i used chatgpt to rewrite and make it better due to lack of my English skills submitted by /u/These-Salary-9215 [link] [comments]
By bringing together researchers, practitioners, and nonprofit organizations, G42 and Microsoft are creating a collaborative environment for developing best practices in responsible AI. This partnership is a step toward global AI governance. With endorsement from the UAE’s AIATC and support from both the US and UAE governments, this partnership has significant geopolitical implications. It shows how AI can be regulated and advanced responsibly at an international level. Could this partnership influence global AI policy and regulation? source link: https://www.mediaoffice.abudhabi/en/technology/g42-and-microsoft-to-establish-two-research-centres-in-abu-dhabi-to-advance-development-of-responsible-artificial-intelligence/ submitted by /u/jealousrooster88 [link] [comments]
I want to create floor plans in bulk. If possible with 1 click 🙂 Also a 3d render would be great. Is there an AI app that can do that? I don't need anything professional. It's more important for me to do them really quickly and in bulk. submitted by /u/vovr [link] [comments]
Apologies if I'm not using the correct terms, I just started to get into AI and I'm not super-knowledgable at the moment. This came to mind the other day - I have this video that's otherwise good but it was saved at a lower resolution than it should have been, so it lacks detail in some areas. (You know how if you send someone a video via email on the iPhone it prompts you to send it in a lower resolution? It's that kind of situation.) It occurred to me that maybe an AI could analyze the video and fill in the gaps with generative AI. Here's my logic. Let's say you have a low-quality photo of a wooden fence. You can tell it's a wooden fence (and if you asked an AI they would say it's a wooden fence) but it lacks those finer details like wood grain. So you feed it into the AI, it says "ah, that's a wooden fence and wooden fences have wood grain" and then fills in the grain because that's what it would expect. It uses the photo or image as a baseline and only makes minor changes rather than altering the whole thing, like it might if you said "redo this image in cartoon style" or something like that. I played around with this concept in DiffusionBee but the AI wasn't picking up what I was putting down. Depending on settings like prompt strength, it would either not do anything or give me what was almost a whole new image inspired by the base image. I'm no AI expert so maybe I wasn't using the right search terms, but I did some Googling and it doesn't seem like there's an easy way to do this. Other image tools like Topaz kind of helped but clearly it's not doing that "generative" aspect like I was referring to. It's also a bit hard to tell how tools like Topaz actually work, because it's kind of a black box and obviously they aren't going to spill the beans on their proprietary method. Is there a reason why this is so hard? It seems like it might be an obvious solution to improving video or images that are somewhat lower quality. submitted by /u/UnfoldedHeart [link] [comments]
Unraveling August 2023: Spotlight on Generative AI, Tech, Sports and the Month’s Hottest Trends.
Welcome to the hub of the most intriguing and newsworthy trends of August 2023! In this era of rapid development, we know it’s hard to keep up with the ever-changing world of ai, technology, sports, entertainment, and global events. That’s why we’ve curated this one-stop blog post to provide a comprehensive overview of what’s making headlines and shaping conversations. From the mind-bending advancements in artificial intelligence to captivating news from the world of sports and entertainment, we’ll guide you through the highlights of the month. So sit back, get comfortable, and join us as we dive into the core of August 2023!
OpenAI has released a guide for teachers using ChatGPT in their classroom. This guide includes suggested prompts, explanations about ChatGPT’s functionality and limitations, as well as insights into AI detectors and bias.
The company also highlights stories of educators successfully using ChatGPT to enhance student learning and provides prompts to help teachers get started. Additionally, their FAQ section offers further resources and answers to common questions about teaching with and about AI.
OpenAI’s teaching with AI empowers teachers with resources and insights to effectively use ChatGPT in classrooms, benefiting students’ learning experiences. While Competitors like Bard, Bing, and Claude may face pressure to offer similar comprehensive guidance to educators. Failing to do so could put them at a disadvantage in the increasingly competitive AI education market.
Meta announced 2 new AI updates: DINOv2, FACET (FAirness in Computer Vision Evaluation)
Meta has announced the commercial relicensing and expansion of DINOv2, a computer vision model, under the Apache 2.0 license to give developers and researchers more flexibility for downstream tasks.
Meta also introduces FACET (FAirness in Computer Vision Evaluation), a benchmark for evaluating the fairness of computer vision models in tasks such as classification and segmentation. The dataset includes 32,000 images of 50,000 people, with demographic attributes such as perceived gender age group, and physical features.
Why does this matter?
FACET ensures more equitable experiences when interacting with computer vision technology, reducing the risk of bias based on demographics. On the other hand, DINOv2’s availability under the Apache 2.0 license as it empowers developers and researchers to create more versatile computer vision applications.
The Graph of Thoughts (GoT) framework improves the capabilities of LLMs by modeling information as a graph. LLM thoughts are represented as vertices, and edges represent dependencies between these thoughts. GoT allows for combining thoughts, distilling networks of thoughts, and enhancing thoughts using feedback loops.
It outperforms other paradigms like Chain-of-Thought or Tree of Thoughts (ToT) in various tasks, increasing sorting quality by 62% and reducing costs by over 31%. It is also extensible, allowing for new thought transformations and advancing prompting schemes.
This advancement brings LLM reasoning closer to human thinking and brain mechanisms such as recurrence, both of which form complex networks. It makes AI models more versatile and adaptable, with implications on various domains.
Google announced a slew of massive AI updates at the Google Cloud Next 2023 event. Here are some key announcements:
Vertex AI extends enterprise-ready generative AI development with new models and tooling. Google Cloud gets a curated collection of models across first-party, open-source, and third-party models, including Meta’s Llama 2 and Code Llama, Falcon, Anthropic’s Claude 2, and more. Google’s foundation models– PaLM, Codey, and Imagen– also get several updates.
Powered by DeepMind, a new tool called SynthID helps watermark and identify synthetic images created by Imagen.
Google is expanding its AI-optimized infrastructure with the general availability of Cloud TPU v5e and Nvidia-powered A3 VMs.
Duet AI in Workspace (aiding tasks across meetings, documents, Google Chat, Gmail, and more) is now generally available, and Duet AI in Google Cloud (to assist in code refactoring, improving, etc.) is expanding its preview and will be generally available later this year
.
Duet AI in Google Cloud also includes advancements for software development, application infrastructure and operations, data analytics, accelerating and modernizing databases, and security operations.
Search Generative Experience (SGE) launches in the first countries outside the U.S. — India and Japan (with multilingual and local language support).
Why does it matter?
The advancements seem to offer a complete solution for AI, from computing infrastructure to end-to-end software and services that support the full lifecycle of model training, tuning, and serving at global scale. It will help organizations harness the full potential of AI with data and cloud through a unified foundation.
Introducing Zapier AI Chatbot: Create custom AI chatbots with no code
Now you can build your own AI-powered chatbot through Zapier Interfaces, its no-code, automation-powered app builder currently in beta. You also have a variety of sharing options, so you can embed chatbots on your website or limit access to your team or external stakeholders.
The base AI Chatbot model is GPT-3.5. With Interfaces Premium, you can connect to other models (like GPT-4) using an API key from your personal OpenAI account.
Why does this matter?
This makes it easier for businesses and individuals to create custom AI chatbots, no coding required. It democratizes AI chatbot development, potentially increasing their accessibility across various industries/applications and fostering innovation in AI.
Meta researchers find AI “Déjà Vu”ing: Suggested ways to address the privacy risks; Meta’s ImageBind: The ultimate fusion of 6 data types in 1 AI model; Meta’s Sandbox: Where AI meets advertising; Meta bets big on AI with custom chips & a supercomputer; Meta scaling Speech Technology to 1,100+ languages; Meta’s MusicGen: The LLaMA moment for music AI; Meta disclosed AI behind Facebook and Instagram recommendations; Meta merges ChatGPT & Midjourney into one; Meta unveils Llama 2, a worthy rival to ChatGPT; Meta-Transformer lets AI models process 12 modalities; Meta collabs with Qualcomm to enable on-device AI apps using Llama 2; Meta’s AudioCraft is AudioGen + MusicGen + EnCodec; Meta challenges OpenAI with code-gen free software; Meta’s SeamlessM4T: The first all-in-one, multilingual multimodal AI; Meta to rival GPT-4 with a free Llama 3?
Meta researchers find AI “Déjà Vu”ing: Suggested ways to address the privacy risks
Researchers at Meta recently discovered an anomaly common across most Self Supervised Learning (SSL) algorithms and call it Déjà Vu. They said SSL models can unintendedly memorize specific parts in individual training samples rather than learning semantically meaningful associations.
The report shares the details of studies around this unintended memorization and also explores ways of avoiding it.
Meta’s ImageBind: The ultimate fusion of 6 data types in 1 AI model
Meta has announced the new open-source AI model called ‘ImageBind’ that links together multiple data streams- text, audio, visual data, temperature, and movement readings. ImageBind is the first to combine 6 data types into a single embedding space.
The company also notes that other streams of sensory input could be added to future models, including touch, speech, smell, and brain fMRI signals.
Meta’s Sandbox: Where AI meets advertising
Meta has introduced an AI Sandbox for advertisers, which includes features such as alternative copy generation, background creation through text prompts, and image cropping for Facebook or Instagram ads. This new tool aims to assist advertisers in creating more diverse and engaging content using AI.
The tools are still in beta, but they have the potential to revolutionize how ads are created and delivered.
Meta bets big on AI with custom chips & a supercomputer
Meta is making a big bet on AI by developing custom chips and a supercomputer. The company is developing its own chips called the Meta Training and Inference Accelerator (MTIA), which will be optimized for AI workloads and allow for more efficient training and running of complex models.
In addition, Meta is building a supercomputer, which will be used to train large-scale AI models for natural language processing and computer vision. These investments aim to enable the development of more advanced products and services, such as virtual assistants and augmented reality applications.
Meta scaling Speech Technology to 1,100+ languages
Meta’s Massively Multilingual Speech (MMS) project aims to address the lack of speech recognition models for most of the world’s languages, introduced Introducing speech-to-text, text-to-speech. Combining self-supervised learning techniques with a new dataset containing labeled data for over 1,100 languages and unlabeled data for nearly 4,000 languages.
The MMS models outperform existing ones and cover 10 times as many languages. The project’s goal is to increase accessibility to information for people who rely on voice as their primary means of accessing information. The models and code are publicly available for further research and development. The project aims to contribute to the preservation of the world’s diverse languages.
Meta’s AI Segmentation Game Changer
Meta’s researchers have developed HQ-SAM (High-Quality Segment Anything Model), a new model that improves the segmentation capabilities of the existing SAM. SAM struggles to segment complex objects accurately, despite being trained with 1.1 billion masks. HQ-SAM is trained on a dataset of 44,000 fine-grained masks from various sources, achieving impressive results on nine segmentation datasets across different tasks.
HQ-SAM retains SAM’s prompt design, efficiency, and zero-shot generalizability while requiring minimal additional parameters and computation. Training HQ-SAM on the provided dataset takes only 4 hours on 8 GPUs.
Meta plans to put AI everywhere on its platforms
Meta has announced plans to integrate generative AI into its platforms, including Facebook, Instagram, WhatsApp, and Messenger. The company shared a sneak peek of AI tools it was building, including ChatGPT-like chatbots planned for Messenger and WhatsApp that could converse using different personas. It will also leverage its image generation model to let users modify images and create stickers via text prompts.
META released MusicGen, a controllable music generation model for producing high-quality music. MusicGen can be prompted by both text and melody.
The best thing is anyone can try it for free now. It uses a single-stage transformer language model with efficient token interleaving patterns, eliminating the need for multiple models.
MusicGen will generate 12 seconds of audio based on the description provided. You can optionally provide a reference audio from which a broad melody will be extracted. Then the model will try to follow both the description and melody provided. You can also use your own GPU or a Google Colab by following the instructions on their repo.
Meta’s new human-like AI model for image creation
Meta has introduced a new model, Image Joint Embedding Predictive Architecture (I-JEPA), based on Meta’s Chief AI Scientist Yann LeCun’s vision to make AI systems learn and reason like animals and humans. It is a self-supervised computer vision model that learns to understand the world by predicting it.
The core idea: It learns by creating an internal model of the outside world and comparing abstract representations of images. It uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models.
Captures patterns and structures through self-supervised learning from unlabeled data.
Predicts missing information at a high level of abstraction, avoiding generative model limitations
Delivers strong performance on multiple computer vision tasks while also being computationally efficient. Less data, less time, and less compute.
Can be used for many different applications without needing extensive fine-tuning and is highly scalable.
Meta’s all-in-one generative speech AI model
Meta introduces Voicebox, the first generative AI model that can perform various speech-generation tasks it was not specifically trained to accomplish with SoTA performance. It can perform:
Text-to-speech synthesis in 6 languages
Noise removal
Content editing
Cross-lingual style transfer
Diverse sample generation
One of the main limitations of existing speech synthesizers is that they can only be trained on data that has been prepared expressly for that task. Voicebox is built upon the Flow Matching model, which is Meta’s latest advancement on non-autoregressive generative models that can learn highly non-deterministic mapping between text and speech.
Meta disclosed AI behind Facebook and Instagram recommendations
Meta is sharing 22 system cards that explain how AI-powered recommender systems work across Facebook and Instagram. These cards contain information and actionable insights everyone can use to understand and customize their specific AI-powered experiences in Meta’s products.
Moreover, Meta also shared its top ten most important prediction models rather than everything in the system to not dive into much technical detail can sometimes obfuscate transparency.
Using an input audio sample of just two seconds in length, Voicebox can match the sample’s audio style and use it for text-to-speech generation.
Meta plans to dethrone OpenAI and Google
Meta plans to release a commercial AI model to compete with OpenAI, Microsoft, and Google. The model will generate language, code, and images. It might be an updated version of Meta’s LLaMA, which is currently only available under a research license.
Meta’s CEO, Mark Zuckerberg, has expressed the company’s intention to use the model for its own services and make it available to external parties. Safety is a significant focus. The new model will be open source, but Meta may reserve the right to license it commercially and provide additional services for fine-tuning with proprietary data.
Tesla is launching its highly-anticipated supercomputer today. The machine, employing 10,000 Nvidia H100 compute GPUs, will be used for various AI applications. It is said to be one of the most powerful machines in the world.
But NVIDIA is struggling to keep up with the GPU demand. Thus, Tesla is investing over $1B to develop its own supercomputer, Dojo, built on the company’s hyper-optimized custom-designed chip. Tesla is also activating Dojo simultaneously. Take a look at Tesla’s internal forecast for the compute power of Dojo.
Why does it matter?
Elon Musk recently revealed that Tesla plans to spend over $2B on AI training in 2023 and is hiring reputed AI engineers. But this move gives Tesla unparalleled compute power. It also underscores Tesla’s commitment to overcoming computational bottlenecks in AI and should provide substantial advantages over its rivals. Elon might be the next big thing in AI. What do you think?
OpenAI launches ChatGPT Enterprise, the most powerful ChatGPT version yet
Open has launched ChatGPT Enterprise, the most powerful version of ChatGPT yet. It offers enterprise-grade security and privacy, features for large-scale deployments, unlimited higher-speed GPT-4 access, 32K context for faster processing of longer inputs, advanced data analysis capabilities, customization options, and much more. OpenAI is also working on more features and will launch them soon.
Why does it matter?
This is a simple and safe way of deploying ChatGPT into core operations at organizations. It could be a solution for big companies that have banned ChatGPT at work over privacy concerns, like Apple, Amazon, Citigroup, and more. Maybe, this can pave the way for truly widespread adoption of AI in the business world.
Usage of ChatGPT among Americans rises, but only slightly
A recent survey conducted in July by Pew Research Center reveals 18% of U.S. adults have ever used ChatGPT. While 16% of those who have heard of the tool and are employed say they have used it for tasks at work.
The statistic is consistent with a similar survey conducted in March by the Pew Research Center that showed 14% of U.S. adults had tried ChatGPT. And about one in ten working adults who had heard of ChatGPT used it at work.
While this shows increased adoption of ChatGPT among Americans, it is not a significant one in the grand scheme of AI adoption today. In fact, only a few think it will have a major impact on their job.
Why does this matter?
These findings suggest AI’s penetration remains gradual. It is also clear that there is still work to be done in educating and acclimating the workforce to the benefits and implications of generative AI. Plus, given the lingering concerns and uncertainties about ChatGPT’s prowess, maybe it is too early to start worrying about AI replacing jobs.
What Else Is Happening in AI
Microsoft infuses AI with human-like reasoning via an “Algorithm of Thoughts”.
DoorDash launches AI-powered voice ordering to answer calls and curate recommendations.
Uber is working on an AI chatbot for its food delivery app.
Yahoo Mail introduces new AI-powered capabilities, including a ‘Shopping Saver’ tool.
Generative inbreeding, akin to inbreeding in genetics, is a concern as AI systems training on AI-generated content can degrade their performance and distort human culture.
Tesla’s $300M AI cluster is going live today – Tesla is launching its highly-anticipated supercomputer today. The machine, employing 10,000 Nvidia H100 compute GPUs, will be used for various AI applications. – But NVIDIA is struggling to keep up with the GPU demand. Thus, Tesla is investing over $1B to develop its own supercomputer, Dojo, built on the company’s hyper-optimized custom-designed chip. Tesla is also activating Dojo simultaneously.
OpenAI launches ChatGPT Enterprise, the most powerful ChatGPT version yet – It offers enterprise-grade security and privacy, features for large-scale deployments, unlimited higher-speed GPT-4 access, 32K context for faster processing of longer inputs, advanced data analysis capabilities, customization options, and much more. OpenAI is also working on more features and will launch them soon.
Usage of ChatGPT among Americans rises, but only slightly – A recent survey conducted in July by Pew Research Center reveals 18% of U.S. adults have ever used ChatGPT. While 16% of those who have heard of the tool and are employed say they have used it for tasks at work. The statistic is consistent with a similar survey conducted in March by the center. – While it shows increased adoption of ChatGPT among Americans, it is not a significant one in the grand scheme of AI adoption today. In fact, only a few think it will have a major impact on their job.
Microsoft infuses AI with human-like reasoning via an “Algorithm of Thoughts” – The technique guides the language model through a more streamlined problem-solving path. It utilizes in-context learning, enabling the model to explore different solutions in an organized manner systematically. The result? Faster, less resource-intensive problem-solving.
DoorDash launches AI-powered voice ordering service – It will answer calls and provide customers with curated recommendations.
Uber is working on an AI chatbot for its food delivery app – It will offer recommendations to food-delivery customers and help them more quickly place orders.
Yahoo Mail introduces new AI-powered capabilities – The rollout includes upgrades to several of Yahoo Mail’s existing AI features and introduces a new Shopping Saver tool.
Poe by Quora lets you use all the AI chatbots in one place – Its goal is to be the web browser for accessing AI chatbots, and it just got a bunch of updates.
IBM’s new analog AI chip challenges Nvidia
IBM has developed an analog AI chip that’s up to 14 times more energy-efficient than current digital chips, addressing the power-hungry nature of generative AI.
The analog chip’s ability to manipulate analog signals and its human brain-like operation could potentially challenge Nvidia’s dominance in AI hardware.
IBM’s prototype chip demonstrated significant energy efficiency gains, encoding millions of memory devices and modeling parameters while performing computations directly within memory.
AI’s promise and peril in cancer research
UK-based biotech startup Etcembly used generative AI to develop a novel immunotherapy targeting hard-to-treat cancers, demonstrating AI’s potential for medical advancements.
However, risks of AI in healthcare are evident, as a study reveals that AI-generated cancer treatment plans, like those from ChatGPT, contained factual errors and contradictory information.
While AI-powered tools hold promise, their clinical deployment without rigorous validation could lead to dangerous missteps, highlighting the importance of skepticism and human consultation.
Linkedin: Building soft (human) skills remains key in the age of AI
Summary: A new LinkedIn report reveals that AI skills are spreading quickly globally, with major growth in AI job postings and professionals adding AI abilities.
Job postings mentioning AI skills like GPT and ChatGPT have risen dramatically, with a 21x increase since November 2022.
LinkedIn members adding AI skills to profiles is accelerating globally. The number of members with AI skills was 9x larger in June 2023 compared to January 2016.
Singapore, Finland, Ireland, India and Canada have the fastest AI skills adoption rates based on LinkedIn’s AI Skills Index.
47% of US executives believe using generative AI will boost productivity. 40% think it will help drive revenue growth.
84% of US members have jobs that could use AI to automate at least 25% of repetitive tasks. This will also increase demand for people skills.
In the US, the fastest-growing in-demand skills since November 2022 are: Flexibility +158%, Professional ethics +120%, Social perceptiveness +118%, Self-management +83%.
Communication remains the top skill in demand in US job postings, with people skills like flexibility growing the fastest since ChatGPT launched.
92% of executives agree people skills are more important than ever in an AI-driven world.
Why It Matters: AI is transforming and disrupting every industry for sure, but it will never disrupt humanity. Human skills (also called soft skills) like creativity and emotional intelligence will only become more important.
YouTube and Universal Music Partner to Launch ‘AI Incubator’
YouTube is partnering with Universal Music to launch an incubator focused on exploring the use of AI in music. The incubator will work with artists and musicians, including Anitta, ABBA’s Björn Ulvaeus, and Max Ricther, to gather insights on generative AI experiments and research. YouTube CEO Neal Mohan stated that the incubator will inform the company’s approach as it collaborates with innovative artists, songwriters, and producers.
YouTube also plans to invest in AI-powered technology, including enhancing its copyright management tool, Content ID, to protect viewers and creators.
Why does this matter?
By partnering with renowned artists, the AI incubator explores the potential of AI-generated music, spotlighting the intersection of technology and artistry. This collab not only underscores AI’s growing role in creative industries but also demonstrates how industry giants can collaborate to drive innovation and shape the future of music production.
In the ever-evolving landscape of artificial intelligence, Large Language models (LLMs) like GPT-3/GPT-4/Claude-2 and others have exhibited astonishing capabilities across various domains, from mathematical problem-solving to creative writing. However, there’s been a limitation in their approach – the left-to-right, token-by-token decision-making process, which doesn’t always align with complex problem-solving scenarios that demand strategic planning and exploration.
But what if we could enable these LLMs to think more strategically, explore multiple reasoning paths, and evaluate the quality of their thoughts in a deliberate manner? Some researchers have created a framework called “Tree of Thoughts” (ToT) which aims to fix this by enhancing the problem-solving prowess of large language models.
The Essence of ToT
At its core, ToT reimagines the reasoning process as an intricate tree structure. Each branch of this tree represents an intermediate “thought” or a coherent chunk of text that serves as a crucial step toward reaching a solution. Think of it as a roadmap where each stop is a meaningful milestone in the journey towards problem resolution. For instance, in mathematical problem-solving, these thoughts could correspond to equations or strategies.
But ToT doesn’t stop there. It actively encourages the LM to generate multiple possible thoughts at each juncture, rather than sticking to a single sequential thought generation process, as seen in traditional chain-of-thought prompting. This flexibility allows the model to explore diverse reasoning paths and consider various options simultaneously.
Source: Yao et el. (2023)
The Power of Self-Evaluation
One of ToT’s defining features is the model’s ability to evaluate its own thoughts. It’s like having an inbuilt compass to assess the validity or likelihood of success for each thought. This self-evaluation provides a heuristic, a kind of mental scorecard, to guide the LM through its decision-making process. It helps the model distinguish between promising paths and those that may lead to dead ends.
Systematic Exploration
ToT takes strategic thinking up a notch by employing classic search algorithms such as breadth-first search or depth-first search to systematically explore the tree of thoughts. These algorithms allow the model to look ahead, backtrack when necessary, and branch out to consider different possibilities. It’s akin to a chess player contemplating multiple moves ahead before making a move.
Customizable and Adaptable
One of ToT’s strengths is its modularity. Every component, from thought representation to generation, evaluation, and search algorithm, can be customized to fit the specific problem at hand. No additional model training is needed, making it highly adaptable to various tasks.
Real-World Applications
The true litmus test for any AI framework is its practical applications. ToT has been put to the test across different challenges, including the Game of 24, Creative Writing, and Mini Crosswords. In each case, ToT significantly boosted the problem-solving capabilities of LLMs over standard prompting methods. For instance, in the Game of 24, success rates soared from a mere 4% with chain-of-thought prompting to an impressive 74% with ToT.
Source: Yao et el. (2023)
The above image is a visual representation of the Game of 24 which is a mathematical reasoning challenge where the goal is to use 4 input numbers and arithmetic operations to reach the target number 24.
The tree of thought (ToT) approach represents this as a search over possible intermediate equation “thoughts” that progressively simplify towards the final solution.
First, the language model proposes candidate thoughts that manipulate the inputs (e.g. (10 – 4)).
Next, it evaluates the promise of reaching 24 from each partial equation by estimating how close the current result is. Thoughts evaluated as impossible are pruned.
The process repeats, generating new thoughts conditioned on the remaining options, evaluating them, and pruning. This iterative search through the space of possible equations allows systematic reasoning.
For example, the model might first try (10 – 4), then build on this by proposing (6 x 13 – 9) which gets closer to 24. After several rounds of generation and evaluation, it finally produces a complete solution path like: (10 – 4) x (13 – 9) = 24.
By deliberating over multiple possible chains of reasoning, ToT allows more structured problem solving compared to solely prompting for the end solution.
Trained AI algorithms work by taking the input and providing the output without explaining its inner workings. XAI aims at pointing out the rationale behind any decision by AI in such a way that humans can interpret it.
Deep learning works with neural networks just like the human brain works with neurons, where it uses a massive amount of training data to learn and identify patterns. It would be very difficult, or rather impossible, to dig into the rationale behind Deep Learning’s decision. Decisions like credit card eligibility or loan sanction are quite important to be explained by XAI. However, a few wrong decisions would not impact much. Whereas, in the case of healthcare, as discussed earlier, a doctor could not provide the appropriate treatment without knowing the rationale behind AI’s decision. Surgery on the wrong organ could be fatal.
4 Principles of Explainable AI
The US National Institute of Standards and Technology has developed four principles as guidelines to adopt fundamental properties of Explainable Artificial Intelligence (XAI) efficiently and effectively. These principles apply individually and independently from each other and guide us to better understand the working of the AI models.
1. Explanation:
This principle obligates the AI to generate a comprehensive explanation for humans to understand the process of generating the decisions with the required evidence and reasons. The standard for this evidence and reasons is governed by the next three principles.
2. Meaningful:
This principle is satisfied when a stakeholder understands the explanation provided in the first guiding principle. The explanation should not be complex and understood by the users on a group as well as individual level.
3. Explanation Accuracy:
The accuracy at which the AI explains the complicated process of generating the output is critical. Accuracy metrics may differ for individual stakeholders in terms of their explanation. The expected accuracy is 100% for all the stakeholders to understand the logic.
4. Knowledge Limits:
The last principle of XAI explains that the model can only be operated under the special conditions it has been modeled for. It is expected to operate under its limited knowledge to avoid any sort of discrepancy or unjustified business outcomes.
How does XAI work?
These principles help us define the expected output from the XAI model and how an ideal XAI model should be. However, it doesn’t indicate how the output has been achieved. Subdividing the XAI into three categories to better understand the rationale:
1. Explainable data: What data is used to train the model? Why the particular data is selected? How much biased is the data?
2. Explainable predictions: What features did the model use that lead to the particular output?
3. Explainable algorithms: How is the model layered? How do these layers lead to the prediction?
Based on individual instances, the explainability may change. For example, the neural network can only be explained using the Expainable Data category. Research is ongoing that is focused on finding ways to explain the predictions and algorithms. At present there are two approaches:
a. Proxy Modeling:
A different model from the original is used to approximate the actual model. This may result in different outcomes from the true model outcomes, as it is just an approximation.
b. Design for Interpretability:
The actual model is designed in such a way that it is easy to understand its working. However, this increases the risk of reduced predictive power and overall accuracy of the model.
The XAI is referred to as the White Box, as it explains the rationale behind its working. However, unlike the black box, its accuracy may decrease in order to provide an explainable reason for its outcome. Decision trees, Bayesian networks, sparse linear models, and many more are used as explainable techniques. Hopefully, with the advancements in the field, new studies will come up to increase the accuracy of the explanations.
Critical Industries for XAI
XAI would be helpful in those industries where machines play a key part in decision-making. These use cases might also be useful in your industry, as the details may vary, but the core principles remain the same.
1. Healthcare in XAI
As discussed earlier, the decisions made by AI in healthcare impact humans in a very critical way. A machine with XAI would help the healthcare staff save a lot of time, which they might use to focus on treating and attending to more patients. For example, diagnosing a cancerous area and explaining the reason in a matter of time helps the doctor to provide appropriate treatment.
2. Manufacturing in XAI
In the manufacturing industry, fixing or repairing equipment often depends on personnel expertise, which may vary. To ensure a consistent repair process, XAI can help provide ways to repair a machine type with an explanation, record the feedback from the worker, and continuously learn to find the best process to be followed. The workers need to trust the decision made by the machine in order to risk working on the equipment repair, which is the reason XAI becomes useful.
3. Autonomous vehicles in XAI
A self-driving car seems great until and unless it has made a bad decision, which can be deadly. If an autonomous car faces an inevitable accident scenario, the decision it makes impacts greatly on its future use, whether it saves the driver or the pedestrians. Providing the rationale for each decision an autonomous car takes, helps to improve people’s security on the road.
Strategize Your Social Media Campaigns with ChatGPT
Try the propmpt below:
You are a social media strategist. I am launching a crowdfunding campaign for an innovative portable solar charger and need to create a buzz on social media. I need a comprehensive social media strategy that covers platform selection, content ideas, posting frequency, engagement tactics, and analytics tracking. Please provide suggestions considering the latest trends in social media marketing and the behavior of tech-savvy, environmentally-conscious consumers.
YouTube will pay artists and rights holders for AI-generated music used on the platform. This aims to balance creative innovation and fair compensation.
Unraveling August 2023: Spotlight on Generative AI, Tech, Sports and the Month’s Hottest Trends.
Welcome to the hub of the most intriguing and newsworthy trends of August 2023! In this era of rapid development, we know it’s hard to keep up with the ever-changing world of ai, technology, sports, entertainment, and global events. That’s why we’ve curated this one-stop blog post to provide a comprehensive overview of what’s making headlines and shaping conversations. From the mind-bending advancements in artificial intelligence to captivating news from the world of sports and entertainment, we’ll guide you through the highlights of the month. So sit back, get comfortable, and join us as we dive into the core of August 2023!
OpenCopilot allows you to have your own product’s AI copilot. With a few simple steps, it takes less than 5 minutes to build.
It integrates with your underlying APIs and can execute API calls whenever needed. It uses LLMs to determine if the user’s request requires calling an API endpoint. Then, it decides which endpoint to call and passes the appropriate payload based on the given API definition.
Why does this matter?
Shopify has an AI-powered sidekick, while Microsoft (Windows Copilot, Bing Copilot) and GitHub (GitHub Copilot) have copilots. The above innovation empowers every SaaS product to have its own AI copilots tailored for its unique products.
Google teaches LLMs to personalize
LLMs are already good at synthesizing text, but personalized text generation can unlock even more. New Google research has proposed an approach inspired by the practice of writing education for personalized text generation using LLMs. It has a multistage and multitask framework consisting of multiple stages: retrieval, ranking, summarization, synthesis, and generation.
In addition, they introduce a multitask setting that further helps the model improve its generation ability, which is inspired by the observation that a student’s reading proficiency and writing ability are often correlated. When evaluated on three public datasets, each covering a different and representative domain, the results showed significant improvements over various baselines.
Why does this matter?
Customizing style is essential for many domains like personal communication, dialogue, marketing copies, stories, etc., which is hard to do via pure prompt engineering or custom instructions. The research attempts to address this and highlights how we can take inspiration from how humans achieve tasks to apply it to LLMs.
Local Llama
For businesses, local LLMs offer competitive performance, cost reduction, dependability, and flexibility. This article by ScaleDown provides practical guidance on setting up and running LLMs locally using a user-friendly project.
Moreover, Llama-2 and its variants are the go-to models, and the community continually refines them. The article highlights some things to note when running Llama models locally, including memory and model loader challenges.
Why does this matter?
This helps make AI accessible to individuals and businesses while avoiding limitations and high expenses associated with commercial APIs. Locally deploying LLM also helps businesses have more over the model, customize it, integrate with existing systems, and enable full utilization of its capabilities.
AI creates lifelike 3D experiences from your phone video
Luma AI has introduced Flythroughs, an app that allows one-touch generation of photorealistic, cinematic 3D videos that look like professional drone captures. Record like you’re showing the place to a friend, and hit Generate– all on your iPhone. No need for drones, lidar, expensive real estate cameras, and a crew.
Flythroughs is built on Luma’s breakthrough NeRF and 3D generative AI and a brand new path generation model that automatically creates smooth dramatic camera moves.
Why does this matter?
This marks a significant leap in democratizing 3D content creation with AI and making it cost-efficient. It opens up new possibilities for storytelling and crafting stunning digital experiences for users across various industries.
Genetic Algorithm Optimized Neural Network Model for Malicious URL Detection
URL Genie is a web application implementing a Multilayer Perceptron Neural Network optimized using genetic algorithms. Detect whether a domain name or URL is malicious by inputting a URL.
– Boosted.ai – AI stock screening, portfolio management, risk management
– JENOVA – AI stock valuation model that uses fundamental analysis to calculate intrinsic value
– Danielfin – Rates stocks and ETFs with an easy-to-understand global AI Score
– Comparables.ai – AI designed to find comparables for market analysis quickly and intelligently
Daily AI Update News from OpenCopilot, Google, Luma AI, AI2, and more
AI Copilot for your own SaaS product – OpenCopilot allows you to have your own product’s AI copilot. It integrates with your underlying APIs and can execute API calls whenever needed. It uses LLMs to determine if the user’s request requires calling an API endpoint. Then, it decides which endpoint to call and passes the appropriate payload based on the given API definition.
Teach LLMs to Personalize – New Google research has proposed a general approach for personalized text generation using LLMs, inspired by the practice of writing education. Upon evaluation, the results showed significant improvements over a variety of baselines.
Introducing Flythroughs, an app that creates lifelike 3D experiences from your phone video – It allows one-touch generation of photorealistic, cinematic videos that look like professional drone captures. No need for drones, lidar, expensive real estate cameras, and a crew. Record like you’re showing the place to a friend, and hit Generate; all on your iPhone.
Big brands are increasingly using AI-generated ads, including Nestlé and Mondelez – More brands see generative AI as a means to make creating an ad less painful and costly. However, there are concerns over whether to let people know it’s AI-generated, whether AI ads can receive copyright protections, and security risks when using AI.
AI2 drops the biggest open dataset yet for training language models – Language models like GPT-4 and Claude are powerful and useful. Still, the data on which they are trained is a closely guarded secret. The AI2’s (Allen Institute for AI) new, huge text dataset, Dolma, is free to use and open to inspection.
Ex-Machine Zone CEO launches BeFake, an AI-based social media app – Alias Technologies has introduced BeFake, a social media app for digital self-expression. Now available on both the App Store and Google Play, it aims to offer a refreshing alternative to the conventional reality portrayed on existing social media platforms.
Some of the world’s biggest advertisers, from food giant Nestle to consumer goods multinational Unilever, are experimenting with using generative AI software like ChatGPT and DALL-E to cut costs and increase productivity.
The New York Times may sue OpenAI over its AI chatbot ChatGPT, which uses the newspaper’s stories to generate text. The paper is unhappy that OpenAI is not paying for the use of its content and is also worried that ChatGPT could reduce its online traffic by providing answers based on its reporting.
Mantella allows you to have natural conversations with NPCs in Skyrim using your voice by leveraging Whisper for speech-to-text, ChatGPT for text generation, and xVASynth for text-to-speech. NPCs also have memories of your previous conversations and have awareness of in-game events.
British Prime Minister Rishi Sunak is set to spend 100 million pounds ($130 million) to buy thousands of computer chips to power artificial intelligence amid a global shortage and race for computing power.
Imagine a world where you reside in a luxurious home, an architectural marvel adorned with every comfort and amenity that one could possibly fathom. But it doesn’t stop there; your creativity knows no bounds, and you envision entire universes with their own laws of physics, teeming with diverse civilizations.
As you journey through life, your passions take an intriguing turn, guiding you towards the realm of digital marketing.
Yet, amid this shift in interests, a captivating question continues to linger in your mind like an enigmatic riddle: “If I possessed the power to design anything in the world, what wondrous creation would spring forth from my imagination?”
As your knowledge expands and your expertise in digital marketing deepens, you become acquainted with the remarkable world of graphic design software. Herein lies the key to unlock the gateway to your wildest ideas and aspirations.
With the vast array of possibilities that graphic design software offers, you come to realize that you can bring to life virtually anything your mind can conceive – and that realization holds true for anyone daring enough to venture into this realm.
While some graphic design software tools are tailored to cater to specialized fields, such as web design software that masters the dynamic nature of webpages or CAD software that focuses on technical drawings, at its core, graphic design software is an all-encompassing and versatile tool. It empowers individuals to transform their creative visions into tangible realities.
Within the confines of this article, we shall embark on a journey exploring the finest AI design software tools currently available. These cutting-edge tools are poised to revolutionize the design process and elevate your artistic capabilities to unprecedented heights.
By leveraging the power of artificial intelligence, these tools open up new horizons, enabling you to streamline and automate your design workflow like never before.
So, fasten your seatbelts and prepare to delve into the realm of limitless creativity. In the following sections, we shall uncover the potentials of AI-driven design software and how they stand as testaments to the boundless human imagination.
It’s time to manifest your artistic dreams into reality – let the voyage commence!
When it comes to harnessing the power of AI for creating mesmerizing visual graphics, few tools can rival the prowess of Adobe Photoshop CC. Renowned across the globe, this software stands as a beacon of creativity and innovation, empowering artists, designers, and digital enthusiasts to bring their imaginations to life in the most astonishing ways.
At the heart of Adobe Photoshop CC lies an impressive array of features that cater to every aspect of design. Whether you aim to craft captivating illustrations, design stunning artworks, or manipulate photographs with unprecedented precision, this software has got you covered.
With its user-friendly interface and intuitive controls, even those new to the world of digital design can quickly find themselves delving into the realm of endless possibilities.
One of the standout strengths of Photoshop lies in its ability to produce highly realistic and detailed images. From refining minute details in portraits to creating breathtaking landscapes, the software’s tools and filters enable artists to achieve a level of precision that defies belief.
The result is a visual masterpiece that captures the essence of the creator’s vision with unparalleled fidelity.
But Photoshop is not merely limited to polishing existing images; it opens the gates to boundless creativity by allowing users to remix and combine multiple images seamlessly. Whether it’s composing fantastical scenes or crafting surreal montages, the software’s blending capabilities grant designers the freedom to construct their own visual universes.
What truly sets Adobe Photoshop CC apart from the rest is its ingenious integration of artificial intelligence. The inclusion of AI-driven features elevates the design process to a whole new dimension.
Dull and lackluster photographs transform into jaw-dropping works of art with just a few clicks, as the software’s AI algorithms intelligently enhance colors, textures, and lighting, breathing life into every pixel.
Adobe’s suite of creative tools, including the likes of Adobe Illustrator and others, work in seamless harmony with Photoshop. This synergy empowers designers to amplify their creative potential even further.
Whether you’re crafting a logo, designing a website, or creating intricate vector graphics, the integration of these tools allows you to transcend the boundaries of imagination.
Planner 5D stands as an ingenious AI-powered solution, offering you the gateway to realize your long-cherished dream of a perfect home or office space. With its cutting-edge technology, this software empowers you to dive into the realm of architectural creativity and interior design like never before.
The first remarkable feature that sets Planner 5D apart is its AI-assisted design capabilities. Imagine describing your ideal home or office, and watch as the AI effortlessly translates your vision into a stunning 3D representation. From grand entrances to cozy corners, the AI understands your preferences, ensuring that every aspect of your dream space aligns with your desires.
Gone are the days of struggling with pen and paper to create floor plans. Planner 5D streamlines the process, enabling you to effortlessly design detailed and precise floor plans for your dream space.
Whether you seek an open-concept layout or a series of interconnected rooms, this software provides the tools to bring your architectural visions to life.
But that’s not all – Planner 5D goes above and beyond to cater to every facet of interior design. With an extensive library of furniture and home décor items at your disposal, you can furnish and decorate your space with ease.
From stylish sofas and elegant dining tables to enchanting wall art and lighting fixtures, the possibilities are limitless.
The user-friendly 2D/3D design tool within Planner 5D is a testament to the software’s commitment to simplicity and innovation. Whether you’re an aspiring designer or a seasoned professional, navigating through the interface is a breeze, allowing you to create the perfect space for yourself, your family, or your business with utmost ease and precision.
For those seeking a more hands-off approach, Planner 5D also offers the option to hire a professional designer through their platform. This feature is a boon for individuals who desire a polished and expertly curated space but prefer to leave the intricate details to the experts.
By collaborating with skilled designers, you can rest assured that your dream home or office will become a reality, tailored to your unique taste and requirements.
Uizard emerges as a game-changing tool that holds the power to transform the creative process for founders and designers alike. This innovative software enables you to breathe life into your ideas by swiftly converting your initial sketches into high-fidelity wireframes and stunning UI designs.
Gone are the days of spending endless hours painstakingly crafting wireframes and prototypes manually. With Uizard, the transformation from a low-fidelity sketch to a polished, high-fidelity wireframe or UI design can occur within mere minutes.
The speed and efficiency afforded by this cutting-edge technology empower you to focus on refining your concepts and iterating through ideas at an unprecedented pace.
Whether your vision encompasses web apps, websites, mobile apps, or any digital platform, Uizard stands as a reliable companion, streamlining the design process with its versatility. You no longer need to possess extensive design expertise, as the tool is intuitively designed to cater to users of all backgrounds and skill levels.
From tech-savvy founders to aspiring entrepreneurs, Uizard ensures that the creative journey remains accessible and enjoyable for everyone.
The user-friendly interface of Uizard opens up a realm of possibilities, allowing you to bring your vision to life with ease. Its intuitive controls and extensive feature set empower you to craft pixel-perfect designs that align with your unique style and brand identity.
Whether you’re a solo founder or part of a dynamic team, Uizard fosters seamless collaboration, enabling you to share and iterate on designs effortlessly.
One of the most significant advantages of Uizard lies in its ability to gather invaluable user feedback on your designs. By sharing your wireframes and UI designs with stakeholders, clients, or potential users, you can gain insights and refine your creations based on real-world perspectives.
This not only accelerates the decision-making process but also ensures that your final product resonates with your target audience.
Enter the extraordinary realm of 3D animation with Autodesk Maya, a software that transcends conventional boundaries to grant you the power to breathe life into expansive worlds and intricate characters. Whether you’re an aspiring animator, a seasoned professional, or a visionary storyteller, Maya provides the tools to transform your creative visions into stunning reality.
Imagination knows no bounds with Maya, as its powerful toolsets empower you to embark on a journey of endless possibilities. From the grandest of cinematic tales to the most whimsical of animated adventures, this software serves as your creative canvas, waiting for your artistic touch to shape it.
Complexity is no match for Maya’s prowess, as it deftly handles characters and environments of any intricacy. Whether you seek to create lifelike characters with nuanced emotions or craft breathtaking landscapes that transcend the boundaries of reality, Maya’s capabilities rise to the occasion, ensuring that your artistic endeavors know no limits.
Designed to cater to professionals across various industries, Maya stands as the perfect companion for crafting high-quality 3D animations for movies, games, and an array of other purposes. Its versatility makes it a go-to choice for animators, game developers, architects, and designers alike, unleashing the potential to tell stories and visualize concepts with stunning visual fidelity.
The heart of Maya lies in its engaging animation toolsets, each one carefully crafted to nurture the growth of your virtual world. From fluid character movements to dynamic environmental effects, Maya opens the doors to your creative sanctuary, enabling you to weave intricate tales that captivate audiences across the globe.
But the journey doesn’t end there – with Autodesk Maya, you are the architect of your digital destiny. As you explore the depths of this software, you discover its seamless integration with other creative tools, expanding your capabilities even further.
The synergy between Maya and its counterparts unlocks new avenues for innovation, granting you the freedom to experiment, iterate, and refine your creations with ease.
Aimed at architects, engineers, and a myriad of other professionals, this cutting-edge tool serves as a gateway to bring your imaginative designs to life with astonishing realism.
Architects find solace in Autodesk 3Ds Max as it empowers them to create stunningly realistic models of their architectural wonders. Engineers, too, discover the power of this software in crafting intricate and precise 3D models of their mechanical and industrial designs.
The software becomes a haven for creative professionals seeking to visualize and communicate their concepts with exceptional clarity and visual fidelity.
Beyond the realms of architecture and engineering, Autodesk 3Ds Max knows no bounds. Its versatility allows you to explore various dimensions of creativity, from crafting intricate product prototypes to fashioning enchanting animated characters.
Whatever your design aspirations may be, this software stands as a reliable companion, ensuring that your visions manifest into awe-inspiring digital realities.
In the fast-paced world of business and design, having a tool that caters to multiple purposes becomes invaluable. Autodesk 3Ds Max stands tall as a versatile and adaptable solution, making it a coveted asset for businesses and individuals alike.
Its potential to enhance the visual storytelling capabilities of any venture unlocks a new era of creativity and communication.
One of the most cherished qualities of Autodesk 3Ds Max lies in its user-friendly interface, fostering a seamless and intuitive design process. With this tool at your disposal, iteration becomes a breeze, allowing you to refine your designs effortlessly.
Each new iteration becomes a steppingstone towards perfection, ensuring that your final creation exudes excellence.
With Foyr Neo at your disposal, you can witness the transformation of your design ideas into reality in as little as a fifth of the time it takes with other software tools.
Gone are the days of grappling with complex design interfaces and spending endless hours on a single project. Foyr Neo streamlines the journey from a floor plan to a finished render, presenting you with a user-friendly interface that simplifies every step of the design process.
With its intuitive controls and seamless functionality, the software becomes an extension of your creative vision, ensuring that your ideas manifest into remarkable designs with utmost ease.
To further elevate your experience, Foyr Neo provides a thriving community and comprehensive training resources. This collaborative ecosystem allows you to connect with fellow designers, share insights, and gain inspiration from the collective creative pool.
Additionally, the abundance of training materials and support ensures that you can unlock the full potential of the software, mastering its capabilities and expanding your design horizons.
Bid farewell to the hassle of juggling multiple tools to complete a single project – Foyr Neo serves as the all-in-one solution to cater to your design needs. By integrating various design functionalities within a single platform, the software streamlines your workflow, saving you precious time and effort.
This seamless experience fosters uninterrupted creativity, enabling you to focus on the art of design without the burden of managing disparate software tools.
With this cutting-edge software, you can witness a remarkable increase in image resolution of up to 16 times, all without sacrificing an ounce of quality.
Gone are the days of tedious manual editing, spending hours painstakingly enhancing your images pixel by pixel. Let’s Enhance simplifies the process, offering a swift and efficient solution to elevate your photos’ quality with ease.
Whether you’re a professional photographer seeking crisper images for print or a social media enthusiast aiming to enlarge your visuals, this software promises to deliver the perfect shot every time.
Let’s Enhance’s proficiency in improving image resolution, colors, and lighting automatically alleviates the burden of post-processing. By entrusting this task to the intelligent algorithms of the software, you gain more time to focus on the core aspects of your business or creative endeavors.
Embrace the art of delegation and allow Let’s Enhance to handle the technicalities while you concentrate on perfecting your artistic vision.
The applications of Let’s Enhance are vast and diverse. Photographers, designers, artists, and marketers alike can benefit from this powerful tool. Imagine effortlessly preparing your images for print, knowing they’ll boast impeccable clarity and sharpness.
Envision your social media posts grabbing attention with larger-than-life visuals, thanks to Let’s Enhance’s seamless enlargement capabilities.
But Let’s Enhance doesn’t stop at resolution enhancement. It also becomes a reliable ally in refining color palettes, breathing new life into dull or faded images, and balancing lighting for picture-perfect results.
Whether it’s subtle adjustments or dramatic transformations, the software empowers you to create visuals that captivate audiences and leave a lasting impression.
6 AI Text to Video compared (updated August 2023 ) Link
Runway Features
– Text-to-video feature – Automatic prompt suggestions – The option to upload an image for reference – Different previews to choose from before generating a video – Free plan to test the tool out
Pros
– Best of AI text-to-video research – Comprehensive set of tools for video editing – Available as both a desktop and mobile app
Cons
– Gen-2 has limitations in generating intricate details, like fingers – Gen-2 video generation is limited to 4 seconds per video – The tool does not offer text-to-speech capabilities
Synthesia AI Features
– 120+ voices and accents – 140+ diverse AI avatars – 60+ video templates designed by professional designers – The option to have a custom avatar created
In today’s world, messaging apps are becoming increasingly popular, with WhatsApp being one of the most widely used. With the help of artificial intelligence, chatbots have become an essential tool for businesses to improve their customer service experience. Chatbot integration with WhatsApp has become a necessity for businesses that want to provide a seamless and efficient customer experience. ChatGPT is one of the popular chatbots that can be integrated with WhatsApp for this purpose. In this blog post, we will discuss how to integrate ChatGPT with WhatsApp and how this chatbot integration with WhatsApp can benefit your business.
The site uses openAI to generate trivia on anything and everything you want ! You can then revisit trivia you or others have made and replay them at anytime.
Solo & real time multiplayer, daily challenge, infinite playability and is getting updates daily !
Current feature roadmap :
jeopardy mode ( multiple topics and large question count )
email / sms notifications for new daily challenges etc.
public lobbies / multiplayer against random players
IBM’s study indicates that 40% of the global workforce, or 1.4 billion people, will need to reskill in the next three years due to AI’s rise.
While AI technologies, such as generative models, might shift job responsibilities, 87% of surveyed executives believe AI will augment jobs rather than replace them.
The focus in job skills has shifted from technical STEM skills (most important in 2016) to people skills like team management and adaptability (most important in 2023).
Meta did it first… Generative AI for producers
Generative AI is revolutionizing this decade’s technology, breaking into the realm of creativity once reserved for humans. Jobs are shifting, with some roles being replaced and others benefiting from AI assistance.
Content creators, take note! Meta just revealed that platforms like Facebook and Instagram will employ AI to produce music. This means no more copyright issues or losing business. Simply choose a genre, provide a sample, and the AI crafts tailor-made music for your videos.
Facebook’s music library becomes obsolete as Meta leads the way, while YouTube and TikTok will likely follow suit. As a content creator, AI eliminates rights concerns. However, creators of original music may face challenges.
AI’s impact extends to various fields, affecting writers, musicians, artists, and photographers. While some might feel the pinch, the creative economy as a whole benefits, making custom content creation easier.
Imagine conceiving, designing, and animating with AI—a reality that even big players like Disney face. This emerging world is thrilling and transformative.
To prepare, embrace AI. Integrate it into your work wherever possible. If you want to stay ahead and not fall behind to AI, leverage its capabilities.
Trustworthy LLMs: A survey and guideline for evaluating LLMs’ alignment
Ensuring alignment, which refers to making models behave in accordance with human intentions, has become a critical task before deploying LLMs in real-world applications. This new research has proposed a more fine-grained taxonomy of LLM alignment requirements. It not only helps practitioners unpack and understand the dimensions of alignments but also provides actionable guidelines for data collection efforts to develop desirable alignment processes.
It also thoroughly surveys the categories of LLMs that are likely to be crucial to improve their trustworthiness and shows how to build evaluation datasets for alignment accordingly.
The tool curates high-quality data that leads to improved LLM downstream performance and will significantly benefit LLM developers attempting to build pretraining datasets.
Microsoft-DataBricks collab may hurt OpenAI
Microsoft is reportedly planning to sell a new version of Databricks software, It helps customers create AI applications for their businesses. This move could potentially harm OpenAI, as Databricks allows companies to develop AI models from scratch or repurpose open-source models instead of licensing OpenAI’s proprietary ones.
Microsoft has been aggressively investing in AI services and integrating AI functionality across its products. Neither Microsoft nor Databricks have commented on the report.
What else happened in AI this week of August 12-20?
Google appears to be readying new AI-powered tools for ChromeOS
Zoom rewrites policies to make clear user videos aren’t used to train AI
Anthropic raises $100M in funding from Korean telco giant SK Telecom
Modular, AI startup challenging Nvidia, discusses funding at $600M valuation
California turns to AI to spot wildfires, feeding on video from 1,000+ cameras
FEC to regulate AI deepfakes in political ads ahead of 2024 election
Google’s AI search offers AI-generated summaries, definitions, and coding improvements.
Google Photos introduce a new AI feature, ‘Memories view’!
Amazon using AI to enhance product reviews.
WhatsApp test beta upgrade with new feature ‘custom AI-generated stickers’.
Google is testing an AI assistant that will give you Life Advice.
Robomart adopts “store-hailing” for self-driving stores delivered to customers.
OpenAI acquires AI design studio Global Illumination to work on core products, ChatGPT
The Associated Press releases guidelines for Generative AI to its journalists
Consulting giant McKinsey unveils its own generative AI tool for employees: Lilli
Opera for iOS will now include Aria, its browser AI built in collaboration with OpenAI
UK is using AI road safety cameras to detect potential driver offenses in passing vehicles
Adobe Express with AI Firefly app, now out of beta, is available worldwide
Ex-Google Brain researchers have started an AI research company called Sakana AI in Tokyo.
Runway, a gen AI video startup, has launched a new ‘Watch’ feature.
Research shows AI bots beat CAPTCHA and humans.
ML startup Arthur launched an open-source tool to help find the best LLM.
Buildbox has launched a new tool called StoryGames.AI!
Latest Tech News and Trends on August 20th, 2023
Major concerns after Cruise robotaxi incidents
Following a recent collision between a Cruise robotaxi and a fire truck in San Francisco, the California DMV requested Cruise to halve its robotaxi fleet in the city.
The state agency is investigating “recent concerning incidents” with Cruise vehicles, emphasizing the need to ensure the safety of the public sharing the road with these autonomous vehicles.
This specific accident saw a Cruise Chevy Bolt EV hit by an emergency vehicle at an intersection, resulting in passenger injuries; it adds to a series of issues potentially affecting Cruise’s future operations.
As wildfires spread, Canadian leaders ask Meta to reverse its news ban
The Canadian government demands that Meta lift its ban on domestic news sharing, citing its impact on sharing information about wildfires.
Meta blocked news on Facebook and Instagram due to a new law requiring payment for news articles, but this move hampers access to crucial information.
Officials and citizens express concerns, urging Meta to reinstate news sharing for safety and emergency information during the wildfire crisis.
X to remove ‘block’ feature
Elon Musk suggests that Twitter’s block feature, except for direct messages, may be removed, causing concern among users.
Blocking is currently used to restrict interactions and visibility of accounts, while mute only hides posts; users value blocking for spam control and harassment prevention.
Musk’s statement prompts backlash and uncertainty about whether the feature will actually be removed.
Discover the OpenAI code interpreter, an AI tool that translates human language into code. Learn about its functions, benefits and drawbacks in this guide.
The basics of OpenAI code interpreter
OpenAI, a leading entity in the field of artificial intelligence, has developed OpenAI code interpreter, a specialized model trained on extensive data sets to process and generate programming code.
OpenAi code interpreter is a tool that attempts to bridge the gap between human language and computer code, offering myriad applications and benefits. It represents a significant step forward in AI capabilities. It is grounded in advanced machine learning techniques, combining the strengths of both unsupervised and supervised learning. The result is a model that can understand complex programming concepts, interpret various coding languages, and generate human-like responses that align with coding practices.
New Generations of People Are Becoming More and More Indistinguishable from AI
One of the most concerning aspects of this trend is the way that new generations are rewriting previous information. In the past, people would typically come up with their ideas and opinions. However, today, it is much more common for people to simply rewrite information that they have found online. This is a trend that is being exacerbated by the rise of large language models (LLMs), which can generate text that is nearly indistinguishable from human-written text. Article: new-generations-of-people-are-becoming-more-and-more-indistinguishable-from-ai/
Neolithics, an agritech company based in Israel, is using artificial intelligence and machine learning to reduce food waste and ensure food safety and quality through its optical sensing AI-powered solution known as Crystal.eye™. This technology, which can be mounted and configured in various ways, automates and upgrades quality control for fresh produce, in order to maximize utilization and reduce waste.
While the normal spectrum of visible light has 3 colors – red, green, and blue, Crystal.eye™ uses hyperspectral imaging, with over 400 spectra of light. This light can penetrate deep into a fruit or vegetable and allows the device to scan even inside the sample, eliminating the need to cut it open or grind it.
The images produce a unique fingerprint, which is then analyzed by Neolithics’ food scientists to identify various characteristics, such as firmness, moisture content, sugar content, acidity, and many more. The data is then fed to an AI machine learning engine, allowing the system to scan and analyze a large batch of samples in a matter of seconds.
The outcomes of the inspections are then instantly displayed on a digital dashboard and can be delivered as reports, tailored to each customer’s unique requirements. For example, french fry makers need to know how much dry matter is contained in the potatoes they process, while winemakers take into account the grapes’ acidity and sweetness to obtain the flavor profile they desire.
Using Crystal.eye™ allows growers and distributors to greatly expand their sampling, from the usual 1% to around 30% to 40%. This ensures greater accuracy and significantly reduces the chance of produce being discarded due to not meeting the customers’ requirements.
According to Wayne Nathanson, the company’s VP for Global Development, knowledge in food science is Neolithics’ main differentiator. While there are other companies that make the hardware to move around and sort fruits and vegetables, he says that usually these technologies work on exterior qualities, and aren’t able to analyze the produce’s interior. Most companies do not have a team of expert food scientists to fully harness the information gathered from the produce like Neolithics, he adds.
Currently, Crystal.eye™ can check the content or defects of produce, providing customers with various external or internal attributes. This solution has been launched and is being used by an increasing number of growers, distributors, and food processing companies. At the end of this year, Neolithics expects to update the technology with the capability to assess the produce’s maturity cycle, allowing customers to identify how long it will take before it spoils. The company is also working on being able to identify traces of pesticides and other banned chemicals on the produce, with release estimated for next year.
“Sustainability is very important to Neolithics, and our mission is to reduce food waste and improve food safety. Knowing how much food is wasted daily is a major motivator for making a difference. We want to eliminate food wastage across the supply chain, including removing the need to destroy the produce when it’s being inspected. We also want to get more edible quality produce to the consumer, by helping the various links of the supply chain distribute it better. There are 1.3 billion tons of wasted food annually, and there are roughly a billion people in the world experiencing hunger. We believe there’s an opportunity to feed more people with the food that is thrown out. This becomes more and more critical, the closer the world population gets to the 10 billion mark,” Nathanson says.
The new AI programming jobs that require only very basic programming skills
There has never been a more exciting and promising time to get into AI development. Forbes reports that job listings for ChatGPT-related positions increased 21 times since last November:
They need both prompt engineers and programmers. But because of Copilot and other advances in AI programming they are looking for people with some basic programming skills but who mainly excel in advanced critical analysis and reasoning skills.
They basically need people who know how to think so for people with IQs above 130, (in the genius range) this could be a dream career. But really it’s not so much about IQ as it is about the ability to think rather than just mostly learn and remember. In fact programming courses must already be teaching this brand new kind of prompt engineering and programming.
I imagine that computer programming instruction is going through very rapid evolution right now as teaching fundamental programming skills more and more gives way to teaching how to most quickly and intelligently prompt AIs to do whatever programming is needed.
If incumbent programming schools are not changing fast enough they risk losing a substantial market share to startups that begin teaching much more marketable skills.
Many businesses today want to start using AIs but they don’t know how to go about it. Computer programmers and prompt engineers who can explain all of this to them have a ready and rapidly growing job market.
Yeah there could never be a better time to get into computer programming!
The importance of making superintelligent small LLMs
Google’s Gemini will set a new standard in AI largely because of the massive data set that it is trained on.
If you’re not familiar with Gemini yet, watch this amazingly intelligent 8-minute YouTube video:
The next step would be for Google to train that stronger intelligence to shift from relying on data to relying on principles for its logic and reasoning.
Once AI’s intelligence is based on principles, subsequent iterations will no longer require massive data for their training.
That achievement will level the playing field so that Gemini is much sooner joined by competitive or stronger models.
Once that happens, everything will get very intelligent.
As Hollywood strikes, 96% of entertainment companies are boosting generative AI spend
As the Hollywood strike continues, 96% of entertainment companies are ramping up their investments in generative AI, revealing a shift in the industry’s approach to content creation and potential concerns for its workforce.
If you want to stay ahead of the curve in AI and tech, look here first.
The rise in AI spending amidst the Hollywood strike
The Hollywood writer’s strike underscores a shift in the entertainment industry’s investment strategy.
Lucidworks’ research, one of the largest of its kind, shows 96% of executives prioritize generative AI investments.
Countries like China, the UK, France, India, and the U.S. have companies heavily investing in this technology.
AI’s potential impact on Hollywood content creation
Generative AI can produce content, virtual environments, and images, posing a potential disruption to traditional methods.
Predictions suggest that by 2025, up to 90% of Hollywood content could be influenced by AI.
There’s a growing concern among Hollywood writers about the rapid integration of AI and its effect on their careers.
The future of the entertainment industry with generative AI
The emergence of synthetic actors could revolutionize the way movies and shows are produced.
AI-driven actors don’t strike, age, or demand pay raises, presenting potential benefits for studios but challenges for human actors.
Microsoft is reportedly planning to sell a new version of Databricks software, It helps customers create AI applications for their businesses. This move could potentially harm OpenAI, as Databricks allows companies to develop AI models from scratch or repurpose open-source models instead of licensing OpenAI’s proprietary ones.
Microsoft has been aggressively investing in AI services and integrating AI functionality across its products. Neither Microsoft nor Databricks have commented on the report.
Why does this matter?
Microsoft’s reported intention to introduce an AI-focused Databricks software version carries implications for OpenAI. This software empowers businesses to craft AI solutions without relying on OpenAI’s proprietary models, potentially impacting OpenAI’s market.
Meta AI’s new RoboAgent with 12 skills
Meta and CMU Robotics Institute’s New Robotics research: RoboAgent. It is a universal robotic agent that can efficiently learn and generalize a wide range of non-trivial manipulation skills. It can perform 12 skills across 38 tasks, including object manipulation and re-orientation, and adapt to unseen scenarios involving different objects and environments.
The development of the RoboAgent was made possible through a distributed robotics infrastructure, a unified framework for robot learning, and a high-quality dataset. The agent also utilizes a language-conditioned multi-task imitation learning framework to enhance its capabilities. Meta is open-sourcing RoboSet, a large, high-quality robotics dataset collected with commodity hardware, to support and accelerate open-source research in robot learning.
Why does this matter?
RoboAgent has the potential to accelerate automation, manufacturing, and daily tasks as the end users can enjoy more capable and helpful robots at home. Industries can streamline operations with efficient automation, technology could push AI and robotics boundaries, and innovation might surge across sectors.
Meta challenges OpenAI with code-gen free software
Meta is set to release Code Llama, an open-source code-generating AI model that competes with OpenAI’s Codex. The software builds on Meta’s Llama 2 model and allows developers to automatically generate programming code and develop AI assistants that suggest code.
Llama 2 disrupted the AI industry by enabling companies to create AI apps without relying on proprietary software from major players like OpenAI, Google, or Microsoft. Code Llama is expected to launch next week, further challenging the dominance of existing code-generating AI models in the market.
Why does this matter?
Meta’s Code Llama is set to rival OpenAI’s Codex; this open-source AI model is an update of Meta’s Llama 2. This tool challenges giants like OpenAI, Google, and Microsoft, giving developers more control and reducing dependence on their proprietary tools.
AP sets new AI guidelines for newsrooms
The Associated Press has established standards for the use of generative AI in its newsroom, emphasizing that AI is not a replacement for human journalists and cautioning against creating publishable content with AI-generated text or images.
AP journalists are directed to treat AI-generated content as “unvetted source material” and apply editorial judgment and sourcing standards before considering it for publication.
The organization warns about the potential for AI to spread misinformation and advises its journalists to exercise caution, skepticism, and verify sources when dealing with AI-generated content.
Latest Tech News and Trends on August 18th, 2023
Scientists are leaving X
A significant portion of scientific researchers using X have reduced their usage or left the platform altogether, with over 47% decreasing usage and nearly 7% quitting, according to a survey by Nature.
About 47% of polled researchers have turned to alternative platforms, with Mastodon being the most popular, followed by LinkedIn and Instagram.
The change in researcher behavior on X is attributed to the platform’s evolving dynamics, increased content prioritization, and limited accessibility of its API for researchers.
Amazon imposes fees on self-shipping sellers
Starting from October 1st, third-party merchants on Amazon who ship their own packages will be required to pay a 2% fee per product sold.
This new fee is in addition to other charges Amazon already receives from merchants, including selling plan costs and referral fees based on product categories.
The fee comes as Amazon’s marketplace is under scrutiny, with the FTC planning to file an antitrust lawsuit over allegations that Amazon rewards third-party merchants using its logistics services while penalizing those fulfilling their own orders.
NYC bans TikTok from government devices
New York City is banning TikTok from government devices within 30 days, with immediate prohibition on downloading and usage by employees.
The NYC Cyber Command cited TikTok as a security threat to the city’s technical networks, prompting the decision.
While some states have broadly banned TikTok, most have restricted its use on government-owned tech, amid ongoing debates about the app’s security risks.
Unraveling August 2023: August 17th, 2023
Latest AI News and Trends on August 17th, 2023
You can now write one sentence to train an entire ML model.
How does it work?
You just describe the ML model you want…a chain of AI systems will take that sentence…it generates a dataset based on that sentence…and it trains a model for you…in ten minutes 😳
What does that mean?
Custom models in AI just got a whole lot easier. You can go from an idea (“a model that writes Python functions”) to a fully trained custom Llama-2 model in minutes 😮
Why should I care?
If you aren’t thinking about the impact of change in your industry, start now. It’s not linear and continuous, it’s exponential with step functions. 3 out of 4 C-suite executives believe that if they don’t scale artificial intelligence in the next five years, they risk going out of business entirely.
What should I do about it?
Further proof that AI is changing our work processes rapidly. You need to build a team and org that’s first and foremost, ready for change. And if you haven’t started pulling together an AI working group to get cracking on your AI usage principles and first AI use case, do it.
GPT-4 Code Interpreter masters math with self-verification
OpenAI’s GPT-4 Code Interpreter has shown remarkable performance on challenging math datasets. This is largely attributed to its step-by-step code generation and dynamic solution refinement based on code execution outcomes.
Expanding on this understanding, new research has introduced the innovative explicit code-based self-verification (CSV) prompt, which leverages GPT4-Code’s advanced code generation mechanism. This prompt guides the model to verify the answer and then reevaluate its solution with code.
The approach achieves an impressive accuracy of 84.32% on the MATH dataset, significantly outperforming the base GPT4-Code and previous state-of-the-art methods.
Why does this matter?
The study provides the first systematic analysis of the role of code generation, execution, and self-debugging in mathematical problem-solving. This highlights the importance of code understanding and generation capabilities in LLMs. Plus, the ideas presented can help build high-quality datasets that could potentially help improve the mathematical capabilities in open-source LLMs like Llama-2.
Multi-level machine learning models for estimating the risk of delay between cancer diagnosis and treatment initiation in a large group of cancer patients.
Study significance
Machine learning models that incorporate multi-level data sources can effectively identify cancer patients who are at a greater risk of experiencing treatment delays of more than 60 days after their initial cancer diagnosis.
Although neighborhood-level social determinants of health are incorporated in the study model as contributing variables, no significant impact of these factors was observed on the model performance. Furthermore, the model exhibits lower predictive effectiveness in vulnerable populations.
Future studies should include a higher proportion of vulnerable populations and more relevant social variables to improve the model performance.
Journal reference:
Frosch Z. A. K., Hasler, J., Handorf, E., et al. (2023). Development of a Multilevel Model to Identify Patients at Risk for Delay in Starting Cancer Treatment. JAMA Network Open. doi:10.1001/jamanetworkopen.2023.28712, https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2808249
Discover the top AI jobs shaping the future. Navigate the booming AI industry with insights on rewarding careers.
1. AI product manager
An AI product manager is similar to other program managers. Both jobs require a team leader to develop and launch a product. In this case, it is an AI product, but it’s not much different from any other product in terms of leading teams, scheduling and meeting milestones.
AI product managers need to know what goes into making an AI application, including the hardware, programming languages, data sets and algorithms, so that they can make it available to their team. Creating an AI app is not the same as creating a web app. There are differences in the structure of the app and the development process.
2. AI research scientist
An AI research scientist is a computer scientist who studies and develops new AI algorithms and techniques. They develop and test new AI models, collaborate with other researchers, publish research papers and speak at conferences. So, programming is only a small portion of what a research scientist does.
The tech industry is extremely open to self-taught and non-formally trained programmers, but it makes an exception for AI research scientists. They need to have a strong understanding of computer science, mathematics and statistics. Typically, they need graduate degrees.
3. Big data engineer
AI works with large data sets and so does its precursor, big data. A big data engineer is similar to an AI engineer because they are responsible for designing, building, testing and maintaining complex data processing systems that work with large data sets. But, instead of working with GPT or LaMDA, they work with big data tools, like Hadoop, Hive, Spark and Kafka.
Like AI researchers, big data engineers often have advanced degrees in mathematics and statistics. These degrees are necessary for designing, maintaining and building data pipelines based on massive data sets.
Business intelligence (BI) is also a data-driven discipline that predates the modern AI rush. Like big data and AI, BI also relies on large data sets. BI developers use data analytics platforms, reporting tools and visualization techniques to turn raw data into meaningful insights to help organizations make informed decisions.
BI developers work with a variety of coding languages and tools from major vendors, including SQL, Python, Tableau from Salesforce and Power BI from Microsoft. They also need to have a strong understanding of business processes to help improve them through data insight.
5. Computer vision engineer
A computer vision engineer is a developer who specializes in writing programs that utilize visual input sensors, algorithms and systems. These systems see the world around them and act accordingly, such as self-driving and self-parking cars and facial recognition.
They use languages like C++ and Python, along with visual sensors, such as Mobileye from Intel. Examples of use cases include object detection, image segmentation, facial recognition, gesture recognition and scenery understanding.
6. Data scientist
A data scientist is a technology professional who collects, analyzes and interprets data to solve problems and drive decision-making within the organization. They are not necessarily programmers, although many do write their own applications. Mostly, they use data mining, big data and analytical tools.
Their use of business insights derived from data enables businesses to improve sales and operations; make better decisions; and develop new products, services and policies. They use predictive modeling to forecast future events, such as customer churn, and data visualization to display research results visually. Some also use machine learning to build models to automate these tasks.
7. Machine learning engineer
A machine learning engineer is responsible for developing and implementing machine learning training algorithms and models. Training is the demanding side of machine learning and is the most processor- and computation-intensive aspect of machine learning. Therefore, it requires the highest level of skill and training.
Because of the need for advanced math and statistics skills, most machine learning engineers have advanced degrees in computer science, math or statistics. They often continue training through certification programs or a master’s degree in machine learning, deep learning or neural networks.
8. Natural language processing engineer
A natural language processing (NLP) engineer is a computer scientist who specializes in the development of algorithms and systems that understand and process natural human language input.
One of the big differentiators between traditional search engines and generative AI interfaces, such as ChatGPT, is that search engines use keywords and gather information from large amounts of existing online data. Generative AI creates new content based on other examples and patterns, and it answers queries in a chat-type format.
Like machine learning engineers, NLP engineers are not necessarily programmers first. They need to understand linguistics as much as they need to understand programming. NLP projects require machine translation, text summarization, answering questions and understanding context.
9. Robotics engineer
A robotics engineer is a developer who designs, develops and tests software for running and operating robots. Robotics has advanced significantly in recent years, such as automated home cleaners and precision cancer surgery equipment. Robotics engineers may also use AI and machine learning to boost a robotic system’s performance.
As a result, robotics engineers are typically designing software that receives little to no human input but instead relies on sensory input. Therefore, a robotics engineer needs to debug the software and the hardware to make sure everything is functioning as it should.
Robotics engineers typically have degrees in engineering, such as electrical, electronic or mechanical engineering.
10. Software engineer
A software engineer can cover various activities in the software development chain, including design, development, testing and deployment. Engineering professionals are needed at all points of software development. The demands are so high that it’s rare to find someone well versed in all of them. Most engineers tend to specialize in one discipline.
We spoke with MIT CSAIL head Daniela Rus about the emerging technology of liquid networks and implications for robotics.
The initial research papers date back to 2018, but for most, the notion of liquid networks (or liquid neural networks) is a new one. It was “Liquid Time-constant Networks,” published at the tail end of 2020, that put the work on other researchers’ radar. In the intervening time, the paper’s authors have presented the work to a wider audience through a series of lectures.
Ramin Hasani’s TEDx talk at MIT is one of the best examples. Hasani is the Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at CSAIL MIT, and served as the paper’s lead author.
“These are neural networks that can stay adaptable, even after training,” Hasani says in the video, which appeared online in January. When you train these neural networks, they can still adapt themselves based on the incoming inputs that they receive.”
The “liquid” bit is a reference to the flexibility/adaptability. That’s a big piece of this. Another big difference is size. “Everyone talks about scaling up their network,” Hasani notes. “We want to scale down, to have fewer but richer nodes.” MIT says, for example, that a team was able to drive a car through a combination of a perception module and liquid neural networks comprised of a mere 19 nodes, down from “noisier” networks that can, say, have 100,000.
“A differential equation describes each node of that system,” the school explained last year. “With the closed-form solution, if you replace it inside this network, it would give you the exact behavior, as it’s a good approximation of the actual dynamics of the system. They can thus solve the problem with an even lower number of neurons, which means it would be faster and less computationally expensive.”
The concept first crossed my radar by way of its potential applications in the robotics world. In fact, robotics make a small cameo in that paper when discussing potential real-world use. “Accordingly,” it notes, “a natural application domain would be the control of robots in continuous-time observation and action spaces where causal structures such as LTCs [Liquid Time-Constant Networks] can help improve reasoning.”
AI reconstructs song from brain activity
Neuroscientists recorded electrical activity from areas of the brain (yellow and red dots) as patients listened to the Pink Floyd song “Another Brick in the Wall, Part 1.” Using AI software, they were able to reconstruct the song from the brain recordings. This is the first time a song has been reconstructed from intracranial electroencephalography recordings.
Why does this matter?
By capturing the musicality of speech through neural signals, this research presents an innovative application of AI that could redefine how we interact and communicate, particularly for those who struggle with traditional modes of communication.
Saudi Arabia and UAE join the race for scarce Nvidia chips
Saudi Arabia has purchased at least 3,000 of Nvidia’s H100 chips at $40,000 apiece, while UAE has ordered a fresh batch of semiconductors to power its LLM. This signals the Gulf states’ intention to become major players in AI by buying up thousands of Nvidia’s GPUs which are vital in powering the boom in generative AI that has swept markets this year.
Why does this matter?
This makes them the latest to join the ever-growing queue of buyers for Nvidia chips to power AI ambitions. But will Nvidia be able to produce enough GPUs to meet the massive demand? It was reported in June that Nvidia GPUs are already in short supply (and very expensive).
Snapchat’s AI chatbot creates unexpected chaos
Snapchat users reported an unexpected video posted on the My AI chatbot’s Story, which some interpreted as showing a corner between a ceiling and a wall.
The unexpected post led to concerns and fears among users, with some believing the AI feature had become sentient or evolved, prompting some to delete the app.
Snapchat described the event as a “temporary outage”, which has since been resolved, and the AI chat feature temporarily stopped responding during this period.
Exploring the Power of Mojo Programming Language
Mojo is a new programming language that combines the usability of Python with the performance of C. It is designed to be the perfect language for developing AI models and applications. Mojo is fast, efficient, easy to use, and open source. Mojo is based on the LLVM (Low Level Virtual Machine) compiler infrastructure, which is one of the most advanced compiler frameworks in the world right now. Mojo uses a new type of system that allows for better performance and error checking. Mojo has a built-in autotuning system that can automatically optimize your code for the specific hardware that you are using.
Genmo is an artificial intelligence-driven video generator that takes text beyond the two dimensions of a page. Algorithms from natural language processing, picture recognition, and machine learning are used to adapt written information into visual form. It can turn text, pictures, symbols, and emoji into moving images. Background colors, characters, music, and other elements are just some of how the videos can be personalized. The movie will include the text and any accompanying images that you provide. The videos can be shared on many online channels like YouTube, Facebook, and Twitter. Videos made by Genmo’s AI can be used for advertising, instruction, explanation, and more. It’s a fantastic resource for companies, groups, and people who must rapidly and cheaply make interesting movies.
D-ID is a video-making platform powered by artificial intelligence that makes producing professional-quality videos from text simple and quick. Using Stable Diffusion and GPT-3, the company’s Creative RealityTM Studio can effortlessly create videos in over a hundred languages. D-ID’s Live Portrait function makes short films out of still images, and the Speaking Portrait function gives a speech to written or spoken text. Its API has been refined with the help of tens of thousands of videos, allowing it to generate high-quality visuals. Digiday, SXSW, and TechCrunch have all recognized D-ID for their ability to help users create high-quality videos at a fraction of the expense of traditional approaches.
The LeiaPix Converter is a web-based, no-cost service that changes regular photographs into 3D Lightfield photographs. It employs AI to turn your images into lifelike, immersive 3D environments. Select the desired output format and upload your picture to LeiaPix Converter. The converted file can be exported in several forms, including the Leia Image Format, Side-by-Side 3D, Depth Map, and Lightfield Animation. The LeiaPix Converter’s output is great quality and straightforward to use. It’s a fantastic way to give your pictures a new feel and make unique visual compositions. It does a 3D Lightfield conversion from a 2D image. Leia Image Format, Side-by-Side 3D, Depth Map, and Lightfield Animation are only a few of the supported export formats that bring about excellent outcomes. Depending on the size of the image, the conversion procedure could take a while. The quality of your original photograph will affect the final conversion outcomes. Because the LeiaPix Converter is currently in beta, it may include problems or have functionality restrictions.
A new open-source framework called instaVerse makes building your dynamic 3D environments easy. The background can be generated in response to AI cues, and players can then create their avatars to explore it. The first step in making a world in InstaVerse is picking a premade layout. Forests, cities, and even spaceships are just some of the many premade options available. After selecting a starter document, an AI assistant will guide you through the customization process. A forest with towering trees and a flowing river are just one of the many landscapes instaVerse may create at your command. Characters can also be generated in your universe. Humans, animals, and even robots are all included in the instaVerse cast of characters. Once a character has been created, you can use the keyboard or mouse to direct its actions. While InstaVerse is still in its early stages, it shows great promise as a robust platform for developing interactive 3D content. It’s simple to pick up and use and lets you make your special universes.
Sketch is a web app for turning sketches into GIF animations. It’s a fun and easy method to make unique stickers and illustrations to share on social media or use in other projects. Using Sketch is as easy as posting your drawing online. Then, you may utilize the drawing tools to give your work some life with some animation. Objects can be repositioned, recolored, and given custom sound effects. You can save your finished animation as a GIF after you’re satisfied. Sketch is a fantastic program for both young and old. It’s a terrific opportunity to show off your imagination and get a feel for the basics of animation simultaneously. In terms of ease of use, Sketch is excellent. Sketch makes it easy to create beautiful animations, even if you have no prior experience with the medium. With Sketch’s many tools, you can design elaborate and intricate animations. You can save your finished animation as a GIF after you’re satisfied. After that, your animation is ready for sharing or further use.
NeROIC can reconstruct 3D models from photographs as an element of AI technology. NeROIC, created by a reputable tech company, has the potential to transform our perceptions and interactions with three-dimensional objects radically. NeROIC can create a 3D model of the user’s intended message using an approved image. The video-to-3D capabilities of NeROIC are comparable to its image-to-3D capability. This means a user can create an interactive 3D setting from a single video. Because of this, creating 3D scenes is faster and easier than ever.
The discipline of computer science concerned with creating 3D models from 2D photographs is advancing quickly. Deep learning-based techniques may be used to train point clouds and 3D meshes to depict real-world scenes better. A potential method, DPT Depth Estimation, employs a deep convolutional network to read depth data from a picture and generate a point cloud model of the 3D object. DPT Depth Estimation uses monocular photos to input a deep convolutional network pre-trained on data from various scenes and objects. Following data collection, the web will use the information to create a point cloud from which 3D models can be made. When compared to conventional techniques like stereo-matching and photometric stereo, DPT’s performance can surpass a human’s. Because of its fast inference time, DPT is a promising candidate for real-time 3D scene reconstruction.
RODIN is quickly becoming the go-to 2D-to-3D generator in artificial intelligence. The creation of 3D digital avatars is now drastically easier and faster than ever before, thanks to this breakthrough. Creating a convincing 3D character based on a person’s likeness has always been more difficult. RODIN is an artificial intelligence-driven technology that can generate convincing 3D avatars using private data such as a client’s photograph. Customers are immersed in the action by seeing these fabricated avatars in 360-degree views.
That part at least seems pretty clear beyond a shadow of a doubt: Generative Enhanced Multimodal Intelligent Network Interface.
The word “Gemini” comes from Latin and means “twins” in German. Some possible meanings in the context of Google’s AI system:
Gemini combines two components: Text and image processing. It is, in a sense, a “twin system.”
Gemini could refer to the „twins“ Sergey Brin and Larry Page, the founders of Google.
Astrology assigns communication strength and flexibility to the zodiac sign Gemini. Gemini as an AI assistant aims to adapt linguistically and situationally.
The name suggests a dual strength or ability. Gemini aims to unite Google’s text and image AI to outperform the competition.
As a twin system, Gemini combines different perspectives and approaches, similar to different human characters. So the name is both an allusion to the system’s integrative capabilities and a promising indication of Google’s ambitions with this AI product.
Why is Google superior?
To do that, you have to understand WHAT treasure trove of data Google is actually sitting on. Here are a few facts:
Google, through its various services such as Google Search, YouTube and others, has an enormous amount of data that is very useful for developing AI systems.
On YouTube alone, over 500 hours of video material are uploaded every day, according to Statista. The total video database is over 30 million hours of video. The subtitles and transcripts of these videos give Google a gigantic text dataset for training language models.
According to a report by ARK Invest, Google owns over 130 exabytes of data. For comparison, 1 exabyte is equal to 1 billion gigabytes. This means that the entire data set comprises more than 130,000,000,000,000,000 bytes of information.
Google Search accounts for a large part of this data. Google says it processes over 40,000 search queries per second. That’s over 3.5 trillion search queries per year. From these queries and the clicked results, Google gains further insights.
Overall, it shows that Google has virtually inexhaustible data resources for AI research. Both the breadth of different types of data and the sheer volume should give Google a significant edge in the AI field.
Google – The Research Giant
In 2020, Google published over 1300 artificial intelligence research papers, according to the Papers with Code database. In 2021, Google increased the number of publications significantly again to over 2000 papers on AI and machine learning.
Topics included:
Computer Vision (image recognition)
Natural Language Processing (NLP)
Speech Recognition
reinforcement learning
Robotics
Multimodal AI
Recommender Systems
Applications in medicine
With over 3300 AI publications in 2020 and 2021, Google has greatly expanded its research output in artificial intelligence. The company is one of the most active players in this research field. This intensive work over the past few years is now being incorporated into the development of Gemini.
According to the AI publication database Papers with Code, Google published more than 1,500 artificial intelligence research papers in 2022 alone. That’s far more than other tech corporations like Meta or Microsoft.
This is a partial selection of Google’s most groundbreaking developments in AI in recent years. The list shows the enormous range of research from machine learning and computer vision to robotics and autonomous systems.
AlphaGo: Go game AI that defeated world champion Lee Sedol in 2016.
BERT (Bidirectional Encoder Representations from Transformers): breakthrough language model for NLP from 2018.
PaLM (Pathways Language Model): enormous language model with 540 billion parameters from 2022
PaLM-SayCan: variant of PaLM that can carry on human-like conversations
Imagen: image generation AI for realistic and creative images
MusicLM: AI for music composition and production
RLHF (Reinforcement Learning with Human Feedback): Reinforcement learning with human feedback
Model Based RL: reinforcement learning with explicit models of the environment
RobustFit: Robust neural network against data noise
T5: Text-to-text transfer transducer for various NLP tasks
ViT (Vision Transformer): Image recognition with Transformer architecture
WAYMO: Autonomous driving and robot cab service
ProteinFold: Protein structure prediction with Deep Learning
FLOOD: AI for flood prediction and prevention
SLIDE: pixel-level image segmentation
Switch Transformers: efficient architecture for very large transformers
MuZero: reinforcement learning without environmental model in games
Meena: conversational AI from 2020
DALL-E & DALL-E 2: text-to-image generation.
When you look at the sheer amount of data Google has collected over the years, it initially makes you dizzy. Over 500 hours of video footage are uploaded to YouTube every day. The total video database is over 30 million hours. Add to that countless search queries, texts, images and conversations. It’s an almost unimaginable amount of data.
Coupled with intensive research activity in the AI field, it adds up to enormous potential. In recent years, Google has produced groundbreaking innovations such as the BERT language model, the AlphaGo Go AI, and the DALL-E image generator. When you put all these puzzle pieces together, things take on almost frightening proportions.
Project: Google Gemini
With the new Gemini AI system, Google now seems to have bundled the essence of these years of data aggregation and research. If the company succeeds in combining all of its AI developments and treasure trove of data in this system, it would be a demonstration of the sheer power of innovation. It will be interesting to see whether Gemini can deliver on this promise. In any case, the expectations are huge – here what we know and what the rumors say:
Facts Google Gemini
There are already some facts from the Google Blog:
Gemini is supposed to be released this fall
Gemini combines text and image generation
Can create contextual images based on text generation
Has been trained with YouTube transcripts
Google lawyers are monitoring the training to avoid copyright issues
Gemini is said to have multiple modalities, e.g., text, image, audio, video
Sergey Brin is involved in development
Rumors
From Reddit and countless other sources on the web, there could be other features as well:
Gemini is said to be capable of AI image understanding and modification
Is said to combine text capabilities like GPT-4 with image generation
Has been developed from the ground up as a multimodal model
Could handle audio, video, 3D renderings, graphics, etc.
Shall learn with user interactions and thus become effective AGI
Architecture could enable lifelong learning
There are concerns about privacy and information leaks between users
Google Gemini and the (then new) AI market:
The AI market situation is likely to change significantly with the introduction of Google Gemini:
For OpenAI:
Strong new competitor for ChatGPT and DALL-E.
Google has significantly more resources and data
OpenAI could lose market share and come under pressure
For Anthropic:
Claude must stand up to Google Assistant with Gemini
Advantage due to focus on security and control
Risk of falling behind
For Microsoft:
Partnership with OpenAI important to compete with Google
Microsoft must further develop Azure AI services
Advantage due to strong cloud infrastructure
For others:
Startups could have a very hard time against Google
Consolidation in the market possible
Significantly higher innovation speed
Overall, competitive pressure in the AI market will increase sharply. With its resources, Google is in a very good starting position to take a leading role with Gemini. It will be more difficult for other providers to keep pace with Google. It remains to be seen whether the high expectations for Gemini are justified.
Google Gemini Conclusion
Google Gemini seems to be a very ambitious AI project that should give the company a competitive edge. The combination of different modalities in one model is new and could improve AI capabilities tremendously. However, there are still many unanswered questions regarding the specific capabilities and data security. The release this fall will show whether Google can deliver on its promise to outperform the competition. Much is still speculation, but expectations are high.
#ai #ki #google #gemini #text #image #multimodal
Artificial intelligence steps in to assist dementia patients with ‘AI Powered Smart Socks’
People suffering from dementia could live more independently thanks to a pair of AI-powered socks that can track everything from a patient’s heart rate to movement.
Called “SmartSocks,” the AI-powered apparel was created in partnership between the University of Exeter and researchers at the start-up company Milbotix, according to SWNS. The socks can monitor a patient’s heart rate, sweat levels and motion to prevent falls while also promoting independence for those with dementia.
“I came up with the idea for SmartSocks while volunteering in a dementia care home,” SmartSocks creator Zeke Steer, CEO of Milbotix, told SWNS. “The current product is the result of extensive research, consultation and development.”
Steer’s great-grandmother suffered from dementia, which also helped spark the creation of the socks.
“The foot is actually a great place to collect data about stress, and socks are a familiar piece of clothing that people wear every day; our research shows that socks can accurately recognize signs of stress, which could really help not just those with dementia but their caregivers, too,” Steer, who has a background in robotics and AI, told SWNS.
The socks send the data collected from the patient to an app, which flags caregivers when the patient appears to be in distress. The warning could prevent falls and even tragedies as caregivers can respond to a patient before their stress escalates.
“I think the idea of SmartSocks is an excellent way forward to help detect when a person is starting to feel anxious or fearful,” said Margot Whittaker, director of nursing and compliance at Southern Healthcare in the U.K.
A handful of care homes overseen by Southern Healthcare, including The Old Rectory in Exeter, are already testing the tech-powered socks on patients, who report they are happy with how easy the socks are to use.
“Anything that’s simple and easy to do, and is improving our look at life as a whole, I’m happy with,” dementia patient John Piper, 83, told the BBC.
The socks do not need to be recharged, according to Milbotix’s website, and can be machine washed.
There are other products on the market that can also track a dementia patient’s heart rate or sweat levels, but they often come in the form of wristbands and watches, which can pose issues to those with dementia.
“Wearable devices are fast becoming an important way of monitoring health and activity,” Imperial College London’s Health and Social Care Lead Sarah Daniels told SWNS. “At our center, we have been trialing a range of wristbands and watches. However, these devices present a number of challenges for older adults and people affected by dementia.”
Daniels said wristbands or watches often don’t hold long charges and are taken off by patients and then lost.
“SmartSocks offer a new and promising alternative, which could avoid many of these issues,” Daniels said.
The University of Exeter is investigating how beneficial the socks are for dementia patients.
Artificial intelligence platforms are revamping health care across many disciplines, including another U.K.-based system called CognoSpeak, which can monitor speech patterns in a bid to detect early signs of dementia or Alzheimer’s.
U.K.-based start-up SmartSocks has developed hosiery that can monitor a dementia patient’s heart rate, motion and sweat levels with AI and alert caregivers to potential problems.
GPT-4 Code Interpreter can enhance math skills with code-based self-verification – OpenAI’s GPT-4 Code Interpreter’s remarkable performance in math datasets is largely attributed to its step-by-step code generation and dynamic solution refinement based on code execution outcomes. Expanding on this understanding, new research has introduced the innovative explicit code-based self-verification (CSV) prompt, which leverages GPT4-Code’s advanced code generation mechanism. This prompt guides the model to verify the answer and then reevaluate its solution with code. – The approach achieves an impressive accuracy of 84.32% on the MATH dataset, significantly outperforming the base GPT4-Code and previous state-of-the-art methods.
AI just reconstructed a Pink Floyd song from brain activity, and it sounds shockingly clear – Neuroscientists recorded electrical activity from areas of the brain as patients listened to the Pink Floyd song “Another Brick in the Wall, Part 1.” Using AI software, they were able to reconstruct the song from the brain recordings. This is the first time a song has been reconstructed from intracranial electroencephalography recordings.
Saudi Arabia and UAE join the race for scarce Nvidia chips – Saudi Arabia has purchased at least 3,000 of Nvidia’s H100 chips at $40,000 apiece, while UAE has ordered a fresh batch of semiconductors to power its LLM. This signals their intention to become major players in AI.
OpenAI acquires Global Illumination to work on core products, including ChatGPT – Its team leverages AI to build creative tools, infrastructure, and digital experiences. It previously designed and built products early on at Instagram and Facebook and has made significant contributions at YouTube, Google, Pixar, Riot Games, and other notable companies.
McKinsey unveils its own generative AI tool for employees: Lilli – It is a chat application for employees designed that serves up information, insights, data, plans, and even recommends the most applicable internal experts for consulting projects, all based on 100K+ documents and interview transcripts.
Opera’s iOS web browser will now include Aria – The AI assistant, Aria, is Opera’s browser AI product built in collaboration with OpenAI, integrated directly into the web browser, and free for all users.
Adobe Express with AI Firefly app is available worldwide – The web app is now out of beta and can be used free of charge in web browsers.
The Associated Press releases guidelines for Generative AI to its journalists
UK is using AI road safety cameras to detect potential driver offenses in passing vehicles
The founder of Centricity, a data analytics firm using AI, is indicted for defrauding investors by manipulating financial data.
Leaders with a Montana digital academy say bringing artificial intelligence to high schools is an opportunity to embrace the future.
Google said to be testing new life coach AI for providing helpful advice to people.
Alibaba Cloud MagicBuild Community has launched the digital human video generation tool called LivePortrait. It can generate digital human videos from photos, text, or voice, which can be applied in scenarios such as live broadcasting and corporate marketing.
Are physical SIMs about to be a thing of the past? Jump into the latest, and discover eSIMs #sponsored
Latest Sport Football Soccer News and Trends on August 17th, 2023
Atletico Madrid takes the alleged ‘threat’ of João Félix terminating his contract as a joke. If he wants to do so, he will have to pay his €350m release clause. Link
BREAKING: Theo Walcott is set to retire. Walcott, 34, who left Southampton at the end of last season, scored more than 100 goals for Arsenal and won 47 England caps. Link
From epic open-world adventures to mind-bending puzzles, these are the best iOS games to play on the upcoming iPhone 15.
Unraveling August 2023: August 16th, 2023
Latest AI News and Trends on August 16th, 2023
GPT-4 to replace content moderators
OpenAI aims to use its GPT-4 to solve the challenge of content moderation at scale. Also, they already used GPT-4 to develop and refine their own content policies. It provides three major benefits: consistent judgments, faster policy development, and improved worker well-being. However, perfect content moderation remains elusive, as both humans and machines make mistakes, particularly in handling misleading or aggressive content.
GPT-4 can interpret complex policy documentation and adapt instantly to updates, reducing the cycle from months to hours. This AI-assisted approach offers a positive future for digital platforms, where AI can help moderate online traffic and relieve the burden on human moderators.
Why does this matter?
GPT-4 can alleviate content moderation challenges and improve the efficiency and effectiveness of content moderation. This could be a solution for platforms like Facebook and Twitter, who’ve been grappling with content moderation for ages. OpenAI’s this approach could also appeal to smaller companies lacking resources.
Shepherd is a language model designed to critique and improve the outputs of other language models. It uses a high-quality feedback dataset to identify errors and provide suggestions for refinement. Despite its smaller size, Shepherd’s critiques are either equivalent or preferred to those from larger models like ChatGPT. In evaluations against competitive alternatives, Shepherd achieves a win rate of 53-87% compared to GPT-4.
Shepherd outperforms other models in human evaluation and is on par with ChatGPT. Shepherd offers a practical and valuable tool for enhancing language model generation.
Why does this matter?
Despite Shepherd’s smaller size, its critiques match or surpass those of larger models like ChatGPT, with a win rate of 53-87% agai