Decoding GPTs & LLMs: Training, Memory & Advanced Architectures Explained

Unlock the secrets of GPTs and Large Language Models (LLMs) in our comprehensive guide!


🤖🚀 Dive deep into the world of AI as we explore ‘GPTs and LLMs: Pre-Training, Fine-Tuning, Memory, and More!’ Understand the intricacies of how these AI models learn through pre-training and fine-tuning, their operational scope within a context window, and the intriguing aspect of their lack of long-term memory.

🧠 In this article, we demystify:

  • Pre-Training & Fine-Tuning Methods: Learn how GPTs and LLMs are trained on vast datasets to grasp language patterns and how fine-tuning tailors them for specific tasks.
  • Context Window in AI: Explore the concept of the context window, which acts as a short-term memory for LLMs, influencing how they process and respond to information.
  • Lack of Long-Term Memory: Understand the limitations of GPTs and LLMs in retaining information over extended periods and how this impacts their functionality.
  • Database-Querying Architectures: Discover how some advanced AI models interact with external databases to enhance information retrieval and processing.
  • PDF Apps & Real-Time Fine-Tuning: See how “speak with your PDF” apps combine text extraction with LLMs, and whether near-real-time fine-tuning actually plays a role.

Drop your questions and thoughts in the comments below and let’s discuss the future of AI! #GPTsExplained #LLMs #AITraining #MachineLearning #AIContextWindow #AILongTermMemory #AIDatabases #PDFAppsAI

Subscribe for weekly updates and deep dives into artificial intelligence innovations.

✅ Don’t forget to Like, Comment, and Share this video to support our content.

📌 Check out our playlist for more AI insights

📖 Read along with the podcast below:

Welcome to AI Unraveled, the podcast that demystifies frequently asked questions on artificial intelligence and keeps you up to date with the latest AI trends. Join us as we delve into groundbreaking research, innovative applications, and emerging technologies that are pushing the boundaries of AI. From the latest trends in ChatGPT and the recent merger of Google Brain and DeepMind to the exciting developments in generative AI, we’ve got you covered with a comprehensive update on the ever-evolving AI landscape. In today’s episode, we’ll cover GPTs and LLMs, their pre-training and fine-tuning methods, their context window and lack of long-term memory, architectures that query databases, PDF apps’ use of near-real-time fine-tuning, and the book “AI Unraveled,” which answers frequently asked questions about AI.

GPTs, or Generative Pre-trained Transformers, work by being trained on a large amount of text data and then using that training to generate output based on input. So, when you give a GPT a specific input, it produces the output that best matches the patterns it learned during training.


The way GPTs do this is by processing the input token by token, without actually understanding the entire output. The model simply recognizes that certain tokens are often followed by certain other tokens, based on its training. This knowledge is gained during the training process, where the large language model (LLM) processes enormous amounts of tokenized text and learns embeddings, which can be thought of as its “knowledge.”
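
To make the token-by-token idea concrete, here is a minimal sketch of greedy, autoregressive generation. The model and tokenizer objects are hypothetical stand-ins for any causal language model and its tokenizer (the call pattern loosely follows Hugging Face-style APIs); the point is the loop: score every candidate next token, append the most likely one, and repeat.

```python
# A minimal sketch of token-by-token (autoregressive) generation.
# "model" and "tokenizer" are hypothetical stand-ins for any causal language
# model and its tokenizer; greedy decoding is used for simplicity.
import torch

def generate(model, tokenizer, prompt, max_new_tokens=50):
    token_ids = tokenizer.encode(prompt, return_tensors="pt")   # text -> token ids
    for _ in range(max_new_tokens):
        logits = model(token_ids).logits                  # a score for every vocabulary token
        next_id = torch.argmax(logits[0, -1]).item()      # pick the most likely next token
        token_ids = torch.cat([token_ids, torch.tensor([[next_id]])], dim=1)  # append and repeat
        if next_id == tokenizer.eos_token_id:             # stop at end-of-sequence
            break
    return tokenizer.decode(token_ids[0])
```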

After the training stage, an LLM can be fine-tuned to improve its accuracy for a particular domain. This is done by providing it with domain-specific labeled data and adjusting its parameters until it reaches the desired accuracy on that data.
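
For readers who like to see the mechanics, here is a rough sketch of what supervised fine-tuning can look like in code. It assumes a Hugging Face-style causal language model and tokenizer; the dataset, learning rate, and epoch count are illustrative placeholders rather than a recommended recipe.

```python
# A rough sketch of supervised fine-tuning on domain-specific text.
# Assumes a Hugging Face-style causal LM; hyperparameters are illustrative.
from torch.optim import AdamW

def fine_tune(model, tokenizer, domain_texts, epochs=3, lr=5e-5):
    optimizer = AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for text in domain_texts:
            batch = tokenizer(text, return_tensors="pt", truncation=True)
            # For causal LMs, passing the input ids as labels trains next-token prediction.
            outputs = model(**batch, labels=batch["input_ids"])
            outputs.loss.backward()   # gradient of the next-token loss
            optimizer.step()          # nudge the parameters toward the domain data
            optimizer.zero_grad()
    return model
```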

Now, let’s talk about “memory” in these models. LLMs do not have a long-term memory in the same way humans do. If you were to tell an LLM that you have a 6-year-old son, it wouldn’t retain that information like a human would. However, these models can still answer related follow-up questions in a conversation.

For example, if you ask the model to tell you a story and then ask it to make the story shorter, it can generate a shorter version of the story. This is possible because the previous Q&A is passed along in the context window of the conversation. The context window keeps track of the conversation history, allowing the model to maintain some context and generate appropriate responses.
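
A small sketch may help show what “passed along in the context window” means in practice: the application simply re-sends the whole conversation with every request. The chat function below is a hypothetical wrapper around any chat-completion-style API; only the bookkeeping pattern is the point.

```python
# A sketch of conversation bookkeeping: the full history travels with every
# request, which is why follow-ups like "make it shorter" work at all.
# "chat" is a hypothetical function wrapping any chat-completion-style API.
conversation = [
    {"role": "system", "content": "You are a helpful storyteller."},
    {"role": "user", "content": "Tell me a story about a dragon."},
]

def ask(chat, conversation, user_message):
    conversation.append({"role": "user", "content": user_message})
    reply = chat(messages=conversation)      # the model sees the entire history each time
    conversation.append({"role": "assistant", "content": reply})
    return reply

# ask(chat, conversation, "Now make that story shorter.")
# The follow-up works only because the original story is still in `conversation`.
```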

As the conversation continues, the context window and the number of tokens required will keep growing. This can become a challenge, as there are limitations on the maximum length of input that the model can handle. If a conversation becomes too long, the model may start truncating or forgetting earlier parts of the conversation.
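
One common workaround is to trim the oldest turns once the history approaches the model’s limit. The sketch below uses a crude word count as a stand-in for real token counting, which would normally use the model’s own tokenizer.

```python
# A sketch of a simple trimming strategy: keep the system message, drop the
# oldest turns until the (roughly estimated) token count fits the window.
def count_tokens(message):
    return len(message["content"].split())    # crude word-count stand-in for a real tokenizer

def trim_history(conversation, max_tokens=4000):
    system, turns = conversation[0], conversation[1:]
    while turns and sum(count_tokens(m) for m in [system] + turns) > max_tokens:
        turns.pop(0)                           # forget the earliest exchange first
    return [system] + turns
```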

Regarding architectures and databases, some systems query a database before the model produces an answer. For example, an application could run a query like “select * from user_history” to retrieve relevant information and feed it to the model before it generates a response. Vector databases are a popular variant of this idea: instead of an exact SQL lookup, they retrieve the stored passages whose embeddings are most similar to the user’s question.
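
As an illustration of the retrieval side, here is a minimal sketch of a “vector database” reduced to an in-memory list of (text, embedding) pairs ranked by cosine similarity. The embed function is a placeholder for any embedding model; real systems use dedicated vector stores, but the idea is the same.

```python
# A minimal sketch of retrieval against a "vector database", here just an
# in-memory list of (text, embedding) pairs. "embed" is a placeholder for any
# embedding model; production systems use dedicated vector stores.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(embed, store, question, k=3):
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]   # top-k passages to place in the prompt

# prompt = "Answer using this context:\n" + "\n".join(retrieve(embed, store, question)) + "\n\n" + question
```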

There are also architectures where the model undergoes near-real-time fine-tuning when a chat begins. This means the model is tuned on data specific to that chat session, which helps it generate more context-aware responses. This is often described as how “speak with your PDF” apps work, where the model is adapted to the content of a specific PDF so it can provide relevant responses.

In summary, GPTs and LLMs work by being pre-trained on a large amount of text data and then using that training to generate output based on input. They do this token by token, without truly understanding the complete output. LLMs can be fine-tuned to improve accuracy for specific domains by providing them with domain-specific labeled data. While LLMs don’t have long-term memory like humans, they can still generate coherent responses in a conversation by using the context window to keep track of the conversation history. Some architectures may query databases before generating responses, and others may undergo near-real-time fine-tuning to provide more context-aware answers.

GPTs and Large Language Models (LLMs) are fascinating tools that have revolutionized natural language processing. It seems like you have a good grasp of how these models function, but I’ll take a moment to provide some clarification and expand on a few points for a more comprehensive understanding.

When it comes to GPTs and LLMs, pre-training and token prediction play a crucial role. During the pre-training phase, these models are exposed to massive amounts of text data. This helps them learn to predict the next token (word or part of a word) in a sequence based on the statistical likelihood of that token following the given context. It’s important to note that while the model can recognize patterns in language use, it doesn’t truly “understand” the text in a human sense.
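
A toy example makes “statistical likelihood” tangible: the model produces a score (logit) for every token in its vocabulary, and a softmax turns those scores into probabilities. The four-token vocabulary and the numbers below are invented purely for illustration.

```python
# A toy illustration of next-token probabilities: invented logits for
# "The cat sat on the ___", turned into probabilities with a softmax.
import numpy as np

vocab  = ["mat", "dog", "moon", "banana"]
logits = np.array([4.1, 2.3, 1.0, -2.0])        # made-up scores from a made-up model

probs = np.exp(logits) / np.exp(logits).sum()   # softmax
for token, p in zip(vocab, probs):
    print(f"{token}: {p:.1%}")                  # "mat" dominates; "banana" is near zero
```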

During the training process, the model becomes familiar with these large datasets and learns embeddings. Embeddings are representations of tokens in a high-dimensional space, and they capture relationships and context around each token. These embeddings allow the model to generate coherent and contextually appropriate responses.
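
To picture what “relationships in a high-dimensional space” means, here is a toy sketch in which related tokens sit close together and unrelated ones sit far apart. The four-dimensional vectors are invented for illustration; real embeddings have hundreds or thousands of dimensions and are learned from data, not hand-written.

```python
# A toy sketch of embeddings as points in space: related tokens end up close
# together. These 4-dimensional vectors are invented; real embeddings are
# learned and much larger.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.78, 0.70, 0.12, 0.04]),
    "apple": np.array([0.05, 0.10, 0.90, 0.70]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high similarity: related concepts
print(cosine(embeddings["king"], embeddings["apple"]))  # low similarity: unrelated concepts
```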


However, pre-training is just the beginning. Fine-tuning is a subsequent step that tailors the model to specific domains or tasks. It involves training the model further on a smaller, domain-specific dataset. This process adjusts the model’s parameters, enabling it to generate responses that are more relevant to the specialized domain.

Now, let’s discuss memory and the context window. LLMs like GPT do not possess long-term memory in the same way humans do. Instead, they operate within what we call a context window. The context window determines the amount of text (measured in tokens) that the model can consider when making predictions. It provides the model with a form of “short-term memory.”
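
If you want to see how much of the window a prompt actually consumes, token counts can be measured directly. The sketch below uses tiktoken, OpenAI’s open-source tokenizer library; the encoding name and the 4,096-token limit are illustrative, since the exact tokenizer and window size vary by model.

```python
# A sketch of measuring context-window usage with tiktoken. The encoding name
# and the 4096-token budget are illustrative; both vary by model.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
prompt = "Tell me a story about a dragon who learns to paint."

tokens = encoding.encode(prompt)
print(f"{len(tokens)} tokens used of a hypothetical 4096-token window")
```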

For follow-up questions, the model relies on this context window. So, when you ask a follow-up question, the model factors in the previous interaction (the original story and the request to shorten it) within its context window. It then generates a response based on that context. However, it’s crucial to note that the context window has a fixed size, which means it can only hold a certain number of tokens. If the conversation exceeds this limit, the oldest tokens are discarded, and the model loses track of that part of the dialogue.

It’s also worth mentioning that there is no real-time fine-tuning happening with each interaction. The model responds based on its pre-training and any fine-tuning that occurred prior to its deployment. This means that the model does not learn or adapt during real-time conversation but rather relies on the knowledge it has gained from pre-training and fine-tuning.

While standard LLMs like GPT do not typically utilize external memory systems or databases, some advanced models and applications may incorporate these features. External memory systems can store information beyond the limits of the context window. However, it’s important to understand that these features are not inherent to the base LLM architecture like GPT. In some systems, vector databases might be used to enhance the retrieval of relevant information based on queries, but this is separate from the internal processing of the LLM.
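
As a sketch of what such an external memory can look like at the application layer, the snippet below stores facts outside the model and injects them back into the prompt in later sessions. The file name and helper functions are hypothetical; the key point is that this persistence lives in the application, not inside the LLM.

```python
# A sketch of application-side "long-term memory": facts are saved outside the
# model and prepended to future prompts. The file name is hypothetical; nothing
# here changes the model itself.
import json, os

MEMORY_FILE = "user_memory.json"

def recall():
    if not os.path.exists(MEMORY_FILE):
        return []
    with open(MEMORY_FILE) as f:
        return json.load(f)

def remember(fact):
    facts = recall()
    facts.append(fact)
    with open(MEMORY_FILE, "w") as f:
        json.dump(facts, f)

# remember("The user has a 6-year-old son.")
# system_prompt = "Known facts about the user:\n" + "\n".join(recall())
```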

In relation to the “speak with your PDF” applications you mentioned, they generally employ a combination of text extraction and LLMs. The purpose is to interpret and respond to queries about the content of a PDF. These applications do not engage in real-time fine-tuning, but instead use the existing capabilities of the model to interpret and interact with the newly extracted text.
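
Here is a sketch of that extract-and-retrieve pattern, assuming the pypdf library for text extraction. The chunk size is an arbitrary choice, and the retrieval step would reuse something like the earlier cosine-similarity sketch; no model weights are updated at any point, which is why this is not fine-tuning.

```python
# A sketch of the extract-and-retrieve pattern behind "speak with your PDF"
# apps, assuming the pypdf library. The chunk size is arbitrary; retrieval
# would reuse an embedding-similarity search like the earlier sketch.
from pypdf import PdfReader

def pdf_to_chunks(path, chunk_size=1000):
    reader = PdfReader(path)
    text = "".join(page.extract_text() or "" for page in reader.pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# The chunks are embedded once; for each question, the most similar chunks are
# placed into the prompt. No weights are updated, so no fine-tuning occurs.
```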

To summarize, LLMs like GPT operate within a context window and utilize patterns learned during pre-training and fine-tuning to generate responses. They do not possess long-term memory or real-time learning capabilities during interactions, but they can handle follow-up questions within the confines of their context window. It’s important to remember that while some advanced implementations might leverage external memory or databases, these features are not inherently built into the foundational architecture of the standard LLM.

Are you ready to dive into the fascinating world of artificial intelligence? Well, I’ve got just the thing for you! It’s an incredible book called “AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence.” Trust me, this book is an absolute gem!

Now, you might be wondering where you can get your hands on this treasure trove of knowledge. Look no further, my friend. You can find “AI Unraveled” at popular online platforms like Etsy, Shopify, Apple, Google, and of course, our old faithful, Amazon.

This book is a must-have for anyone eager to expand their understanding of AI. It takes those complicated concepts and breaks them down into easily digestible chunks. No more scratching your head in confusion or getting lost in a sea of technical terms. With “AI Unraveled,” you’ll gain a clear and concise understanding of artificial intelligence.

So, if you’re ready to embark on this incredible journey of unraveling the mysteries of AI, go ahead and grab your copy of “AI Unraveled” today. Trust me, you won’t regret it!

On today’s episode, we explored the power of GPTs and LLMs, discussing their ability to generate outputs, be fine-tuned for specific domains, and utilize a context window for related follow-up questions. We also learned about their limitations in terms of long-term memory and real-time updates. Lastly, we shared information about the book “AI Unraveled,” which provides valuable insights into the world of artificial intelligence. Join us next time on AI Unraveled as we continue to demystify frequently asked questions on artificial intelligence and bring you the latest trends in AI, including ChatGPT advancements and the exciting collaboration between Google Brain and DeepMind. Stay informed, stay curious, and don’t forget to subscribe for more!
