AWS, Azure, and Google Cloud Certification Testimonials and Dumps

Register for AI-Driven Cloud Cert Prep Dumps

Do you want to become a professional DevOps engineer, a cloud solutions architect, a cloud engineer, a modern developer or IT professional, a versatile product manager, or a hip project manager? If so, cloud skills and certifications may be just what you need to move into the cloud, or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

2022 AWS Cloud Practitioner Exam Preparation

In this blog, we share AWS, Azure, and GCP cloud certification testimonials and frequently asked questions and answers dumps.

#djamgatech #aws #azure #gcp #ccp #az900 #saac02 #saac03 #az104 #azai #dasc01 #mlsc01 #scsc01 #azurefundamentals #awscloudpractitioner #solutionsarchitect #datascience #machinelearning #azuredevops #awsdevops #az305 #ai900

  • New courses and updates from AWS Training and Certification in May 2022
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 24, 2022 at 4:25 pm

    Check out news and updates from AWS Training and Certification for cloud learners, AWS customers, and AWS Partners for May 2022. New digital courses focus on cloud essentials, networking basics, compute, container management, and audit activities. Classroom training also is available for learning about securing workloads on the AWS Cloud and building a data warehousing solution, and there are certification updates for Advanced Networking – Specialty, Solutions Architect – Professional, and SAP on AWS – Specialty . . .

  • Public preview: Azure Communication Services APIs in US Government cloud
    by Azure service updates on May 24, 2022 at 4:00 pm

    Use Azure Communication Services APIs for voice, video, and messaging in US Government cloud.

  • Microsoft AZ-800 (Administering Windows Server Hybrid Core Infrastructure)
    by /u/No-Energy2718 (Microsoft Azure Certifications) on May 24, 2022 at 11:41 am

    submitted by /u/No-Energy2718 [link] [comments]

  • SC-100 Study Cram
    by /u/JohnSavill (Microsoft Azure Certifications) on May 24, 2022 at 11:03 am

    submitted by /u/JohnSavill [link] [comments]

  • AZ-104 Passed!
    by /u/Walkipedia00 (Microsoft Azure Certifications) on May 23, 2022 at 9:15 pm

    Passed the test last night, barely got there but a pass is a pass! I used Savill Videos and SkillCertPro tests mainly. Also did a bit on Udemy with Scott Duffy. Test was a lot like SkillCertPro, but all of Savill's videos really give you a good understanding of the material. I probably only studied 20-25 hours total so I would recommend more than that before taking the exam. Good luck to everyone who hasn't taken it yet!! submitted by /u/Walkipedia00 [link] [comments]

  • Az-700 Objectives notion template
    by /u/benj_crew (Microsoft Azure Certifications) on May 23, 2022 at 8:10 pm

    Hi all, whenever I sit an exam, I like to create a little checklist of all the exam objectives. It works really well to tick them off after the bulk of my study is done. That way I can focus on what I don't know. Anyway, people seemed to like the one I made for AZ-104: https://www.reddit.com/r/AzureCertification/comments/twqicc/az104_objectives_notion_template/ So here is my new checklist for AZ-700: https://fork-psychology-8fe.notion.site/3daf733b15954ba6a298f8ead828fc2b?v=697c9e170c8747f9906da8b90aaa712f Just click 'duplicate' in the top-right corner to copy it to your own Notion. Best of luck to you all. submitted by /u/benj_crew [link] [comments]

  • Passed Az-104
    by /u/InsaneMethod (Microsoft Azure Certifications) on May 23, 2022 at 5:30 pm

    Whew! That was a tough one. Used Savill videos, Microsoft Learn, MeasureUp, and my hands-on experience with Azure at the MSP I've worked at for the past 6 months. About 50 hours of studying in total. Will probably go for the AZ-700 next as networking is my strong suit. I wish everyone luck on their exam tries! submitted by /u/InsaneMethod [link] [comments]

  • Azure certification path for someone with no experience coding?
    by /u/FriendlyBrownMan (Microsoft Azure Certifications) on May 23, 2022 at 4:35 pm

    I work in helpdesk currently. My job is considering moving from on-prem to the cloud (Azure). I have already decided to start with the Azure Fundamentals cert (my company is offering to pay for it if I pass, as they would with any certification). After I pass this exam, what should be my next certification? I am looking to move out of help desk and into a role where I can help manage our cloud infrastructure. Last year I passed the AWS SAA and tried applying to jobs for 6 months with no luck because of experience. In this situation I'll get real on-the-job experience with Azure, so I'm going for it. Please help me find the right path. Thanks! submitted by /u/FriendlyBrownMan [link] [comments]

  • passed az900 yesterday.
    by /u/DontDoIt2121 (Microsoft Azure Certifications) on May 23, 2022 at 3:24 pm

    Thinking of doing AZ-500 next since I watched the lectures last week during MS security week and have access to the labs. Any reason why I should study AZ-104 first, or should I just go for AZ-500 next weekend? I will be on to AZ-104 in the next month regardless, and hopefully AZ-305 the following month. submitted by /u/DontDoIt2121 [link] [comments]

  • Renewal dates
    by /u/mrM1975 (Microsoft Azure Certifications) on May 23, 2022 at 7:20 am

    I received an Az cert renewal notice from Microsoft today. The cert expires Nov '22. If I took the exam and passed today (May '22), when would my next renewal be due? Would it be May '24 or Nov '24? I can't find anything about it here: https://docs.microsoft.com/en-us/learn/certifications/renew-your-microsoft-certification TIA submitted by /u/mrM1975 [link] [comments]

  • New to Azure - Mentor or guide needed
    by /u/RefrigeratorWorried3 (Microsoft Azure Certifications) on May 23, 2022 at 5:40 am

    I am totally new to Azure and need some tips on where to start and how to proceed with certification. I have moderate programming experience. submitted by /u/RefrigeratorWorried3 [link] [comments]

  • AZ900 free practice tests?
    by /u/greyskull57 (Microsoft Azure Certifications) on May 22, 2022 at 6:38 pm

    Hi guys, is there any way to get paid AZ-900 practice tests for free? I don't feel like putting money toward AZ-900; I just want to try them, and I'll pay for AZ-104. submitted by /u/greyskull57 [link] [comments]

  • I have passed the MS-900. Where Do I Go From Here for SOC route.
    by /u/Moynzy (Microsoft Azure Certifications) on May 22, 2022 at 4:39 pm

    Hello all, my workplace uses M365 and I was asked to pass the MS-900. Where do I go from here? There are internal positions such as level 2 Azure Engineer or SOC Analyst roles. I want to follow the SOC route, so do I study for the SC-900 or the AZ-900? Thanks for the support. Edit: thanks all. Booked the SC-900 for June 11th and the AZ-900 for July 30th. Wish me luck. submitted by /u/Moynzy [link] [comments]

  • Recommended Training for the MD-100
    by /u/creatureshock (Microsoft Azure Certifications) on May 22, 2022 at 3:43 pm

    Hello all. Not sure this is the right place, but I'm taking a shot in the dark. I have to take the MD-100 for a new job and I'm having a hard time finding decent training for it. I've used the free training from Microsoft and the John Christopher training on Udemy, but I'm failing the MeasureUp practice tests that Microsoft/Pearson VUE sold along with the test. The practice tests keep asking questions that aren't covered in any of the training. Does anyone have any recommendations? submitted by /u/creatureshock [link] [comments]

  • PEARSON VUE RUINED MY EXAM
    by /u/CarefulArtichoke7768 (Microsoft Azure Certifications) on May 22, 2022 at 11:50 am

    Been revising for 2 months for the SC-200 exam and gave up weekends to study. Exam day came and my exam didn't load properly: the questions were there, but there was nowhere to fill in my answers. I spoke to 4 different customer service people who tried resetting my exam and it didn't work. I've now been told they need to review the case and it can take a week for them to get back to me. It's really thrown off all the prep I have done and I feel massively deflated now. The customer service team I spoke to were a joke; it took forever to get a single response from them. submitted by /u/CarefulArtichoke7768 [link] [comments]

  • Free CISSP, Security+, SC-900, AZ-900
    by /u/rj4511 (Microsoft Azure Certifications) on May 22, 2022 at 10:47 am

    submitted by /u/rj4511 [link] [comments]

  • AZ-204 PowerShell Commands
    by /u/IS2020 (Microsoft Azure Certifications) on May 22, 2022 at 3:31 am

    Preparing for the 204, and I noticed that the Microsoft learning paths use Azure CLI commands for the hands on portions. Is it enough to be fluent in CLI, or must you know the PowerShell equivalents for the exam? Thanks! submitted by /u/IS2020 [link] [comments]
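    As an aside for anyone comparing the two toolchains: most day-to-day management tasks have both an Azure CLI and an Az PowerShell form. A few illustrative pairs are shown below; the resource names are placeholders, and running any of these requires an authenticated Azure subscription.

```shell
# Azure CLI commands with their Az PowerShell equivalents (names are placeholders)

# Create a resource group
az group create --name demo-rg --location eastus
# PowerShell: New-AzResourceGroup -Name demo-rg -Location eastus

# Create a storage account
az storage account create --name demostor123 --resource-group demo-rg \
    --location eastus --sku Standard_LRS
# PowerShell: New-AzStorageAccount -Name demostor123 -ResourceGroupName demo-rg \
#             -Location eastus -SkuName Standard_LRS

# List web apps in a resource group
az webapp list --resource-group demo-rg --output table
# PowerShell: Get-AzWebApp -ResourceGroupName demo-rg
```

    Whether the exam itself requires the PowerShell forms is a separate question, but knowing how the two map to each other makes either syntax easy to read.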

  • How long did it take you to start learning a course to getting your certs?
    by /u/undertheinfluenceof (Microsoft Azure Certifications) on May 20, 2022 at 3:27 pm

    Just out of total curiosity, how long did it take any of you to begin studying for, say, AZ-900 to passing the final exam? (Does not have to be AZ-900, but any of your certs.) I am totally curious about time-frames and what a realistic expectation it should take. I also know timelines will vary depending on experience. submitted by /u/undertheinfluenceof [link] [comments]

  • How to create labs for AZ-104?
    by /u/Real_Lemon8789 (Microsoft Azure Certifications) on May 20, 2022 at 3:25 pm

    I have an Azure Developer tenant that includes Microsoft licensing, but does not include any subscription credits. I had opened a free Azure trial last year with a different tenant and never used it. So, the $200 subscription credit expired and got wasted in the first 30 days. Is there another way to get a 30 day trial of subscription credits and apply it to the developer tenant I have now or else create a separate tenant with $200 credit to use for AZ-104 test prep? I was also planning to use GitHub labs related to AZ-104, but I don’t understand how to use that. I went to the GitHub page and I just see links on the page to a bunch of files. I don’t see any explanation of what to do with all those file links. submitted by /u/Real_Lemon8789 [link] [comments]

  • Cert suggestions?
    by /u/SENDMEYOURROBOTDICKS (Microsoft Azure Certifications) on May 20, 2022 at 8:22 am

    Hey guys, I was wondering if anyone could suggest which certs I should try to tackle after getting AZ-900. A bit about myself: I have about 8 years of experience as a support engineer, mostly on-premises with Windows Server and Microsoft Exchange. I've had some entry-level experience with networking and I haven't done much automation yet. I'm planning to fill that gap, start using PowerShell and Python, and move towards a role that resembles an SRE. Is it worth getting the AZ-104 certification, or should I move directly to AZ-204 and AZ-400? submitted by /u/SENDMEYOURROBOTDICKS [link] [comments]

  • 05.2022 Free Voucher or Discounts
    by /u/TrySmile (Microsoft Azure Certifications) on May 20, 2022 at 7:01 am

    Free voucher:
    https://www.microsoft.com/en-us/cloudskillschallenge/build/registration/2022
    https://www.microsoft.com/en-rs/trainingdays?activetab=ms-training-days:primaryr4
    Discounts:
    https://developer.microsoft.com/en-us/offers/30-days-to-learn-it
    submitted by /u/TrySmile [link] [comments]

  • Passed AZ-900 at a Pearson Vue center
    by /u/gotopune (Microsoft Azure Certifications) on May 20, 2022 at 3:57 am

    Today was my first attempt at AZ-900 and I passed (scored 805). I took the test at a Pearson Vue center because it was under a mile away from my place, so I figured why not 😉 A few things to note: I prepared using the material on the MS website. I also had access to the ESI portal (from work), and I took the practice test there. I did not use any YouTube material. I don't have a lot of hands-on experience, but in my prep test I noticed one question had a screenshot of the Azure portal and asked us to identify what one needs to select to perform an action. Therefore, I went through the Azure portal and studied the names of each of the icons. Sure enough, I got 2 such questions in the test, so that prep helped. I read here on this sub that the exam changed in May. I didn't know what changed and I can't tell you if I noticed anything, but there was an option to take a break. The condition specified that after we take a break, we're not allowed to go back to any of the questions we've already answered. Hope this helps people looking to take the test soon. Please definitely go through the Microsoft learning portal and get familiar with some of the options on the Azure portal. submitted by /u/gotopune [link] [comments]

  • AZ-104 Skillcertpro Question Confusion.
    by /u/Visual_Classic_7459 (Microsoft Azure Certifications) on May 20, 2022 at 12:59 am

    Hey guys, I'm currently studying for AZ-104 and I've come across what I think are conflicting questions, though I suspect one of them is part of a much larger question about SLA availability: in one picture the question deals only with a scale set, whereas in the other you see two questions, one about a scale set and the other about an availability set. Please tell me whether the given answers are actually correct, because I've done some research but can't find a concrete answer. The reason I say one of them may be part of a larger question is that I have now taken the AZ-104 exam twice, and I noticed that questions like this tend to be a two-in-one deal with dropdown boxes for the possible answers. Please correct me if I'm wrong on any of this. Thanks. https://preview.redd.it/4u7av0c65j091.jpg?width=1186&format=pjpg&auto=webp&s=2ebab588a854e4ef01963a7ed64d503a3d7708d3 https://preview.redd.it/lex4s9c65j091.jpg?width=899&format=pjpg&auto=webp&s=4e7910d75423fed558730fe57bab102e836f2ebd submitted by /u/Visual_Classic_7459 [link] [comments]

  • Azure Terrafy- Azure’s best Terraform buddy
    by /u/ormamag (Microsoft Azure Certifications) on May 19, 2022 at 8:41 pm

    submitted by /u/ormamag [link] [comments]

  • Thoughts on A Cloud Guru for Azure?
    by /u/Drewskiii727 (Microsoft Azure Certifications) on May 19, 2022 at 5:27 pm

    They are having a 40% off sale for the year, would you recommend I get this? Trying to learn Azure and coming from a non-tech background. Hoping to make a career switch to tech in the future. submitted by /u/Drewskiii727 [link] [comments]

  • New Research shows Google Cloud Skill Badges build in-demand expertise
    by (Training & Certifications) on May 19, 2022 at 4:00 pm

    We live in a digital world, and the future of work is in the cloud. In fact, 61% of HR professionals believe hiring developers will be their biggest challenge in the years ahead.[1] During your personal cloud journey, it's critical to build and validate your skills in order to evolve with the rapidly changing technology and business landscape. That is why we created skill badges: a micro-credential issued by Google Cloud to demonstrate your cloud competencies and your commitment to staying on top of the latest Google Cloud solutions and products. To better understand the value of skill badges to holders' career goals, we commissioned a third-party research firm, Gallup, to conduct a global study on the impact of Google Cloud skill badges. Skill badge earners overwhelmingly gain value from and are satisfied with Google Cloud skill badges. Holders state that they feel well equipped with the variety of skills gained through skill badge attainment, are more confident in their cloud skills, are excited to promote their skills to their professional network, and are able to leverage skill badges to achieve future learning goals, including a Google Cloud certification. Key findings:[2]

    - 87% agree skill badges provided real-world, hands-on cloud experience
    - 86% agree skill badges helped build their cloud competencies
    - 82% agree skill badges helped showcase growing cloud skills
    - 90% agree that skill badges helped them in their Google Cloud certification journey
    - 74% plan to complete a Google Cloud certification in the next six months

    Join thousands of other learners and take your career to the next level with Google Cloud skill badges. To learn more, download the Google Cloud Skill Badge Impact Report at no cost.

    [1] McKinsey Digital, "Tech Talent Tectonics: Ten new realities for finding, keeping, and developing talent," 2022
    [2] Gallup study, sponsored by Google Cloud Learning: "Google Cloud Skill Badge Impact Report," May 2022

    Related Article: How to prepare for — and ace — Google's Associate Cloud Engineer exam. The Cloud Engineer Learning Path is an effective way to prepare for the Associate. Read Article

  • AZ900!!!!
    by /u/Mildew69 (Microsoft Azure Certifications) on May 19, 2022 at 3:09 pm

    Give me your best advice, experience, and study suggestions for my AZ-900 test next Wednesday afternoon. I have 10+ years in IT and am familiar with the cloud, but not proficient. Thanks, y'all. This subreddit has been super helpful thus far. submitted by /u/Mildew69 [link] [comments]

  • Study group for AZ-104
    by /u/Dom_thedestroyer (Microsoft Azure Certifications) on May 19, 2022 at 11:56 am

    Looking for anyone who wants to study or is already studying for the 104 exam. I'm planning on taking it in June, so I'm looking for someone to bounce ideas off of. Lmk submitted by /u/Dom_thedestroyer [link] [comments]

  • Top five reasons AWS Partners should take AWS Training
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 16, 2022 at 4:27 pm

    Are you new to an Amazon Web Services (AWS) Partner business and the cloud? Not sure where to start your cloud learning journey? It may feel daunting, but AWS offers Partner-exclusive courses to make it easier to understand cloud fundamentals. In fewer than 30 minutes, you can begin boosting your confidence and credibility with both customers and your organization . . .

  • When Artificial Intelligence becomes more than a passion
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 5, 2022 at 6:01 pm

    Learn how AWS Certifications can help you validate your knowledge and enhance your credibility. Dipayan Das updated his artificial intelligence (AI) skills with AWS Training and Certification. He shares the resources he used and the impact of his training, including his ability to add value to his organization and clients. . .

  • If you are looking for a Job relating to azure try r/AzureJobs
    by /u/whooyeah (Microsoft Azure Certifications) on May 5, 2022 at 10:41 am

    submitted by /u/whooyeah [link] [comments]

  • GCP Certification missing certificates
    by /u/ProtossforAiur (Google Cloud Platform Certification) on May 2, 2022 at 8:31 am

    These certifications are a scam. They provide you with a link to the certificate, and they can remove that link whenever they want. If you get certified, make sure you download the PDF, because Google doesn't keep a backup of certificates. Yes, you heard that right: we asked for a copy of the certification because the link was not working, and they replied that they couldn't. submitted by /u/ProtossforAiur [link] [comments]

  • How we’re keeping up with the increasing demand for the Google Workspace Administrator role
    by (Training & Certifications) on April 29, 2022 at 4:00 pm

    We've rebranded the Professional Collaboration Engineer certification to the Professional Google Workspace Administrator certification and updated the learning path. To mark the moment, we sat down with Erik Geerdink from SADA to talk about how the Google Workspace Administrator role and demand for this skill set have changed over the years. Erik is a Deployment Engineer and Pod Lead. He holds a Professional Google Workspace Administrator certification and has worked with Google Workspace for more than six years.

    What was it like starting out as a Google Workspace Administrator? When I first started, I was doing Google Workspace support as a Level 2 Administrator. At that time, there were fewer admin controls for Google Workspace. There were calendar issues, some mail routing issues, maybe a little bit of data loss prevention (DLP), but that was about it. About 5 years ago, I transferred into Google deployment and really got to see all that went on with deploying Google Workspace and troubleshooting advanced issues. Since then, what you can accomplish in the admin console has really taken off. There are still Gmail and Calendar configurations, but the security posture that Google offers now — they've really upped their game. The extent of DLP isn't just Gmail and Drive anymore; it extends into Chat. And we're doing a lot of Context-Aware Access to make sure users only have as much access as IT compliance allows in our deployments. Calendar Interop, which allows users in different systems to see availability, has been a big area of focus as well.

    How has the Google Workspace Administrator role changed over the last few years? It used to be that you were a systems admin who also took care of the Google portion. But with Google Workspace often being the entry point to Google Cloud, we've had to become more knowledgeable about the platform as a whole. Now, we not only do training with Google Workspace admins for our projects, we also talk to their Google Cloud counterparts. Google Workspace is changing all the time, and the weekly updates that Google sends out are great. As an engineering team, every Wednesday we review each Google Workspace update that's come out to understand how it affects us, our clients, and our upcoming projects. There's a lot to it. It's not just a little admin role anymore. It's a strategic technology role.

    What motivated you to get Google Cloud Certified? I spent the first 15 years of my career in cold server room roles, and I knew I had to get cloudy. I wanted to work with Google, and it was a no-brainer given the organization's reputation for innovation. I knew this certification exam was the one to get me in the door. The Professional Google Workspace Administrator certification was required to level up as an administrator and to make sure our business kept getting the most out of Google Workspace.

    How has the demand for certified Google Workspace admins changed recently? Demand has absolutely gone up. We are growing so much, and we need more professionals with this certification. It's required for all of our new hires. When I see a candidate who already has the certification, they go to the top of the list. I'll skip all the other resumes to find someone who has this experience. We're searching globally, not just in North America, to find the right people to fill this strategic role.

    Explore the new learning path: In order to keep up with the changing demands of this role, we've rebranded the Professional Collaboration Engineer certification to the Professional Google Workspace Administrator certification and updated the learning path. The learning path now aligns with the improved admin console. We've replaced the readings with videos for a better learning experience: in total, we added 17 new videos across 5 courses to match new features and functionality. Earn the Professional Google Workspace Administrator certification to distinguish yourself among your peers and showcase your skills.

    Related Article: Unlock collaboration with Google Workspace Essentials. Introducing Google Workspace Essentials Starter, a no-cost offering to bring modern collaboration to work. Read Article

  • How one learner earned four AWS Certifications in four months
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 28, 2022 at 4:16 pm

    Ever wonder what it takes to earn an AWS Certification? Imagine earning four in four months. Rola Dali, a senior software developer at Local Logic, shares her experience and insights about challenging herself to do just that. She breaks down the resources she found most helpful and her overall motivation to invest in her cloud learning journey . . .

  • Build your cloud skills with no-cost access to Google Cloud training on Coursera
    by (Training & Certifications) on April 28, 2022 at 4:00 pm

    Attracting talented individuals with cloud skills is critical to success, as organizations continue to adopt and optimize cloud technology. The lack of cloud expertise and experience is a top and growing challenge for businesses as they expand their cloud footprint and search for skilled talent. To help meet this need, we are now offering access to over 500 Google Cloud self-paced labs made available on Coursera. A selected collection of the most popular self-paced labs, known as projects, are available at no cost for one month, from April 28 to May 29, 2022. Learners can choose their preferred format to claim one month of free access to a top Google Cloud project, course, Specialization, or Professional Certificate.

    What is a lab? A lab is a learning experience where you complete a scenario-based use case by following a set of instructions in a specified amount of time in an interactive, hands-on environment. Labs are completed in the real Google Cloud Console and other Google Cloud products using temporary credentials, as opposed to a simulation or demo environment, and take 30-90 minutes to complete (depending on difficulty level). Our goal is to enable you to apply your new skills and be effective immediately in real-world cloud technology settings. Many of these labs, known in Coursera as projects, include a variety of tasks and activities for you to choose from to best fit your needs. Combine bite-size individual labs to create a personalized set of learning and upskilling with clear application in a sandbox environment.

    Labs are available for all skill levels and cover a wide range of topics:
    - Cloud essentials
    - Cloud engineering and architecture
    - Machine learning
    - Data analytics and engineering
    - DevOps

    Here is a roundup of some popular and trending labs right now:
    - Getting Started with Cloud Shell and gcloud
    - Kubernetes Engine: Qwik Start
    - Introduction to SQL for BigQuery and Cloud SQL
    - Migrating a Monolithic Website to Microservices on Google Kubernetes Engine

    Get a feel for the lab experience: Creating a Virtual Machine is one of our most popular labs, taking place directly in the Google Cloud Console. In this beginner-level project, you will learn how to create a Google Compute Engine virtual machine and understand zones, regions, and machine types. It takes 40 minutes to complete and you'll earn a shareable certificate. As an example of more advanced content, Predict Baby Weight with TensorFlow on AI Platform requires experience to train, evaluate, and deploy a machine learning model to predict a baby's weight. The lab activities are completed in a real cloud environment, not in a simulation or demo environment. It takes 90 minutes to complete and you will earn a shareable certificate.

    Kick off your no-cost learning journey today: For direct access to self-paced labs, we recommend starting with Coursera's Collection Page, where you can browse labs/projects by our most popular topics, or explore the full catalog to find the cloud projects that are right for your career goals by browsing Google Cloud 'projects' on Coursera. The month of free Google Cloud learning on Coursera is available from April 28 to May 29, 2022, so join us to evolve your skill set and cloud knowledge. Ready to start learning Google Cloud at no cost for 30 days? Sign up here.

    Related Article: Training more than 40 million new people on Google Cloud skills. To help more than 40 million people build cloud skills, Google Cloud is offering limited-time no-cost access to all training content. Read Article
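    To give a flavor of what the Creating a Virtual Machine lab walks through, the same basic steps can also be done from the gcloud CLI. This is a minimal sketch: the VM name, zone, and machine type are illustrative placeholders, and an authenticated project is required to run it.

```shell
# Create a small Compute Engine VM (name/zone/machine type are placeholders)
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium

# Confirm the instance is up
gcloud compute instances list --filter="name=demo-vm"

# Delete it when finished to avoid charges
gcloud compute instances delete demo-vm --zone=us-central1-a --quiet
```

    The lab covers the same ground through the Cloud Console UI, which is where the exam screenshots tend to come from.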

  • Learn to build batch analytics solutions with new AWS classroom course
    by Kumar Kumaraguruparan (AWS Training and Certification Blog) on April 27, 2022 at 4:00 pm

    Learn more about our new AWS intermediate-level course, Building Batch Data Analytics Solutions on AWS. If you are a data engineer or data architect who builds data analytics pipelines with open-source analytics frameworks, such as Apache Hadoop or Apache Spark, this one-day, virtual classroom course will help you develop these skills. You'll learn to build a modern data architecture using Amazon EMR, an enterprise-grade Apache Spark and Apache Hadoop managed service . . .

  • New courses and updates from AWS Training and Certification in April
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 26, 2022 at 4:20 pm

    New Amazon Web Services (AWS) Training and Certification courses and offerings for cloud learners, AWS customers, and AWS Partners for April 2022. New digital offerings include fundamental and intermediate courses that focus on SAP, managing game workloads, designing blockchain solutions, Amazon Connect, AWS storage and databases, and evaluating migration scenarios. And if you’re interested in building a batch data analytics solution on AWS, there’s a new intermediate-level classroom course . . .

  • 3 tier application gcp terraform code
    by /u/savetheQ (Google Cloud Platform Certification) on April 25, 2022 at 7:48 pm

    Hi folks, does anyone have a sample Git repo with 3-tier application GCP Terraform code? submitted by /u/savetheQ [link] [comments]
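    Until someone shares a full repo, the rough shape of a 3-tier layout in Terraform on GCP looks something like the sketch below. This is not a working project: the project ID, resource names, regions, images, and machine sizes are all placeholder assumptions, and a real deployment would add subnets, firewall rules, load balancing, and state management.

```terraform
# Skeleton of a 3-tier (web / app / database) layout on GCP -- sketch only
provider "google" {
  project = "my-project-id" # placeholder
  region  = "us-central1"
}

# Shared network for all tiers
resource "google_compute_network" "vpc" {
  name                    = "three-tier-vpc"
  auto_create_subnetworks = true
}

# Tier 1: public-facing web server (ephemeral external IP)
resource "google_compute_instance" "web" {
  name         = "web-1"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = google_compute_network.vpc.id
    access_config {} # grants an ephemeral public IP
  }
}

# Tier 2: internal app server (no access_config, so no public IP)
resource "google_compute_instance" "app" {
  name         = "app-1"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = google_compute_network.vpc.id
  }
}

# Tier 3: managed database
resource "google_sql_database_instance" "db" {
  name             = "db-1"
  database_version = "POSTGRES_14"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }
}
```

    Splitting each tier into its own module is the usual next step once the sketch grows beyond a toy.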

  • Professional Cloud Architect - materials recommendations needed.
    by /u/theGrEaTmPm (Google Cloud Platform Certification) on April 24, 2022 at 10:56 am

    Hi, What materials did you use when preparing for Professional Cloud Architect? Do you have any proven materials? How much time did you spend getting ready for the exam? Thanks in advance for your help. submitted by /u/theGrEaTmPm [link] [comments]

  • Bouncing back: shifting from hospitality to cloud
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 22, 2022 at 4:31 pm

    Hear directly from AWS re/Start graduate, Antonio O'Donnell, about his experience with the reskilling program. AWS re/Start is a full-time, classroom-based skills training program that prepares professionals for cloud-based careers.

  • How to prepare for — and ace — Google’s Associate Cloud Engineer exam
    by (Training & Certifications) on April 22, 2022 at 4:00 pm

    Do you want to get out of the server room and into the cloud? Now’s the time to sign up for our Cloud Engineer Learning Path – now with the newly refreshed Preparing for the Associate Cloud Engineer certification course – and start working toward your Associate Cloud Engineer certification. Earning your Associate Cloud Engineer certification sends a strong signal to potential employers about what you can accomplish in Google Cloud. Associate Cloud Engineers can deploy and secure applications and infrastructure, maintain enterprise solutions to ensure they meet performance metrics, and monitor the operations of multiple projects in the cloud. Associate Cloud Engineers have also demonstrated that they can use the Google Cloud Console and the command-line interface to maintain and scale deployed cloud solutions that leverage Google-managed or self-managed services on Google Cloud.

    Many Associate Cloud Engineers come from the on-premises world of racking and stacking servers and are ready to upgrade their skills to the cloud era. Achieving an Associate Cloud Engineer certification is a great step towards growing a career in IT, opening you up to become a cloud developer or architect, cloud security engineer, cloud systems engineer, or network engineer, among others.

    The Associate Cloud Engineer learning path

    Before attempting the Associate Cloud Engineer exam, we recommend that you have 6+ months of hands-on experience with Google Cloud products and solutions. While you’re gaining that experience, a good way to enhance your preparation is to follow the Cloud Engineer Learning Path, which consists of on-demand courses, hands-on labs, and the opportunity to earn skill badges. Here are our recommended steps:

    1. Understand what’s on the exam: Review the exam guide to determine if your skills align with the topics on the exam.
    2. Create your study plan with Preparing for Your Associate Cloud Engineer Journey: This course helps you structure your preparation for the Associate Cloud Engineer exam. You will learn about the Google Cloud domains covered by the exam and how to create a study plan to improve your domain knowledge.
    3. Start preparing: Follow the Cloud Engineer learning path, where you’ll dive into Google Cloud services such as Compute Engine, Google Kubernetes Engine, App Engine, Cloud Storage, Cloud SQL, and BigQuery.
    4. Earn skill badges: Demonstrate your growing Google Cloud skills by sharing your earned skill badges along the way. Skill badges that will help you prepare for the Associate Cloud Engineer certification include: Perform Foundational Infrastructure Tasks in Google Cloud; Automating Infrastructure on Google Cloud with Terraform; Create and Manage Cloud Resources; and Set Up and Configure a Cloud Environment in Google Cloud.
    5. Review additional resources: Test your knowledge with some sample exam questions here.
    6. Certify: Finally, register for the exam and select whether to take it remotely or at a nearby testing center.

    Start your prep to become an Associate Cloud Engineer: take the next step towards becoming a cloud engineer and develop the recommended hands-on experience by earning the recommended skill badges. Register here and get 30 days of free access to the Cloud Engineer learning path on Google Cloud Skills Boost!

    Related article: This year, resolve to become a certified Professional Cloud Developer – here’s how. Follow this Google Cloud Skills Boost learning path to help you earn your Google Cloud Professional Developer certification.
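    Much of the Associate Cloud Engineer exam comes down to knowing which gcloud invocation accomplishes a given task. As a small sketch for drilling that (the `gcloud_cmd` helper and the project/VM names are made up for illustration; `gcloud compute instances create` and `list` are standard gcloud subcommands):

```python
# Hypothetical study helper (not part of any Google SDK): assembles gcloud
# command lines for tasks an Associate Cloud Engineer practices, to drill
# the "gcloud <service> <resource> <verb>" shape of the CLI.
def gcloud_cmd(*args, project=""):
    """Return a gcloud invocation as an argument list for subprocess.run."""
    cmd = ["gcloud", *args]
    if project:
        cmd += ["--project", project]
    return cmd

# Deploy a virtual machine, then list running instances.
create_vm = gcloud_cmd("compute", "instances", "create", "demo-vm",
                       "--zone", "us-central1-a", project="my-project")
list_vms = gcloud_cmd("compute", "instances", "list", project="my-project")
```

    Passing one of these lists to subprocess.run would execute it against a real project; for exam prep, the point is internalizing the verb structure without touching live infrastructure.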

  • New to GCP and looking for a study group!
    by /u/sulliv16 (Google Cloud Platform Certification) on April 19, 2022 at 4:15 pm

    As the title states, I am starting my venture into GCP and would love to get connected with a few people to help with accountability and share insight as we learn! I have around 3 years of experience with AWS and hold the Solutions Architect Professional and Security Specialty certs there. I know next to nothing about GCP, but am very familiar with cloud concepts, and it has been my work focus for the past 2 years. Let me know if you would be interested in linking up and starting to learn together! Thanks all submitted by /u/sulliv16 [link] [comments]

  • GCP Professional Cloud Architect Certification Blog.
    by /u/HamanSharma (Google Cloud Platform Certification) on April 17, 2022 at 12:24 am

    Check out the preparation guide for GCP Cloud Architect Certification with tips and resources - https://blog.reviewnprep.com/gcp-cloud-architect. Hope this helps everyone preparing for this certification. submitted by /u/HamanSharma [link] [comments]

  • Introducing the Professional Cloud Database Engineer certification
    by (Training & Certifications) on April 12, 2022 at 3:00 pm

    Today, we’re pleased to announce the new Professional Cloud Database Engineer certification, in beta, to help database engineers translate business and technical requirements into scalable and cost-effective database solutions. By participating in the beta, you will directly influence and enhance the learning and career path for other Cloud Database Engineers. And upon passing the exam, you will become one of the first Google Cloud Certified Cloud Database Engineers in the industry. The cloud database space is evolving rapidly, with the worldwide cloud database market projected to reach $68.5 billion by 2026. As more databases move to fully managed cloud database services, the traditional database engineer is now being tasked to handle more nuanced and advanced functions. In fact, there is a massive need for database engineers to lead strategic decision-making and distinguish themselves with a more developed and advanced skill set than what the industry previously called for.

    Why the certification is important

    Cloud Database Engineers are critical to the success of your organization, and that’s why this new certification from Google Cloud is so important. These engineers are uniquely skilled at designing, planning, testing, implementing, and monitoring databases, including migration processes. Additionally, they provide the right guidance about which databases are best for a company’s specific use cases, and they’re able to guide developers when making decisions about which databases to use when building applications. These engineers lead migration efforts while ensuring customers are getting the most out of their database investment.

    This new certification will validate a developer’s ability to:

    • Design scalable cloud database solutions
    • Manage a solution that can span multiple databases
    • Plan and execute on database migrations
    • Deploy highly scalable databases in Google Cloud

    Before your exam, be sure to check out the exam guide to familiarize yourself with the topics covered, and round out your skills by following the Database Engineer Learning Path, which includes online training, in-person classes, hands-on labs, and additional resources to help you prepare for your exam. I am excited to welcome you to the program. Sign up now and save 40% on the cost of the certification.

    Related article: Google Cloud’s key investment areas to accelerate your database transformation. This blog focuses on the 6 key database investment areas that help you accelerate your digital transformation journey.

  • Now accepting applications for the AWS AI & ML Scholarship program
    by Anastacia Padilla (AWS Training and Certification Blog) on April 11, 2022 at 5:09 pm

    Calling all high school and college students who are at least 16 years old and underserved or underrepresented in tech globally – we invite you to apply for the Amazon Web Services (AWS) Artificial Intelligence (AI) & Machine Learning (ML) Scholarship Program. The AWS AI & ML Scholarship Program, in collaboration with Intel and Udacity, will launch this summer seeking to inspire, motivate, and educate students about AI and ML to nurture a diverse workforce of the future . . .

  • Using a scientific thought process to improve customer experiences
    by Marwan Al Shawi (AWS Training and Certification Blog) on April 8, 2022 at 4:01 pm

    Learn approaches and tips to better understand the needs of the end customer by asking the right questions. This blog shares two techniques of critical thinking to help you break down a complex scenario into smaller parts, allowing you to better analyze the situation and take appropriate action . . .

  • Announcing new certification: AWS Certified: SAP on AWS – Specialty
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 7, 2022 at 8:41 pm

    We're introducing our latest AWS Certification: AWS Certified: SAP on AWS – Specialty. Showcase your expertise in designing, implementing, migrating, and operating SAP workloads on AWS. Register today!

  • Train your organization on Google Cloud Skills Boost
    by (Training & Certifications) on April 7, 2022 at 1:00 pm

    Enterprises are moving to cloud computing at an accelerated pace, with Gartner estimating that 85% of enterprises will adopt a cloud-first principle by 2025 (Gartner®, Gartner says Cloud will be the Centerpiece of the New Digital Experience, Laurence Goasduff, November 10, 2021). There are countless reasons why enterprises are moving to the cloud, from reduced IT costs and increased scalability to improved security and efficiency. However, this rapid change has presented a challenge: how will organizations build the skills they need to accelerate cloud adoption within their organization? The answer is comprehensive training. In March 2022, we commissioned IDC, an independent market intelligence firm, to write a white paper that studied the impact of comprehensive training and certification on cloud adoption. When organizations are trained, they see:

    • Significantly greater improvement in top business priorities: 133% greater improvement in employee retention and 56% greater improvement in customer experience scores
    • Accelerated cloud adoption, reduced time to value, and greater ROI: trained organizations are 10X more likely to implement cloud in 2 years
    • Greater performance improvements in areas like leveraging data analytics, protecting data, and jumpstarting innovation

    (IDC White Paper, sponsored by Google Cloud Learning: “To Maximize Your Cloud Benefits, Maximize Training,” Doc #US48867222, March 2022.) To learn more, download the white paper.

    Build Team Skills in Google Cloud Skills Boost

    Coupling the research above with our commitment to equip more than 40 million people with cloud skills, we are excited to provide business organizations with a comprehensive platform to help address their teams’ cloud skilling needs. Google Cloud Skills Boost combines award-winning learning experiences with the ability to earn credentials to validate learning, which can be managed and delivered directly by Google Cloud with enterprise-level features. These features allow organization leaders to manage access and user permissions for their team and drive effective business outcomes using learning analytics. In addition, administrators will be able to grant access to the Google Cloud content catalog to individuals on their team. This catalog includes hundreds of courses, labs, and credentials authored by Google Cloud experts to help their teams learn and validate their cloud skills. Organizations can trial these features today through an exclusive no-cost trial (based on eligibility).

    Ready to get started?

    Google Cloud Learning is committed to helping you accelerate the rate of cloud adoption in your organization by enabling team training. Contact your account team to learn more about your eligibility for the no-cost trial and how to set up your organization on Google Cloud Skills Boost. New to Google Cloud? Visit our team training page and complete the learning assessment to understand your team’s training needs and get connected with an account team. Click here to learn more about how comprehensive training impacts cloud adoption.

    GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

    Related article: Women Techmakers journey to Google Cloud certification. Google Cloud is creating more opportunities in the credentialing space with a certification journey for Ambassadors of the Women Techmake…

  • Looking for Good Practice Exams
    by /u/zeeplereddit (Google Cloud Platform Certification) on April 3, 2022 at 10:15 pm

    I have done some googling on practice exams for the Google Cloud Digital Leader exam and I have only come across the Udemy offering. I have done Udemy courses before but I have no idea what their practice exams are like. Is there anyone here with any advice or suggestions in this regard? submitted by /u/zeeplereddit [link] [comments]

  • General availability: Azure Database for PostgreSQL - Hyperscale (Citus) now FedRAMP High compliant
    by Azure service updates on March 30, 2022 at 4:01 pm

    Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure, is now compliant with FedRAMP High.

  • Best Podcasts for Cert Seekers?
    by /u/zeeplereddit (Google Cloud Platform Certification) on March 24, 2022 at 10:07 pm

    Hi folks, I am greatly looking forward to embarking on my new adventure of getting several Google certs. To that end, I am wondering: what are the best podcasts to listen to during my commute back and forth from work? The types of podcasts I am hoping for include those that discuss the exams, go over sample questions in detail, interview people who have taken the test, and also any podcasts that discuss the concepts I will be wrapping my head around while I go after the certs. Thanks in advance! submitted by /u/zeeplereddit [link] [comments]

  • Accelerating Government Compliance with Google Cloud’s Professional Service Organization
    by (Training & Certifications) on March 21, 2022 at 5:00 pm

    Did you know that by 2025, enterprise IT spending on public cloud computing will overtake traditional IT spending? In fact, 51% of IT spend in application software, infrastructure software, business process services, and system infrastructure will transition to the public cloud, compared to 41% in 2022.¹ As enterprises continue to rapidly shift to the cloud, government agencies must prioritize and accelerate security and compliance implementation. In May 2021, the White House issued an Executive Order requiring US Federal agencies to accelerate cloud adoption, embrace security best practices, develop plans to implement Zero Trust architectures, and map implementation frameworks to FedRAMP. The Administration’s focus on secure cloud adoption marks a critical shift to prioritizing cybersecurity at scale. Google Cloud’s Public Sector Professional Services Organization (PSO) has committed to helping customers meet security and compliance requirements in the cloud through specialized consulting engagements.

    Accelerating Authority to Operate (ATO)

    The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011 as a government-wide program that promotes the adoption of secure cloud services across the federal government. FedRAMP provides a standardized approach to security and risk assessment for cloud technologies and federal agencies. US Federal agencies are required to utilize and implement FedRAMP cloud service offerings as part of the “Cloud First” federal cloud computing strategy. While Google Cloud provides a FedRAMP-authorized cloud services platform and a robust catalog of FedRAMP-approved products and services (92 services and counting), customers are still tasked with achieving Agency ATO for the products and services they use, and Google Cloud provides many resources to assist customers with this journey. Google Cloud’s FedRAMP package can be accessed by completing the FedRAMP Package Access Request Form and submitting it to info@fedramp.gov. Additionally, customers can use Google’s NIST 800-53 ATO Accelerator as a starting point for documenting control implementation. Finally, Google Cloud’s Public Sector PSO offers the following strategic consulting engagements to help customers streamline the Agency ATO process.

    • Cloud Discover: FedRAMP is a six-week interactive workshop to support customers that are just getting started with the ATO process on Google Cloud. Customers are educated on FedRAMP fundamentals, Google’s security and compliance posture, and how to approach ATO on Google Cloud. Through deep-dive interviews and design sessions, PSO helps customers craft an actionable ATO plan, assess FedRAMP readiness, and develop a conceptual ATO boundary. This engagement helps organizations establish a clear understanding and roadmap for FedRAMP ATO on Google Cloud.
    • FedRAMP Security Review is a ten- to twelve-week engagement that aids customers in FedRAMP operational readiness. PSO consultants perform detailed FedRAMP architecture reviews to identify potential gaps in NIST 800-53 security control implementation and Google Cloud secure architecture best practices. Findings from the security reviews are shared with the customer along with configuration guidance and recommendations. This engagement helps organizations prepare for the third-party or independent security assessment that is required for FedRAMP ATO.
    • Cloud Deploy: FedRAMP is a multi-month engagement designed to help customers document the details of their FedRAMP System Security Plan (SSP) and corresponding NIST 800-53 security controls, in preparation for Agency ATO on Google Cloud at FedRAMP Low, Moderate, or High. PSO collaborates with customers to develop a detailed technical infrastructure design document and security control matrix capturing evidence of the FedRAMP system architecture, security control implementation, data flows, and system components. PSO can also partner with a third-party assessment organization (3PAO) or an independent assessor (IA) to support customer efforts for FedRAMP security assessment. This engagement helps customer system owners prepare for Agency ATO assessment and package submission.

    Developing a Zero Trust Strategy

    In addition to providing FedRAMP enablement, Public Sector PSO has partnered with the Google Cloud Chief Information Security Officer (CISO) team to assist organizations with developing a zero trust architecture and strategy. Zero Trust Foundations is a seven-week engagement co-delivered by Google Cloud’s CISO and PSO teams. CISO and PSO educate customers on zero trust fundamentals, Google’s journey to zero trust through BeyondCorp, and defense-in-depth best practices. The CISO team walks customers through a Zero Trust Assessment (ZTA) to understand the organization’s current security posture and maturity. Insights from the ZTA enable the CISO team to work with the customer to identify an ideal first-mover workload for zero trust adoption. Following the CISO ZTA, PSO facilitates a deep-dive Zero Trust Workshop (ZTW), collaborating with key customer stakeholders to develop a NIST 800-207-aligned, cloud-agnostic zero trust architecture for the identified first-mover workload. The zero trust architecture is part of a comprehensive zero trust strategy deliverable that is based on focus areas called out in the Office of Management and Budget (OMB) Federal Zero Trust Strategy released in January 2022.

    Scaling Secure Cloud Adoption with PSO

    Public Sector PSO enables customer success by sharing our technical expertise and providing cloud strategy, implementation guidance, training, and enablement using our proven methodology. As enterprise IT, operations, and organizational models continue to evolve, our goal is to help government agencies accelerate their security and compliance journeys in the cloud. To learn more about the work we are doing with the federal government, visit cloud.google.com/solutions/federal-government.

    ¹ Gartner Says More Than Half of Enterprise IT Spending in Key Market Segments Will Shift to the Cloud by 2025

  • GCP - PCNE (Thoughts on ACG/A cloud guru) training material
    by /u/friday963 (Google Cloud Platform Certification) on March 20, 2022 at 1:21 am

    Has anyone here done the PCNE exam and used A Cloud Guru as their primary study resource? If so, what are your thoughts on the quality of the study material? Is it enough to pass the cert, or were many more external resources needed? So far I've done Qwiklabs and ACG for the PCNE exam; I think Qwiklabs has a better lab environment, but ACG has a better video series. Either way, I've not taken the exam yet but have scheduled it for later this month and am trying to gauge the level of difficulty. submitted by /u/friday963 [link] [comments]

  • exam of GCP Professional Cloud Architect
    by /u/meokey (Google Cloud Platform Certification) on March 11, 2022 at 9:43 pm

    I'm working through the PCA courses and wondering what the exam will be like. Is there a hands-on lab test in the exam? Do I have to memorize all these command-line tools and their arguments to pass? Thanks. submitted by /u/meokey [link] [comments]

  • Which video course?
    by /u/Bollox427 (Google Cloud Platform Certification) on March 8, 2022 at 8:40 pm

    I would like to learn the fundamentals of GCP and then move on to Security and ML. I know Coursera does courses, but is there anyone else of note? How do other course suppliers compare to Coursera? Is Coursera seen as an official education partner for Google Cloud? submitted by /u/Bollox427 [link] [comments]

  • Women Techmakers journey to Google Cloud certification
    by (Training & Certifications) on March 8, 2022 at 5:00 pm

    In many places across the globe, March is celebrated as Women’s History Month, and March 8th, specifically, marks the day known around the world as International Women’s Day. Here at Google, we’re excited to celebrate women from all backgrounds and are committed to increasing the number of women in the technology industry. Google’s Women Techmakers community provides visibility, community, and resources for women in technology to drive participation and innovation in the field. This is achieved by hosting events, launching resources, and piloting new initiatives with communities and partners globally. By joining Women Techmakers, you'll receive regular emails with access to resources, tools and opportunities from Google and Women Techmakers partnerships to support you in your career.Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Participants will have the opportunity to take part in a free-of-charge, 6-week cohort learning journey, including: weekly 90-minute exam guide review sessions led by a technical mentor, peer-to-peer support in the form of an Online Community, and 12 months access to Google Cloud's on-demand learning platform, Google Cloud Skills Boost. Upon completion of the coursework required in the learning journey, participants will receive a voucher for the Associate Cloud Engineer certification exam. This program, and other similar offerings such as Cloud Career Jumpstart, and the learning journey for members transitioning out of the military, are just a few examples of the investment Google Cloud is making into the future of the technology workforce. Are you interested in staying in the loop with future opportunities with Google Cloud? 
    Join our community here.

    Related article: Cloud Career Jump Start: our virtual certification readiness program. Cloud Career Jump Start is Google Cloud’s first virtual Certification Journey Learning program for underrepresented communities.

  • Study path for GCP Professional Cloud Architect
    by /u/Prime367 (Google Cloud Platform Certification) on March 7, 2022 at 4:50 pm

    Hi folks, thanks for your time. I have been working as an AWS Architect for 4-5 years and have several AWS certifications, including the Solutions Architect Professional. I have been supporting a GCP implementation for the past year or so, and want to go for the GCP Cloud Architect certification now. I need some help with two questions: Which courses are best for the GCP Cloud Architect exam? And which practice tests should I do? I know it's difficult to clear certifications without doing any practice tests. Thanks in advance. submitted by /u/Prime367 [link] [comments]

  • which certification should i do?
    by /u/ParticularFactor353 (Google Cloud Platform Certification) on March 7, 2022 at 4:34 pm

    Background: I am a fresher who just joined a company and got the ETL domain, and I have been working on BigQuery scripts, Composer, and Dataflow for the past 6 months. Now I want to do some GCP certification, so where should I begin? submitted by /u/ParticularFactor353 [link] [comments]

  • AWS & Azure Certified, how to start on GCP ACE? (Advice requested)
    by /u/skelldog (Google Cloud Platform Certification) on March 6, 2022 at 5:34 am

    Sorry, I know some of this has been discussed, but as things change regularly, I would appreciate any suggestions people are willing to share. I currently hold the three Associate certs from AWS and the Azure Administrator Associate. I have been in IT for longer than I care to admit. I was thinking of bypassing Cloud Digital Leader and going directly to ACE. Between work and other options, I have access to most of the popular training programs (ITPro, ACloudGuru, Lynda, Qwiklabs, Whizlabs, Udemy). I see the most recommendations for the Udemy course by Dan Sullivan; is this my best choice? My time is always limited, and I would like to pick the course that gives the most bang for the buck (or time, in this case). I already purchased the Tutorials Dojo self-test last time they had a sale (Jon Bonso does some great work!). I would appreciate any other suggestions anyone is willing to offer. Thanks for reading this! submitted by /u/skelldog [link] [comments]

  • Cloud Digital Leader exam vouchers
    by /u/pillairohit (Google Cloud Platform Certification) on March 3, 2022 at 5:39 pm

    Hi all. Does GCP have online webinars/trainings that give attendees exam vouchers, similar to Microsoft Azure online webinars for AZ-900? I'm asking for the Cloud Digital Leader certification exam. Thank you for your help and time. submitted by /u/pillairohit [link] [comments]

  • General availability: Asset certification in Azure Purview data catalog
    by Azure service updates on February 28, 2022 at 5:00 pm

    Data stewards can now certify assets that meet their organization's quality standards in the Azure Purview data catalog.

  • GCP Associate Cloud Engineer Study Guide
    by /u/ravikirans (Google Cloud Platform Certification) on February 21, 2022 at 12:08 pm

    https://ravikirans.com/gcp-associate-cloud-engineer-exam-study-guide/ To view all the other GCP study guides, check here: https://ravikirans.com/category/gcp/ submitted by /u/ravikirans [link] [comments]

  • Sentinel Installation
    by /u/ribcap (Google Cloud Platform Certification) on February 20, 2022 at 7:30 pm

    Hey everyone! So I'm in the process of scheduling an exam and have created my biometric profile, but I can't seem to install Sentinel. Anyone else have this issue? I've tried Chrome, Firefox, and even Safari. I click on the install link and literally nothing happens; nothing gets downloaded or anything. Any ideas? Edit: I have not actually scheduled the exam yet, just trying to get everything else in place first. Should I schedule the exam prior to installing Sentinel? Rib submitted by /u/ribcap [link] [comments]

  • Gcp exam fee reimbursement
    by /u/Aamirmir111 (Google Cloud Platform Certification) on February 17, 2022 at 2:15 pm

    If one clears a GCP certification exam, is there any policy for fee reimbursement? submitted by /u/Aamirmir111 [link] [comments]

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 16, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Google Cloud Fundamentals Full Course For Beginners Only 2022 | GCP Certified
    by /u/ClayDesk (Google Cloud Platform Certification) on February 14, 2022 at 12:30 pm

    submitted by /u/ClayDesk [link] [comments]

  • Google Cloud Platform Service Comparison
    by /u/lervz_ (Google Cloud Platform Certification) on February 12, 2022 at 3:35 pm

    To anyone who has AWS/Azure background and is new to Google Cloud Platform, you will find this service comparison made by Google very helpful. AWS, Azure, GCP Service Comparison And for those who are preparing for the Google Associate Cloud Engineer Certification exam, check these resources from Tutorials Dojo. Google Certified Associate Cloud Engineer Practice Exams Google Certified Associate Cloud Engineer Study Guide Google Cloud Platform Cheat Sheets submitted by /u/lervz_ [link] [comments]

  • Unified data and ML: 5 ways to use BigQuery and Vertex AI together
    by (Training & Certifications) on February 9, 2022 at 4:00 pm

    Are you storing your data in BigQuery and interested in using that data to train and deploy models? Or maybe you’re already building ML workflows in Vertex AI, but looking to do more complex analysis of your model’s predictions? In this post, we’ll show you five integrations between Vertex AI and BigQuery, so you can store and ingest your data; build, train and deploy your ML models; and manage models at scale with built-in MLOps, all within one platform. Let’s get started!April 2022 update: You can now register and manage BigQuery ML models with Vertex AI Model Registry, a central repository to manage and govern the lifecycle of your ML models. This enables you to easily deploy your BigQuery ML models to Vertex AI for real time predictions. Learn more in this video about “ML Ops in BigQuery using Vertex AI.”Import BigQuery data into Vertex AIIf you’re using Google Cloud, chances are you have some data stored in BigQuery. When you’re ready to use this data to train a machine learning model, you can upload your BigQuery data directly into Vertex AI with a few steps in the console:You can also do this with the Vertex AI SDK:code_block[StructValue([(u'code', u'from google.cloud import aiplatform\r\n\r\ndataset = aiplatform.TabularDataset.create(\r\n display_name="my-tabular-dataset",\r\n bq_source="bq://project.dataset.table_name",\r\n)'), (u'language', u'')])]Notice that you didn’t need to export our BigQuery data and re-import it into Vertex AI. Thanks to this integration, you can seamlessly connect your BigQuery data to Vertex AI without moving your data from the cloud.Access BigQuery public datasets This dataset integration between Vertex AI and BigQuery means that in addition to connecting your company’s own BigQuery datasets to Vertex AI, you can also utilize the 200+ publicly available datasets in BigQuery to train your own ML models. 
BigQuery’s public datasets cover a range of topics, including geographic, census, weather, sports, programming, healthcare, news, and more. You can use this data on its own to experiment with training models in Vertex AI, or to augment your existing data. For example, maybe you’re building a demand forecasting model and find that weather impacts demand for your product; you can join BigQuery’s public weather dataset with your organization’s sales data to train your forecasting model in Vertex AI. Below, you’ll see an example of importing the public weather data from last year to train a weather forecasting model.

Accessing BigQuery data from Vertex AI Workbench notebooks
Data scientists often work in a notebook environment to do exploratory data analysis, create visualizations, and perform feature engineering. Within a managed Workbench notebook instance in Vertex AI, you can directly access your BigQuery data with a SQL query, or download it as a Pandas DataFrame for analysis in Python. Below, you’ll see how you can run a SQL query on a public London bikeshare dataset, then download the results of that query as a Pandas DataFrame to use in your notebook.

Analyze test prediction data in BigQuery
That covers how to use BigQuery data for training models in Vertex AI. Next, we’ll look at integrations between Vertex AI and BigQuery for exporting model predictions. When you train a model in Vertex AI using AutoML, Vertex AI will split your data into training, test, and validation sets, and evaluate how your model performs on the test data. You also have the option to export your model’s test predictions to BigQuery so you can analyze them in more detail. Then, when training completes, you can examine your test data and run queries on test predictions. This can help you determine areas where your model didn’t perform as well, so you can take steps to improve your data the next time you train your model.

Export Vertex AI batch prediction results
When you have a trained model that you’re ready to use in production, there are a few options for getting predictions from that model with Vertex AI:
- Deploy your model to an endpoint for online prediction
- Export your model assets for on-device prediction
- Run a batch prediction job on your model

For cases in which you have a large number of examples you’d like to send to your model for prediction, and in which latency is less of a concern, batch prediction is a great choice. When creating a batch prediction in Vertex AI, you can specify a BigQuery table as the source and destination for your prediction job: this means you’ll have one BigQuery table with the input data you want to get predictions on, and Vertex AI will write the results of your predictions to a separate BigQuery table.

With these integrations, you can access BigQuery data, and build and train models. From there, Vertex AI helps you:
- Take these models into production
- Automate the repeatability of your model with managed pipelines
- Manage your model’s performance and reliability over time
- Track lineage and artifacts of your models for easy-to-manage governance
- Apply explainability to evaluate feature attributions

What’s next?
Ready to start using your BigQuery data for model training and prediction in Vertex AI? Check out these resources:
- Codelab: Training an AutoML model in Vertex AI
- Codelab: Intro to Vertex AI Workbench
- Documentation: Vertex AI batch predictions
- Video Series: AI Simplified: Vertex AI
- GitHub: Example Notebooks
- Training: Vertex AI: Qwik Start

Are there other BigQuery and Vertex AI integrations you’d like to see? Let Sara know on Twitter at @SRobTweets.

Related Article: What is Vertex AI? Developer Advocates Priyanka Vergadia and Sara Robinson explain how Vertex AI supports your entire ML workflow.
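The notebook workflow described above (a SQL query on the public London bikeshare dataset, downloaded as a Pandas DataFrame) can be sketched with the BigQuery Python client. This is a minimal sketch, not the article's exact notebook: the query shape and the `row_limit` parameter are illustrative assumptions, while `bigquery-public-data.london_bicycles.cycle_hire` is a real BigQuery public table. Running the fetch requires credentials, which a managed Workbench notebook provides automatically.

```python
# Sketch: query a BigQuery public dataset and load the result as a
# Pandas DataFrame, as you might do in a Vertex AI Workbench notebook.

def bikeshare_query(row_limit: int = 1000) -> str:
    """Build an illustrative SQL query against the public London bikeshare table."""
    return (
        "SELECT start_station_name, COUNT(*) AS num_trips "
        "FROM `bigquery-public-data.london_bicycles.cycle_hire` "
        "GROUP BY start_station_name "
        f"ORDER BY num_trips DESC LIMIT {row_limit}"
    )

def fetch_dataframe(row_limit: int = 1000):
    """Run the query and download the results as a Pandas DataFrame.

    Needs the google-cloud-bigquery package and application-default
    credentials (both preinstalled/configured in Workbench images).
    """
    from google.cloud import bigquery
    client = bigquery.Client()
    # QueryJob.to_dataframe() pulls the result set into pandas.
    return client.query(bikeshare_query(row_limit)).to_dataframe()

if __name__ == "__main__":
    print(bikeshare_query(10))
```

Inside a Workbench notebook you could equivalently use the `%%bigquery` cell magic shipped with the BigQuery client library to run the same SQL and capture the result as a DataFrame.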

  • Course, videos, or a link for earning the GCP Cloud Engineer Associate
    by /u/ahelord (Google Cloud Platform Certification) on February 5, 2022 at 3:26 am

    Hi, I'd like to ask: what is the best course, videos, or site for learning GCP and passing the Associate certification?

  • Access role-based Google Cloud training free of charge
    by (Training & Certifications) on February 3, 2022 at 5:00 pm

    Google Cloud is now offering 30 days of no-cost access to Google Cloud Skills Boost, the definitive destination for skills development, to complete role-based training. Choose from the following eight learning paths, which include interactive labs and opportunities to earn skill badges to demonstrate your cloud knowledge: Getting Started with Google Cloud, Cloud Architect, Cloud Engineer, Data Analyst, Data Engineer, DevOps Engineer, Machine Learning Engineer, and Cloud Developer. Read below to find out more about each learning path.

    Getting Started with Google Cloud
    In this path, you’ll learn about Google Cloud fundamentals such as core infrastructure, big data, and machine learning (ML). You’ll also find out how to write gcloud commands, use Cloud Shell, deploy virtual machines, and run containerized applications on Google Kubernetes Engine (GKE).

    Cloud Architect
    If you’re looking to learn how to design, develop, and manage cloud solutions, this is the path for you. You’ll learn how to perform infrastructure tasks using Cloud Monitoring, Cloud Identity and Access Management (Cloud IAM), and more. The path ends with how to architect with Google Compute Engine and GKE. For a guided walkthrough of how to get started with Cloud IAM and Monitoring, register here to join me on February 10. You’ll also have a chance to get your questions answered live by Google Cloud experts via chat.

    Cloud Engineer
    To learn how to plan, configure, set up, and deploy cloud solutions, take this learning path. You’ll learn how to get started with Google Compute Engine, Terraform in a cloud environment, GKE, and more.

    Data Analyst
    This learning path will teach you how to gather and analyze data to identify trends and develop valuable insights to help solve problems. You’ll be introduced to BigQuery, Looker, LookML, BigQuery ML, and Data Catalog.

    Data Engineer
    Interested in designing and building systems that collect the data used for business decisions? Select this path. You’ll learn how to modernize data lakes and data warehouses with Google Cloud. Afterwards, you’ll also discover how to use Dataflow for serverless data processing, and more.

    DevOps Engineer
    A DevOps Engineer is responsible for defining and implementing best practices for efficient and reliable software delivery and infrastructure management. This learning path will show you how to build an SRE culture, use the Google Cloud Operations Suite for DevOps, and more.

    Machine Learning Engineer
    Choose this path for courses and labs on how to design, build, productionize, optimize, operate, and maintain ML systems. You’ll discover how to use TensorFlow, MLOps tools, Vertex AI, and more.

    Cloud Developer
    A Cloud Developer designs, builds, analyzes, and maintains cloud-native applications. This path will teach you how to use Cloud Run and Firebase for serverless app development. You’ll also learn how to deploy to Kubernetes in Google Cloud.

    To learn more about the basics of Google Cloud infrastructure before getting started with a learning path, register here. Ready for your role-based training? Sign up here.

    Related Article: 2022 Resolution: Learn Google Cloud, free of charge. Technical practitioners and developers can start 2022 with free introductory training on how to use Google Cloud.
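As a taste of the hands-on material the Getting Started and Cloud Engineer paths cover (gcloud commands, deploying virtual machines, running containers on GKE), here is a command sketch. The resource names, zone, machine type, and sample image are placeholder assumptions, not part of any specific lab, and the commands need an active Google Cloud project and billing to actually run.

```shell
# Deploy a virtual machine on Compute Engine
# (demo-vm, zone, and machine type are placeholders)
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium

# Create a GKE cluster and run a containerized application on it
gcloud container clusters create demo-cluster --zone=us-central1-a
gcloud container clusters get-credentials demo-cluster --zone=us-central1-a
kubectl create deployment hello \
    --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
```

These are the same command families (`gcloud compute`, `gcloud container`, `kubectl`) that the labs in these learning paths exercise from Cloud Shell.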

  • General availability: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 2, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Does anyone have gcp exam vouchers? Or anyone knows where can we get it from?
    by /u/Aamirmir111 (Google Cloud Platform Certification) on February 1, 2022 at 11:36 am


  • Let’s have a chat about using dumps
    by /u/whooyeah (Microsoft Azure Certifications) on January 31, 2022 at 9:49 pm

    This keeps coming up recently, so it’s important we have a sticky chat about it that everyone can see. Dumps are essentially cheating. They go against what the exams were designed to do in teaching you Azure skills. For this reason, they are also against Microsoft’s terms of service for taking the exam.

    It’s annoying as a professional because you will be in a job interview and hear the hiring manager say things like “MCP exams are worthless because everyone just uses dumps,” which is heartbreaking when you have spent so much time studying the subject matter and validating your skills with the exam. As a hiring manager, it is annoying because I’ve interviewed candidates in the past with an MCSD and it was clear they had no usable knowledge because they cheated with dumps.

    You will notice rule 1 in the sidebar. Breaking it will result in a ban.

  • This year, resolve to become a certified Professional Cloud Developer – here’s how
    by (Training & Certifications) on January 28, 2022 at 5:00 pm

    Do you have a New Year’s resolution to improve your career prospects? Sign up here for 30 days of no-cost access to Google Cloud Skills Boost to help you on your way to becoming a certified Professional Cloud Developer. According to third-party IT training firm Global Knowledge, two Google Cloud Certified Professional certifications topped its list of the highest-paid IT certifications in 2021.

    Once you register, you’ll have an opportunity to take the Cloud Developer learning path, which consists of on-demand labs and courses covering Google Cloud infrastructure fundamentals, application development in the cloud, security, monitoring and troubleshooting, Kubernetes, Cloud Run, Firebase, and more. Along the way, you’ll have an opportunity to earn skill badges to demonstrate your cloud knowledge and access resources to help you prepare for the Professional Cloud Developer certification.

    For example, once you’ve completed the Google Cloud Fundamentals: Core Infrastructure course, in person or on-demand, you can take the Getting Started With Application Development course, where you’ll learn how to design and develop cloud-native applications that integrate managed services from Google Cloud, including Cloud Client Libraries, the Cloud SDK, and Firebase SDKs, get an overview of your storage options, and learn best practices for using Datastore and Cloud Storage.

    We’re also thrilled to announce that one of the most popular trainings in the Cloud Developer path, Application Development with Cloud Run, is now available on-demand, in addition to via live instruction. This is a great chance to get up to speed on this fully managed, serverless compute platform at your own pace. Cloud Run marries the goodness of serverless and containers, and is fast becoming one of the most powerful ways to build and run a true cloud-native application.

    Moving down the proposed learning path, you can show off your Google Cloud chops with skill badges that you can display as part of your Google Developer Profile alongside your membership in the Google Cloud Innovators program, on social media, and on your resumé. There are a wide variety of interesting skill badges for cloud developers, like the Serverless Cloud Run Development Quest or Deploy to Kubernetes in Google Cloud, and many of them take just a couple of hours to complete.

    With these classes under your belt and skill badges on your profile, you’ll be in a good place to start preparing for the Professional Cloud Developer certification exam, using the proposed exam guide and sample questions to show the way. Here’s to earning your certification in 2022, and to a great future!

    Related Article: 2022 Resolution: Learn Google Cloud, free of charge. Technical practitioners and developers can start 2022 with free introductory training on how to use Google Cloud.

  • Anybody taken the Network Professional certification after Jan 5, 2022?
    by /u/yasarfa (Google Cloud Platform Certification) on January 20, 2022 at 11:15 pm


  • GCDL Other practice materials/exams?
    by /u/zoochadookdook (Google Cloud Platform Certification) on January 19, 2022 at 11:41 pm

    Hey all - I'm taking my Cloud Digital Leader exam this Saturday, and after watching Google's woefully lacking YouTube series I've gone through the ExamPro course and will be doing its practice exams tomorrow. But I'm wondering if there's any other decent material out there someone could recommend? It seems like the Cloud Digital Leader doesn't have near the exposure that most of the other certifications do, but I'm hoping someone has a prep guide that has helped them. Thanks a ton!

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus): New certifications
    by Azure service updates on January 19, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • GCP ACE last minute tips
    by /u/Pyro1934 (Google Cloud Platform Certification) on January 19, 2022 at 4:27 am


  • Proctored exam from a MacBook
    by /u/iuyg88i (Google Cloud Platform Certification) on January 17, 2022 at 4:30 am

    I am taking my GCP Cloud Digital Leader exam in a couple of weeks and wanted to know if anyone has had a bad experience taking the exam on a Mac. A couple of colleagues said they weren’t able to install the Webassessor tool on a Mac and had to do it on a Windows machine. Sorry, there are no test centres where I live, so the only option is a proctored exam.

  • Technical Training Made Easy and Accessible, the Google Cloud way
    by (Training & Certifications) on January 14, 2022 at 12:40 pm

    Cloud engineers face a constant barrage of new cloud services, products, and innovations. By late 2021, Google Cloud alone had released thousands of new features across hundreds of services. Couple this with other technology and service releases, and it quickly becomes a herculean task for engineers to navigate, consume, and stay current on the ever-changing technology landscape. We have heard from engineers that this often leads to anxiety and frustration as they struggle to keep up. They are faced with a plethora of training options but often lack the time and funding.

    Google Cloud has reinvigorated technical training to make it more informative and applicable to public sector customers and partners. We aim to maximize your training experience so you can get targeted training when you need it. The Google Cloud Public Sector Technical Learning Series addresses customer feedback and provides fun and practical training. Sessions are currently running every two weeks.

    “Short and sweet” technical topics geared to subjects you care about
    Generic training doesn’t always resonate with public sector technologists. Our new curriculum targets specific public sector use cases, is delivered by customer engineers, and can be completed in less than two hours. This means participants can quickly apply the learnings directly to real-life challenges.

    Easy to find, easy to enroll
    Training opportunities should always be at your fingertips. Our automated training platform will ensure that you only need to enroll once. The system will automatically notify you of upcoming sessions so you can plan in advance and at your convenience. Sessions will be offered on a recurring basis to meet the needs of your organization.

    Fun and engaging
    Typical training sessions often involve a sea of glazed eyes, participants unresponsive to basic prompts, and people falling asleep at their desks; we have all been there. But it doesn’t have to be this way. Our goal is to infuse Google culture into our training through interactive exchanges and tangible rewards to keep participants inspired and engaged. Traditional technology training doesn’t always help you navigate the nuts and bolts of how to effectively introduce a product into an organization. But we know that technology doesn’t operate in isolation; it supports and becomes part of a living organism, managed by humans and confined by other components of an organization’s structure (e.g. existing systems or decentralized business units).

    Part of a larger community of like-minded engineers
    Learning with - and from - a community of peers is one way to overcome the challenges and complexities of applying new technology within a complex organization. We created the Public Sector Connect community for this very reason. It is one example of how we surface best practices for public sector innovators. During weekly “Coffee Hours” and working sessions, our community members share their journeys and lessons learned with each other. We know that innovation evolves through iteration and diverse perspectives, and Public Sector Connect is committed to helping surface critical challenges and solutions, and connecting those who are solving similar problems. Join the community today.

  • 2022 Resolution: Learn Google Cloud, free of charge
    by (Training & Certifications) on January 12, 2022 at 5:00 pm

    Start your 2022 New Year’s resolutions by learning, at no cost, how to use Google Cloud with the following training opportunities:

    30-day access to Google Cloud Skills Boost
    Register by January 31, 2022 and claim 30 days of free access to Google Cloud Skills Boost to complete the Getting Started with Google Cloud learning path. Google Cloud Skills Boost is the definitive destination for skills development, where you can personalize learning paths, track progress, and validate your newly earned expertise with skill badges. The Getting Started with Google Cloud learning path will give you the opportunity to earn three skill badges after you complete hands-on labs and courses designed for aspiring cloud engineers and architects. It covers the fundamentals of Google Cloud, including core infrastructure, big data and ML, writing gcloud commands, using Cloud Shell, deploying virtual machines, and running containerized applications on GKE.

    Cloud OnBoard: half-day training on getting started with Google Cloud fundamentals
    Attend the Getting Started Cloud OnBoard on January 20 for a comprehensive Google Cloud orientation. Google Cloud experts will show you how to run your compute workloads, choose among the available storage options, secure your data, and use Google Cloud managed services.

    Cloud Study Jam: expert-guided hands-on lab
    Google Cloud experts will walk you through a hands-on lab from Google Cloud Skills Boost’s Getting Started with Google Cloud learning path when you join our Cloud Study Jam on January 27. Google Cloud experts will also answer questions live via chat during this event.

    Related Article: Build your data analytics skills with the latest no-cost BigQuery trainings.

  • Google Cloud doubles-down on ecosystem in 2022 to meet customer demand
    by (Training & Certifications) on January 11, 2022 at 3:00 pm

    Google Cloud has been a partner-focused business from day one. As we reflect on 2021 and look forward to what’s ahead, I want to say “thank you” to our ecosystem for all of the amazing innovations and services you provided our mutual customers over the last year. In 2021, we faced unprecedented demand from businesses as they turned to the cloud to digitally transform their organizations. This surge in cloud deployments meant we increasingly turned to our ecosystem to help customers create customized implementations with our systems integrators (SIs), build packaged solutions with our independent software vendors (ISVs), or coach employees on how to best use new cloud technologies with our consulting and training firms.

    To continue meeting growing customer demand in 2022 and beyond, I am pleased to share that we are bringing together our ecosystem and channel sales teams into a single partner organization, providing a more streamlined go-to-market approach for our partners and customers. In support of this change, we plan to more than double our spend in support of our partner ecosystem over the next few years, including rolling out increased co-innovation resources for partners, more incentives and co-marketing funds, and a larger commitment to training and enablement, all with a goal of continuing our joint momentum in the market.

    Providing leads and new go-to-market programs for consulting partners
    The need for highly skilled partners to accelerate digital transformation for customers has never been greater, and our ecosystem of services partners continues to gain tremendous opportunities to deliver high-value implementation and professional services, industry solutions, and digital transformation expertise. In 2022, we are investing in our SIs by:
    - Moving to a partner-led, partner-delivered approach for professional services needed by our customers, particularly through expanded work with partners. This will include new programs for lead generation and lead sharing with our SI partners.
    - Increasing our investment with SIs in deploying go-to-market programs for industry-specific SI solutions, as well as creating more pre-integrated industry ISV and Google Cloud AI solutions together with our SI partners.
    - Accelerating critical training, specialization, and certification programs in support of our goal of training 40 million new people on Google Cloud. This includes new programs for experienced practitioners, and a hybrid learning modality that combines online and in-person learning supported by Google mentors.

    Accelerating growth for ISV partners with more resources
    In 2021, our ISV partners helped build unique integrations with Google Cloud capabilities in AI, ML, data, analytics, and security for our mutual customers. In fact, our marketplace third-party transaction value was up more than 500% YoY from 2020 (Q1-Q3). In 2022, we are deepening our commitment to our ISV partners’ success by:
    - Making significant investments in new Google Cloud Marketplace functionality, including adding new technical resources that will help accelerate how ISVs distribute their apps and solutions. Coupled with this, we’re also lowering the Marketplace rate to 3% for eligible solutions, helping drive more adoption with customers.
    - Expanding our regional sales and technical teams dedicated to supporting ISVs, while increasing market development funds (MDF) to drive further sales growth for our ISVs.
    - Dedicating additional technical resources to help ISVs move to more modern SaaS delivery models, as well as to optimize and supercharge their apps for their customers by leveraging Google Cloud technologies.
    - Creating new monetization models for ISVs using Google Distributed Cloud to deliver products across hybrid environments, multiple clouds, and at the network edge. ISVs will be able to build industry-specific 5G and edge solutions leveraging our ecosystem of telecommunication providers and 140+ Google network edge locations.
    - Increasing funds for ISVs to accelerate customer cloud migrations by offsetting infrastructure costs during migration (ISV Cloud Acceleration Program).

    Launching new program incentives to drive a thriving channel
    Since the launch of our Partner Advantage program, we have increased funds for our channel partners tenfold. In 2021, to extend this momentum, we expanded our incentive portfolio for resellers to support their long-term growth and profitability. In 2022, we are increasing our investment in partner programs even further, including:
    - Significantly expanding incentives to reward partners who source and grow customer engagements, and those who deliver exceptional customer experiences and critical implementation services.
    - Evolving to industry-standard compensation plans for our direct sellers, and rewarding our channel partners for implementation (vs. reselling) for larger enterprise customers.
    - Significantly increasing co-marketing funding for our channel partners to accelerate demand generation and time-to-close.
    - Growing our learning resources, including launching more than 10 new Expertises and Specializations, and expanding our certification programs for partners to deliver the highest levels of Google Cloud expertise to customers.
    - Launching a new program for resellers to support customers via offerings on the Google Cloud Marketplace.
    - Sharing a toolkit to bring the best of Google’s diversity, equity, and inclusion (DEI) resources to our ecosystem of partners, including programs to develop inclusive marketing strategies and deploy DEI training within their own organizations.

    As we kick off 2022, it’s clear that the trend of digital transformation will only continue to drive customer demand for the cloud and, more importantly, a need for services, support, and solutions from our partners. We believe that by centralizing our partner groups into a single organization and more than doubling our spend in support of our partner ecosystem over the next few years, we will help accelerate our joint momentum in the market around the world. For more information on these new programs and resources, please reach out to your Partner Account Manager or log in to your Partner Advantage portal at partneradvantage.goog.

  • Are you a multicloud engineer yet? The case for building skills on more than one cloud
    by (Training & Certifications) on January 7, 2022 at 5:00 pm

    Over the past few months, I made the choice to move from the AWS ecosystem to Google Cloud — both great clouds! — and I think it’s made me a stronger, more well-rounded technologist. But I’m just one data point in a big trend. Multicloud is an inevitability in medium-to-large organizations at this point, as I and others have been saying for a while now. As IT footprints get more complex, you should expect to see a broader range of cloud provider requirements showing up where you work and interview. Ready or not, multicloud is happening.

    In fact, HashiCorp’s recent State of Cloud Strategy Survey found 76% of employers are already using multiple clouds in some fashion, with more than 50% flagging lack of skills among their employees as a top challenge to survival in the cloud. That spells opportunity for you as an engineer. But with limited time and bandwidth, where do you place your bets to ensure that you’re staying competitive in this ever-cloudier world? You could pick one cloud to get good at and stick with it; that’s a perfectly valid career bet. (And if you do bet your career on one cloud, you should totally pick Google Cloud! I have reasons!) But in this post I’m arguing that expanding your scope of professional fluency to at least two of the three major US cloud providers (Google Cloud, AWS, Microsoft Azure) opens up some unique, future-optimized career options.

    What do I mean by ‘multicloud fluency’?
    For the sake of this discussion, I’m defining “multicloud fluency” as a level of familiarity with each cloud that would enable you to, say, pass the flagship professional-level certification offered by that cloud provider — for example, Google Cloud’s Professional Cloud Architect certification or AWS’s Certified Solutions Architect Professional. Notably, I am not saying that multicloud fluency implies experience maintaining production workloads on more than one cloud, and I’ll clarify why in a minute.

    How does multicloud fluency make you a better cloud engineer?
    I asked the cloud community on Twitter to give me some examples of how knowledge of multiple clouds has helped their careers, and dozens of engineers responded with a great discussion. Turns out that even if you never incorporate services from multiple clouds in the same project — and many people don’t! — there’s still value in understanding how the other cloud lives.

    Learning the lingua franca of cloud
    I like this framing of the different cloud providers as “Romance languages” — as with human languages in the same family tree, clouds share many of the same conceptual building blocks. Adults learn primarily by analogy to things we’ve already encountered. Just as learning one programming language makes it easier to learn more, learning one cloud reduces your ramp-up time on others. More than just helping you absorb new information faster, understanding the strengths and tradeoffs of different cloud providers can help you make the best choice of services and architectures for new projects. I actually remember struggling with this at times when I worked for a consulting shop that focused exclusively on AWS. A client would ask, “What if we did this on Azure?” and I really didn’t have the context to be sure. But if you have a solid foundational understanding of the landscape across the major providers, you can feel confident — and inspire confidence! — in your technical choices.

    Becoming a unicorn
    To be clear, this level of awareness isn’t common among engineering talent. That’s why people with multicloud chops are often considered “unicorns” in the hiring market. Want to stand out in 2022? Show that you’re conversant in more than just one cloud. At the very least, it expands the market for your skills to include companies that focus on each of the clouds you know. Taking that idea to its extreme, some of the biggest advocates for the value of a multicloud resumé are consultants, which makes sense given that they often work on different clouds depending on the client project of the week. Lynn Langit, an independent consultant and one of the cloud technologists I most respect, estimates that she spends about 40% of her consulting time on Google Cloud, 40% on AWS, and 20% on Azure. Fluency across providers lets her select the engagements that are most interesting to her and allows her to recommend the technology that provides the greatest value. But don’t get me wrong: multicloud skills can also be great for your career progression if you work on an in-house engineering team. As companies’ cloud posture becomes more complex, they need technical leaders and decision-makers who comprehend their full cloud footprint. Want to become a principal engineer or engineering manager at a mid-to-large-sized enterprise or growing startup? Those roles require an organization-wide understanding of your technology landscape, and that’s probably going to include services from more than one cloud.

    How to multicloud-ify your career
    We’ve established that some familiarity with multiple clouds expands your career options. But learning one cloud can seem daunting enough, especially if it’s not part of your current day job. How do you chart a multicloud career path that doesn’t end with you spreading yourself too thin to be effective at anything?

    Get good at the core concepts
    Yes, all the clouds are different. But they share many of the same basic approaches to IAM, virtual networking, high availability, and more. These are portable fundamentals that you can move between clouds as needed. If you’re new to cloud, an associate-level solutions architect certification will help you cover the basics. Make sure to do hands-on labs to help make the concepts real, though — we learn much more by doing than by reading.

    Go deep on your primary cloud
    Fundamentals aside, it’s really important that you have a native level of fluency in one cloud provider. You may have the opportunity to pick up multicloud skills on the job, but to get a cloud engineering role you’re almost certainly going to need to show significant expertise on a specific cloud. Note: If you’re brand new to cloud and not sure which provider to start with, my biased (but informed) recommendation is to give Google Cloud a try. It has a free tier that won’t bill you until you give permission, and the nifty project structure makes it really easy to spin up and tear down different test environments. It’s worth noting that engineering teams specialize, too; everybody has loose ends, but they’ll often try to standardize on one cloud provider as much as they can. If you work on such a team, take advantage of the opportunity to get as much hands-on experience with their preferred cloud as possible.

    Go broad on your secondary cloud
    You may have heard of the concept of T-shaped skills. A well-rounded developer is broadly familiar with a range of relevant technologies (the horizontal part of the “T”), and an expert in a deep, specific niche. You can think of your skills on your primary cloud provider as the deep part of your “T”. (Actually, let’s be real — even a single cloud has too many services for any one person to hold in their heads at an expert level. Your niche is likely to be a subset of your primary cloud’s services: say, security or data.) We could put this a different way: build on your primary cloud, get certified on your secondary. This gives you hirable expertise on your “native” cloud and situational awareness of the rest of the market. As opportunities come up to build on that secondary cloud, you’ll be ready. I should add that several people have emphasized to me that they sense diminishing returns when keeping up with more than one secondary cloud. At some point the cognitive switching gets overwhelming and the additional learning doesn’t add much value. Perhaps the sweet spot looks like this: 1 < 2 > 3.

    Bet on cloud-native services and multicloud tooling
    The whole point of building on the cloud is to take advantage of what the cloud does best — and usually that means leveraging powerful, native managed services like Spanner and Vertex AI. On the other hand, the cloud ecosystem has now matured to the point where fantastic, open-source multicloud management tooling for wrangling those provider-specific services is readily available. (Doing containers on cloud? Probably using Kubernetes! Looking for a DevOps role? The team is probably looking for Terraform expertise no matter what cloud they major on.) By investing learning time in some of these cross-cloud tools, you open even more doors to build interesting things with the team of your choice.

    Multicloud and you
    When I moved into the Google Cloud world after years of being an AWS Hero, I made sure to follow a new set of Google Cloud voices like Stephanie Wong and Richard Seroter. But I didn’t ghost my AWS-using friends, either! I’m a better technologist (and a better community member) when I keep up with both ecosystems. “But I can hardly keep up with the firehose of features and updates coming from Cloud A. How will I be able to add in Cloud B?” Accept that you can’t know everything. Nobody does. Use your broad knowledge of cloud fundamentals as an index, read the docs frequently for services that you use a lot, and keep your awareness of your secondary cloud fresh:
    - Follow a few trusted voices who can help you filter the signal from the noise
    - Attend a virtual event once a quarter or so; it’s never been easier to access live learning
    - Build a weekend side project that puts your skills into practice

    Ultimately, you (not your team or their technology choices!) are responsible for the trajectory of your career. If this post has raised career questions that I can help answer, please feel free to hit me up on Twitter. Let’s continue the conversation.

    Related Article: Five do’s and don’ts of multicloud, according to the experts. We talked with experts about why to do multicloud, and how to do it right.

  • How to become a certified cloud professional
    by (Training & Certifications) on December 15, 2021 at 6:00 pm

    Achieving a certification is seen as a stamp of approval validating one's skills and expertise to perform a given job role. The Google Cloud Certification program brings a framework to help organizations develop talent for the future. These certifications are not just about Google Cloud technologies: just like in the real world, examinees are expected to know the vast array of technologies they may encounter in their day-to-day jobs.

    The question you might be asking yourself is: how do I become a certified cloud professional? First, let us share some tips on gaining hands-on experience with Google Cloud by introducing skill badges. Watch this video to learn more. The more skill badges you achieve, the stronger your readiness becomes.

    The next question you may be asking yourself is: should I go for the associate or the professional level exam? The associate level certification is focused on the fundamental skills of deploying, monitoring, and maintaining projects on Google Cloud. This certification is a good starting point for those new to cloud and can be used as a path to professional level certifications. Watch this video to learn about the Associate Cloud Engineer exam by Google Cloud. Professional certifications span key technical job functions and assess advanced skills in design, implementation, and management. These certifications are recommended for individuals with industry experience and familiarity with Google Cloud products and solutions.

    We’d recommend you start by reviewing the certification exam website and looking for the description of the role you think is most appropriate for you. The exam guide in particular is a helpful resource because it outlines the domains covered by the exam. As an example, check out the exam guide and the introduction video for the Professional Cloud Developer certification.

    Setting a goal of achieving a certification is a personal and professional milestone! As much as we wish all of you interested in Google Cloud certification the best of luck in earning them, we have one final reminder: please study to learn, not just to pass. The learning mindset is what keeps the technology exploration journey interesting. Happy learning, and send your questions our way on LinkedIn to Magda Jary and Priyanka Vergadia.

  • Azure Database for PostgreSQL – Hyperscale (Citus): New toolkit certifications generally available
    by Azure service updates on December 15, 2021 at 5:00 pm

    New Toolkit certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Machine learning, Google Kubernetes Engine, and more: 10 free training offers to take advantage of before 2022
    by (Training & Certifications) on December 13, 2021 at 5:00 pm

    We’re continuing to offer learning opportunities at no charge to help you grow your Google Cloud skills. Here are ten training offers you can take advantage of before the end of this year to keep building your knowledge of machine learning, Google Kubernetes Engine (GKE), and more.

    Register here by January 10, 2022 to receive 30 days of no-cost access to Google Cloud Skills Boost*. As the definitive destination for skills development, Google Cloud Skills Boost has 700 hands-on labs, role-based courses, skill badges, and certification resources. You’ll also be able to personalize learning paths, track progress, and validate your newly earned expertise. For additional learning opportunities, check out our on-demand trainings below.

    Getting started with Google Cloud: To learn about Google Cloud fundamentals, sign up for our introductory Cloud OnBoard. This comprehensive half-day training will take you through the ins and outs of some of Google Cloud's most impactful tools, how to maximize your VM instances, and the best ways to approach your container strategy. If you’ve just decided to add Google Cloud to your existing cloud skills from other providers like AWS or Azure, start your Google Cloud journey with this training. It will show you how to adapt your knowledge from other cloud providers for a seamless transition.

    Machine learning and data analytics: Discover how to use Vertex AI, Google Cloud’s new unified machine learning (ML) platform, through two training opportunities. Sign up for the “Data Science on Google Cloud” training to learn how to analyze datasets, experiment with different modeling techniques, deploy trained models into production, and manage ML operations through the model lifecycle. Register here for an end-to-end demo on how to train and serve a custom TensorFlow model on Vertex AI. To find out how to use the BigQuery data warehouse and Looker for data analytics, sign up here. You’ll be taught how to model, analyze, and visualize your data in less than 30 minutes.

    Kubernetes and serverless: New to Kubernetes or in need of a refresher? Take our "Getting Started with Kubernetes" training. You’ll have an opportunity to hear from Google Cloud experts like Kelsey Hightower, Bobby Allen, Kaslin Fields, and Maria Cruz, as well as access hands-on tutorials. Get hands-on experience with GKE by registering here; you’ll be taught how to manage workloads and clusters at scale so that you can optimize time and cost. Register here to learn, through demos and talks from Google Cloud executives and experts, how to use Autopilot, GKE’s new mode of operation, and to see what’s in store for GKE in the future. Sign up for the "Power of Serverless" training to find out how to run fast, error-free apps with Serverless App Acceleration. You’ll discover how to run internal apps, do real-time enterprise app data processing, and more on serverless.

    *To unlock your free 30-day access to Google Cloud Skills Boost, you have to first complete a lab. If you’re new to Google Cloud Skills Boost, you will need to create an account and then complete a lab to obtain your free access.

    Related Article: Build your data analytics skills with the latest no-cost BigQuery trainings — To help you make the most of BigQuery, we’re offering no-cost, on-demand training opportunities. Read Article

  • Join Cloud Learn to build your Google Cloud skills at no cost, regardless of experience level
    by (Training & Certifications) on December 1, 2021 at 5:00 pm

    We recently announced a new goal of equipping more than 40 million people with Google Cloud skills. To help achieve this goal, we’re hosting Cloud Learn from Dec. 8-9 (for those in Europe, the Middle East, or Africa, the event will be from Dec. 9-10, and for those in Japan, you can access the event here), a no-cost digital training event for developers, IT professionals, and data practitioners at all career levels. The interactive event will have live technical demos, Q&As, career development workshops, and more, covering everything from Google Cloud fundamentals to certification prep. Here’s a more in-depth look at what to expect from Cloud Learn.

    Hear from Google Cloud executives and customers: Thomas Kurian, Google Cloud’s CEO, and I will kick off the first day by discussing how you can uplevel your career. The second day will begin with technical leaders from Twitter, Lloyds Banking Group, and Ingka Group Digital speaking with John Jester, our vice president of customer experience, about the impact of Google Cloud training and certifications they’ve seen in their organizations. Afterwards, you can choose from role-based tracks and join the training sessions most relevant to you.

    Training for developers: Kubernetes expert Kaslin Fields will be guiding you through the following trainings during the first day: Introduction to Building with Kubernetes; Create and Configure Google Kubernetes Engine (GKE) Clusters; Deploy and Scale in Kubernetes; and Securing GKE for Your Google Cloud Platform Access. Google customer engineers Murriel Perez McCabe and Jay Smith will discuss how to prepare for the Google Cloud Professional Cloud Developer and Professional Cloud DevOps Engineer certifications on the second day. Jay will also walk you through a live demo of how to build a serverless app that creates PDF files with Cloud Run. Carter Morgan, a Google Cloud developer advocate, will end the second day with a session on actionable strategies for managing imposter syndrome in tech.

    Learning opportunities for IT professionals: IT professionals will have the opportunity on day one to learn from Jasen Baker, a technical trainer, how to get started with Google Cloud. Jasen will walk you through how to run compute, store and secure your data, and deploy and monitor applications. On the second day, you can hear from Google Cloud Certified Fellow Konrad Clapa and Cori Peele, a Google Cloud customer engineer, about how to prepare for Google Cloud’s Associate Cloud Engineer and Professional Cloud Architect certifications. Google Cloud experts will also take you through a live demo of how to create virtual machines that run different operating systems using the Google Cloud Console and the gcloud command line. Day two will conclude with a discussion from leadership consultant Selena Rezvani on how to negotiate for yourself at work and speak up for what you want and need.

    Training sessions for data practitioners: Lak Lakshmanan, Google Cloud’s analytics and AI solutions director, and product manager Leigha Jarett will show you how to use BigQuery, Cloud SQL, and Spark to dive into recommendation and prediction systems on the first day. They’ll also teach you how to use real-time dashboards and derive insights using machine learning. Author Dan Sullivan and Google Cloud learning portfolio manager Doug Kelly will begin the second day with a discussion on how to earn Google Cloud’s Professional Data Engineer and Professional Machine Learning Engineer certifications. You’ll also learn, through a live demo on day two, how Google Cloud Video Intelligence makes videos searchable and discoverable by extracting metadata with an easy-to-use REST API. Cross-cultural business speaker Jessica Chen will end the last day with actionable communication tips and techniques to lead in a virtual and hybrid world.

    Register here to save your virtual seat at Cloud Learn.

    Related Article: Training more than 40 million new people on Google Cloud skills — To help more than 40 million people build cloud skills, Google Cloud is offering limited-time no-cost access to all training content. Read Article

  • A learning journey for members transitioning out of the military
    by (Training & Certifications) on November 11, 2021 at 5:00 pm

    Each year, about 200,000 U.S. veterans transition out of military service. However, despite being well-equipped to work in the tech sector, many of these veterans are unable to identify a clear career path. In fact, a 2019 survey found that many veterans feel unprepared for the job market after their service and are unaware of how GI benefits can be used for learning and training. That’s where Google Cloud skills training comes in.

    This August, 50 service members began a 12-week, Google Cloud-sponsored Certification Learning Journey toward achieving the Google Cloud Associate Cloud Engineer certification. Participants had access to online, on-demand learning assets, bi-weekly technical review sessions taught by Google engineers, and mentoring through Google Groups. Upon completion of the online training, participants will now attempt to pass the Google Cloud Associate Cloud Engineer certification exam. A passing grade will grant these military members a new cloud certification, which is a great way to demonstrate cloud skills to the larger IT market. Getting certified can open up internships and job opportunities and help career progression for our veterans.

    Why get Google Cloud certified? Cloud computing is one of the fastest-growing areas in IT. As cloud adoption grows rapidly, so do the ways that cloud technologies can solve key business problems in the real world. Cloud certifications are a great way to demonstrate technical skills to the broader market beyond the military. Cloud skills are also in demand: more than 90% of IT leaders say they're looking to grow their cloud environments in the next several years, yet more than 80% of those same leaders identified a lack of skills and knowledge within their employees as a barrier to this growth. Unfortunately, according to a 2020 survey report, the IT talent shortage continues to be a leading corporate concern, with 86% of respondents believing it will continue to slow down cloud projects. A shrinking pool of qualified candidates poses a top business risk for global executives as they struggle to find and retain talent to meet their strategic objectives.

    Want to learn more? As we wrap up this first Certification Learning Journey for Service Members, plans are underway to expand to new cohorts in the coming months. The classes are completely virtual, and all training is on-demand so that participants can access their coursework anytime, anywhere via the web or a mobile device. To determine whether you (or someone you know) would be a great fit for this certification journey:

    - Watch the Certification Prep: Associate Cloud Engineer webinar
    - Complete the Skill IQ Assessment via Pluralsight
    - Review the Associate Cloud Engineer Certification Exam Guide
    - Take the Associate Cloud Engineer sample questions

    Who’s eligible? U.S. military members transitioning out of service and veterans with CS/CIS-related education or relevant work experience (IT, cybersecurity, networking, security, information systems) are eligible for the program. Although the ability to code is not required, familiarity with the following IT concepts is highly recommended: virtual machines, operating systems, storage and file systems, networking, databases, programming, and working with Linux at the command line.

    At Google Cloud, we are committed to creating training and certification opportunities for transitioning service members, veterans, and military spouses to help them thrive in a cloud-first world. Stay tuned for updates early next year!

    Related Article: Cloud Career Jump Start: our virtual certification readiness program — Cloud Career Jump Start is Google Cloud’s first virtual Certification Journey Learning program for underrepresented communities. Read Article

  • Video walkthrough: Set up a multiplayer game server with Google Cloud
    by (Training & Certifications) on October 29, 2021 at 4:00 pm

    Imagine that you’re playing a video game with a friend, hosting the game on your own machine. You’re both having a great time—until you need to shut down your computer and the game world ceases to exist for everyone until you’re back online. With a multiplayer server in the cloud, you can solve this problem and create persistent, shared access that doesn’t depend on your online status. To show you how to do this, we’ve created a video that takes you through the steps to set up a private, virtual multiplayer game server with Google Cloud, with no prior experience required.

    In this video, we walk through the real-world situation described above, in which one of our team members wants to create a persistent shared gaming experience with a friend. One of our training experts shows his colleague step-by-step how to use Compute Engine to host a multiplayer instance of Valheim from Iron Gate Studio and Coffee Stain Studios.

    This tutorial doesn’t assume that you’ve done this before. Along with our in-house novice, you’ll be guided through the process to create a virtual machine on Google Cloud Platform and configure it to connect to remote computers. Then, using Valheim as an example, we’ll show you how to set up a dedicated game server. The video also takes you through decisions about user settings and permissions, such as whether you want to allow multiple parties to manage the cloud host, and security considerations to keep in mind. We’ll talk about resource requirements and possibilities for scaling up, and break down some of the factors that will influence the cost, including a detailed explanation of the specifications we used in our walkthrough scenario.

    Ready to play? Check out the Create Valheim Game Server with Google Cloud walkthrough video.

    Related Article: New to Google Cloud? Here are a few free trainings to help you get started — Free resources like hands-on events, on-demand training, and skills challenges can help you develop the fundamentals of Google Cloud so y...Read Article
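The broad strokes of that walkthrough can be sketched as a couple of gcloud commands. This is an illustrative, hedged outline rather than the video's exact steps: the instance name, zone, machine type, image, and Valheim's default UDP ports (2456-2458) are assumptions you should adapt to your own project.

```shell
# Create a VM to host the game server (name, zone, and size are illustrative).
gcloud compute instances create valheim-server \
    --zone=us-central1-a \
    --machine-type=e2-standard-2 \
    --image-family=debian-11 \
    --image-project=debian-cloud

# Open Valheim's default UDP ports so friends can connect from outside.
gcloud compute firewall-rules create allow-valheim \
    --allow=udp:2456-2458

# SSH in to install and configure the dedicated server software.
gcloud compute ssh valheim-server --zone=us-central1-a
```

The video covers the remaining decisions this sketch glosses over, such as who is allowed to manage the host and how large a machine your player count requires.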

  • Azure VMware Solution achieves FedRAMP High Authorization
    by Azure service updates on September 15, 2021 at 11:53 pm

    With this certification, U.S. government and public sector customers can now use Azure VMware Solution as a compliant FedRAMP cloud computing environment, ensuring it meets the demanding standards for security and information protection.

  • Azure expands HITRUST certification across 51 Azure regions
    by Azure service updates on August 23, 2021 at 9:38 pm

    Azure expands offering and region coverage to Azure customers with its 2021 HITRUST validated assessment.

  • Azure Database for PostgreSQL - Hyperscale (Citus) now compliant with additional certifications
    by Azure service updates on June 9, 2021 at 4:00 pm

    New certifications are now available for Hyperscale (Citus) on Azure Database for PostgreSQL, a managed service running the open-source Postgres database on Azure.

  • Azure expands PCI DSS certification
    by Azure service updates on March 15, 2021 at 5:02 pm

    You can now leverage Azure’s Payment Card Industry Data Security Standard (PCI DSS) certification across all live Azure regions.

  • 172 Azure offerings achieve HITRUST certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure expands its depth of offerings to Azure customers with its latest independent HITRUST assessment.

  • Azure achieves its first PCI 3DS certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure’s PCI 3DS Attestation of Compliance, PCI 3DS Shared Responsibility Matrix, and PCI 3DS whitepaper are now available.

  • Azure Databricks Achieves FedRAMP High Authorization on Microsoft Azure Government
    by Azure service updates on November 25, 2020 at 5:00 pm

    With this certification, customers can now use Azure Databricks to process the U.S. government’s most sensitive, unclassified data in cloud computing environments, including data that involves the protection of life and financial assets.

  • New SAP HANA Certified Memory-Optimized Virtual Machines now available
    by Azure service updates on November 12, 2020 at 5:01 pm

    We are expanding our SAP HANA certifications, enabling you to run production SAP HANA workloads on the Edsv4 virtual machines sizes.

  • Azure achieves Service Organization Controls compliance for 14 additional services
    by Azure service updates on November 11, 2020 at 5:10 pm

    Azure gives you some of the industry’s broadest certifications for the critical SOC 1, 2, and 3 compliance offering, which is widely used around the world.

  • Announcing the unified Azure Certified Device program
    by Azure service updates on September 22, 2020 at 4:05 pm

    A unified and enhanced Azure Certified Device program was announced at Microsoft Ignite, expanding on previous Microsoft certification offerings that validate IoT devices meet specific capabilities and are built to run on Azure. This program offers a low-cost opportunity for device builders to increase visibility of their products while making it easy for solution builders and end customers to find the right device for their IoT solutions.

  • IoT Security updates for September 2020
    by Azure service updates on September 22, 2020 at 4:05 pm

    New Azure IoT Security product updates include improvements around monitoring, edge nesting and the availability of Azure Defender for IoT.

  • Azure Certified for Plug and Play is now available
    by Azure service updates on August 27, 2020 at 12:21 am

    IoT Plug and Play device certification is now available from Microsoft as part of the Azure Certified device program.

  • Azure France has achieved GSMA accreditation
    by Azure service updates on August 6, 2020 at 5:45 pm

    Azure has added an important compliance offering for telecommunications in France, the Global System for Mobile Communications Association (GSMA) Security Accreditation Scheme for Subscription Management (SAS-SM).

  • Azure Red Hat OpenShift is now ISO 27001 certified
    by Azure service updates on July 21, 2020 at 4:00 pm

    To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now ISO 27001 certified.

  • Azure Lighthouse updates—April 2020
    by Azure service updates on June 1, 2020 at 4:00 pm

    Several critical updates have been made to Azure Lighthouse, including FEDRAMP certification, delegation opt-out, and Azure Backup reports.

  • Azure NetApp Files—New certifications, increased SLA, expanded regional availability
    by Azure service updates on May 19, 2020 at 4:00 pm

    The SLA guarantee for Azure NetApp Files has increased to 99.99 percent. In addition, NetApp Files is now HIPAA and FedRAMP certified, and regional availability has been increased.

  • Kubernetes on Azure Stack Hub in GA
    by Azure service updates on February 25, 2020 at 5:00 pm

    We now support Kubernetes cluster deployment on Azure Stack Hub, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS Engine on Azure Stack Hub.

  • Azure Firewall Spring 2020 updates
    by Azure service updates on February 19, 2020 at 5:00 pm

    Excerpt: Azure Firewall is now ICSA Labs certified. In addition, several key Azure Firewall capabilities have recently been released into general availability (GA) and preview.

  • Azure IoT C# and Java SDKs release new long-term support (LTS) branches
    by Azure service updates on February 14, 2020 at 5:00 pm

    The Azure IoT Java and C# SDKs have each now released new long-term support (LTS) branches.

  • HPC Cache receives ISO certifications, adds stopping feature, and new region
    by Azure service updates on February 11, 2020 at 5:00 pm

    Azure HPC Cache has received new ISO 27001, 27018, and 27701 certifications, adds new features to manage storage caching in performance-driven workloads, and expands service access to Korea Central.

  • Azure Blueprint for FedRAMP High now available in new regions
    by Azure service updates on February 3, 2020 at 5:00 pm

    The Azure Blueprint for FedRAMP High is now available in both Azure Government and Azure Public regions. This is in addition to the Azure Blueprint for FedRAMP Moderate released in November, 2019.

  • Azure Databricks Is now HITRUST certified
    by Azure service updates on January 22, 2020 at 5:01 pm

    Azure Databricks is now certified for the HITRUST Common Security Framework (HITRUST CSF®), the most widely coveted security accreditation for the healthcare industry. With this certification, health care customers can now use volumes of clinical data to drive innovation using Azure Databricks, without any worry about security and risk.

  • Microsoft plans to establish new cloud datacenter region in Qatar
    by Azure service updates on December 11, 2019 at 8:00 pm

    Microsoft recently announced plans to establish a new cloud datacenter region in Qatar to deliver its intelligent, trusted cloud services and expand the Microsoft global cloud infrastructure to 55 cloud regions in 20 countries.

  • Azure NetApp Files HANA certification and new region availability
    by Azure service updates on November 4, 2019 at 5:00 pm

    Azure NetApp Files, one of the fastest growing bare-metal Azure services, has achieved SAP HANA certification for both scale-up and scale-out deployments.

  • Azure achieves TruSight certification
    by Azure service updates on September 23, 2019 at 5:00 pm

    Azure achieved certification for TruSight, an industry-backed, best-practices third-party assessment utility.

  • IoT Plug and Play Preview is now available
    by Azure service updates on August 21, 2019 at 4:00 pm

    With IoT Plug and Play Preview, solution developers can start using Azure IoT Central to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play.

  • View linked GitHub activity from the Kanban board
    by Azure service updates on June 21, 2019 at 5:00 pm

    We continue to enhance the Azure Boards integration with GitHub. Now you can see information about your linked GitHub commits, pull requests, and issues on your Kanban board. This information will give you a quick sense of where an item stands and allow you to navigate directly to the GitHub commit, pull request, or issue for more details.

  • Video Indexer is now ISO, SOC, HiTRUST, FedRAMP, HIPAA, PCI certified
    by Azure service updates on April 2, 2019 at 9:08 pm

    Video Indexer has received new certifications to fit with enterprise certification requirements.

  • Azure South Africa regions are now available
    by Azure service updates on March 7, 2019 at 6:00 pm

    Azure services are available from new cloud regions in Johannesburg (South Africa North) and Cape Town (South Africa West), South Africa. The launch of these regions is a milestone for Microsoft.

  • Azure DevOps Roadmap update for 2019 Q1
    by Azure service updates on February 14, 2019 at 8:22 pm

    We updated the Features Timeline to provide visibility on our key investments for this quarter.

  • Azure Stack—FedRAMP High documentation now available
    by Azure service updates on November 1, 2018 at 7:00 pm

    FedRAMP High documentation is now available for Azure Stack customers.

  • Kubernetes on Azure Stack in preview
    by Azure service updates on November 1, 2018 at 7:00 pm

    We now support Kubernetes cluster deployment on Azure Stack, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS-Engine on Azure Stack.

  • Azure Stack Infrastructure—compliance certification guidance
    by Azure service updates on November 1, 2018 at 7:00 pm

    We have created documentation to describe how Azure Stack infrastructure satisfies regulatory technical controls for PCI-DSS and CSA-CCM.

  • Logic Apps is ISO, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant
    by Azure service updates on July 18, 2017 at 5:05 pm

    The Logic Apps feature of Azure App Service is now ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant.

  • Apache Kafka on HDInsight with Azure Managed Disks
    by Azure service updates on June 30, 2017 at 3:44 pm

    We're pleased to announce Apache Kafka with Azure Managed Disks Preview on the HDInsight platform. Users will now be able to deploy Kafka clusters with managed disks straight from the Azure portal, with no signup necessary.

  • Azure Backup for Windows Server system state
    by Azure service updates on June 14, 2017 at 10:54 pm

    Customers will now be able to perform comprehensive, secure, and reliable Windows Server recoveries. We will be extending the data backup capabilities of the Azure Backup agent so that it integrates with the Windows Server Backup feature, available natively on every Windows Server.

  • Azure Data Catalog is ISO, CSA STAR, HIPAA, EU Model Clauses compliant
    by Azure service updates on March 7, 2017 at 12:00 am

    Azure Data Catalog is ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, and EU Model Clauses compliant.

  • Azure compliance: Azure Cosmos DB certified for ISO 27001, HIPAA, and the EU Model Clauses
    by Azure service updates on March 25, 2016 at 10:00 am

    The Azure Cosmos DB team is excited to announce that Azure Cosmos DB is ISO 27001, HIPAA, and EU Model Clauses compliant.

  • Compliance updates for Azure public cloud
    by Azure service updates on March 16, 2016 at 9:24 pm

    We’re adding more certification coverage to our Azure portfolio, so regulated customers can take advantage of new services.

  • Protect and recover your production workloads in Azure
    by Azure service updates on October 2, 2014 at 5:00 pm

    With Azure Site Recovery, you can protect and recover your production workloads while saving on capital and operational expenditures.

  • ISO Certification expanded to include more Azure services
    by Azure service updates on January 17, 2014 at 1:00 am

    Azure ISO Certification expanded to include SQL Database, Active Directory, Traffic Manager, Web Sites, BizTalk Services, Media Services, Mobile Services, Service Bus, Multi-Factor Authentication, and HDInsight.

Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Google Cloud Associate Cloud Engineer — $145,769/year
Azure/Microsoft Cloud Solution Architect — $141,748/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year

Facebook, Instagram, Apple and Google Apps Search Ads Secrets – Make Money From Your Products

Pay Per Click - google Facebook Instagram Twitter

A bit about search ads first.

There are billions of apps and products out there, and it is becoming harder and harder to stand out. You don’t want to spend countless hours developing your dream app or product only to make close to zero sales per month.

This blog is an aggregate of the best secrets of Apple and Google Apps search ads for successful App developers.

This blog also includes tips and tricks for successful Google Search Ads, Facebook Search Ads and Instagram Search Ads for any product.

Google Search Ads For Apps Secrets

Apple Search Ads uses a Cost-Per-Tap (CPT) model, meaning that advertisers pay Apple every time someone taps on a Search Ads listing after performing a keyword search. On other traditional mobile ad networks such as Google UAC or Facebook Ads, by contrast, the advertiser usually pays per app install (a Cost-Per-Install, or CPI, model) after a user has seen or interacted with an ad.

Apple offers two types of Search Ads campaigns – Basic and Advanced. Which one should you choose?

I guess it depends on the type of app and the installs you want. Basic is CPI-based, while Advanced is CPT-based. That might make you think Basic is better because you only pay when you get an install, but that’s not the best way of looking at it: Basic has a much higher cost per install (CPI) than the effective cost you end up paying through Advanced’s cost per tap (CPT). So unless your users buy an IAP or a paid app that makes more money than the CPI you paid to acquire them, you might lose money.

Also, Advanced lets you focus on specific keywords, whereas Basic mostly relies on Apple’s own hidden algorithm to show your ads. Focusing on specific keywords is important because you don’t just want users to download the app; you want them to open and use it too. Since we don’t know how Apple will show your ad with Basic, you have no clue whether your app is being targeted accurately.

So you may or may not end up paying more per install with Basic versus Advanced, as Advanced can get you many more impressions of the ad (and more downloads, if your metadata is on point).
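The Basic-versus-Advanced trade-off above is easy to sanity-check with a little arithmetic. Here is a minimal sketch in which every number (cost per tap, tap-to-install rate, CPI, revenue per user) is hypothetical, not an actual Apple rate; it converts an Advanced campaign's cost per tap into an effective cost per install and compares both campaign types against expected revenue:

```python
# All numbers below are hypothetical illustrations -- plug in your own campaign data.

def effective_cpi_from_cpt(cost_per_tap: float, tap_to_install_rate: float) -> float:
    """Effective cost per install for a CPT campaign: you pay for every tap,
    but only a fraction of taps turn into installs."""
    return cost_per_tap / tap_to_install_rate

# Advanced campaign: pay per tap
advanced_cpt = 0.80          # dollars per tap (hypothetical)
tap_to_install = 0.50        # 50% of taps lead to a download (hypothetical)
advanced_effective_cpi = effective_cpi_from_cpt(advanced_cpt, tap_to_install)

# Basic campaign: pay per install directly
basic_cpi = 3.00             # dollars per install (hypothetical)

# Average revenue you expect per acquired user (IAP, paid app price, ads...)
revenue_per_user = 2.00

print(f"Advanced effective CPI: ${advanced_effective_cpi:.2f}")   # $1.60
print(f"Basic CPI:              ${basic_cpi:.2f}")                # $3.00
print("Advanced profitable:", revenue_per_user > advanced_effective_cpi)
print("Basic profitable:   ", revenue_per_user > basic_cpi)
```

With these made-up figures, Advanced breaks even while Basic loses money on every install, which is exactly the dynamic described above: a high Basic CPI only pays off when per-user revenue is high.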

Apple Search Ads is an intent-based channel

This is important in the post-IDFA era because Apple targets ads based on the keywords and context of a particular search. By its very nature, ASA does not rely on IDs to target individuals, which gives it an attribution advantage over other channels that depend on IDs for individual behavioural targeting.



With Apple Search Ads, you can tap into user intent signals that match your offerings and attract higher-quality users. That’s why Apple claims such impressive performance numbers, such as 50 percent average conversion rates and 65 percent download rates.

A bit about search ads first.

I personally would never run Basic for a free app (even if it has an IAP), as the CPI is very high; unless I had a high conversion rate on the IAP, I would be losing money. For a paid app it might work well, though.

I have mostly tested Advanced. I did run Basic, but the CPI was way too high, so I stopped it. For Advanced, I would advise:

Start small, but not too small. Don't set a daily budget under $5 or over $20. Start with, let's say, $10, keep it there for 1–2 weeks, and see how it works. Adjust the keywords in the search ad, and adjust your screenshots, icon, and other metadata to make the listing more attractive if you notice people are clicking on the ad but not tapping the download button.

Before running search ads, make sure you have your freemium monetization and DAU (active users) absolutely nailed down. If you only have banner ads in the app and no way for the user to buy an in-app purchase, don't bother with search ads yet if your cost per acquisition is too high. For example, if your CPA is $2 in an extremely competitive app category, you spend $2 to acquire each new user – or waste $2 on a user who taps the ad but doesn't hit download – and you may never make that money back from the ads in your app. Banner ads aren't even worth it, IMO, unless you have thousands of active users; they hardly make a few pennies per 1,000 impressions. Interstitial ads make more money, and rewarded ads are even better. But you still need to look at the numbers to see whether you are at least breaking even.

Apple and Google give you $100 in free credit to try it out, so use that to test, look at the numbers, make changes, etc.

Set the search ad settings correctly. There is an audience-targeting option – whom would you like to see your ad – with choices like "People who already have your app" and "People who don't have your app". Of course you don't want the first option, because those users already have your app; you want to acquire new users. You can also choose the age of the audience. For example, if your app is meant for people who own houses, you don't want to target people under 25 or even 30 years old, because most of them won't own houses.

If you are getting taps (you spend money per tap) but not conversions (downloads), people are finding something on your App Store page they don't like. That could be bad or missing reviews, bad screenshots, bad metadata, etc. So get honest opinions from non-friends on what they think of your App Store page.

Search ads for paid apps or apps with in-app purchases are different from search ads for free apps. Make sure your paid app or IAP is priced so that you at least break even – and preferably profit – on every customer acquisition. For example, if your cost per acquisition is $5 (which can be pretty high for paid apps, since a lot of people will click an ad and then decide not to download the app, maybe because of the pricing or some other metadata) and you have priced your app at $2.99, you are just burning money. Be intelligent.

Using the names of other apps in the same category as keywords might work for you. But I wouldn't set keywords for trademarked apps, or for popular apps that have nothing to do with your app category. That can get you called out for IP/copyright/trademark violation. It also won't convert well: when people search for a specific app (let's say Facebook) and your calculator app shows up in the ad, nobody is going to tap it, because the user is obviously only looking to download Facebook.

I personally don’t like running ads in developing countries as – Admob pays very little in those countries, people don’t buy IAP much, people don’t buy paid apps much.

Don’t bid for keywords which have high competition OR very high CPT. Companies with deep pockets will kill you.

I am not a fan of the option “Search Match” (Automatically match my ad to relevant searches) which Apple gives you. I always disable that option.

Search ads are good if you can afford it and if you have an app which fits the profile. It may or may not work for every app. Always look at numbers.

I’m guessing search ads are the ads you see in the App Store when you are searching for specific apps?

Yes, search ads are for App Store search. If someone searches for a keyword you have targeted your ad towards, and you win the bidding battle for that keyword's ad space against someone else, your app's ad gets shown.

Is there an average price per click that you pay?

Yes, Apple Search Ads are CPT-based: cost per tap. If someone taps your ad, you pay what it took to win the bid against other advertisers' bids. For example, if you bid on the keyword "car" with a maximum CPT of $0.20, and Bob, another app developer running ads, has set his "car" keyword at a CPT of $0.10, you will pay $0.11, because that's what it took to win. Of course there are more factors – the level of competition for that keyword, higher CPTs bid by others, etc. – which can drive your average CPT higher. That's why you get to set the maximum you are willing to pay per keyword.
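The pricing logic in that example can be sketched as a simplified second-price auction (illustrative only; Apple's real auction also weighs keyword relevance and other factors, and the one-cent increment is an assumption):

```python
def winning_price(my_max_cpt, competitor_bids, increment=0.01):
    """Return (won, price_paid) for a simplified second-price auction.

    The winner pays just enough to beat the next-highest bid,
    never more than their own maximum CPT.
    """
    highest_other = max(competitor_bids, default=0.0)
    if my_max_cpt <= highest_other:
        return (False, 0.0)  # outbid: ad not shown, nothing paid
    return (True, round(min(highest_other + increment, my_max_cpt), 2))

# The example from the text: my max CPT is $0.20, Bob bids $0.10.
won, price = winning_price(0.20, [0.10])
print(won, price)  # True 0.11
```

This is why your average CPT usually ends up below the maximum you set.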

How many people searching for apps will see my game as an ad and click on it per day for $10?

There is no general range for how many might. You can use the maximum CPT to control what you spend per tap, and you can also set an optional CPA (cost per acquisition) to ensure you don't run at a loss. Treat the first two weeks as experimental, though, and test with low budgets.

A very important thing to remember: you pay per tap, NOT per download. So if someone taps your ad, notices your screenshots look like crap, and doesn't download your app, you just lost money. This is why your metadata needs to be perfect – and why you should use the CPA field after two weeks to make sure you don't run at a loss.

Along with that, do you only pay for clicks? Do you pay more if they download your app after the click?

Yes, you pay per click (per tap, to be technically correct). You don't pay more if they download.

I’m assuming you are constantly tracking How many active users you have and how much revenue you are generally getting to be able to ball-park any change in these numbers based off your ads being displayed.

Yes, I always monitor my ad spend and compare it to how many downloads I got (for a paid app) or how many people bought the IAP, and how much revenue I am making per day via AdMob. I do this every morning. Unfortunately, Apple doesn't seem to let me track how many of those ad conversions went on to buy the in-app purchase, so that throws me off a bit.

So, your CPA. Is this your cost for running the ads per download?

Regarding CPA: they let you set an optional CPA goal when running your ad campaign. Determining it is a bit of work. When I am starting out, I don't have any numbers to look at, so I leave the CPA blank or set it to the same price as my IAP or paid app. Basically, I don't want the cost per acquisition to exceed the IAP or paid-app price, because that would mean I am burning money and running at a loss instead of a profit.

However, after running the campaign for 1–2 weeks and looking at the numbers for each day, I can guess a better CPA; if I definitely don't want to exceed a certain number, because it would make me lose money instead of breaking even or profiting, I will set it. You don't want to set the CPA too low – at least initially – because then you won't even get any impressions of your ads.

For example, looking at one of my ad campaigns right now, I have a default CPT of $0.10 (cost per tap: you pay every time someone taps your ad, whether or not they download). They also let you set a CPT per keyword, which overrides the default. NOTE that the CPT is the maximum amount you are willing to pay for a tap. If you are in a battle with someone else who wants the same ad space, you can win it if your CPT is even a cent higher, and you only pay whatever it takes to win, not the maximum you set. So your average CPT will often be less than what you set it at, which is good.

For this campaign, my default CPT is $0.10 and I have a few keywords with a custom CPT of $0.20. Looking at my numbers for the past few weeks, most of my keywords have an average CPT of $0.15, $0.16, or $0.19 and an average CPA of $0.15, $0.33, or $0.29. So, after testing for a couple of weeks, I can cap the CPA at $0.50 so that I never run at a loss.

So if I spend $10 in one day and 5 people download the app, that would be a $2 CPA? Yes.
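That arithmetic, plus the break-even check against your IAP or paid-app price, is just a couple of divisions (a minimal sketch):

```python
def cpa(spend, downloads):
    """Cost per acquisition: total ad spend divided by downloads."""
    return spend / downloads

def breaks_even(spend, downloads, revenue_per_user):
    """True if each acquired user brings in at least what they cost."""
    return revenue_per_user >= cpa(spend, downloads)

print(cpa(10, 5))                # 2.0  ($10 spend, 5 downloads -> $2 CPA)
print(breaks_even(10, 5, 2.99))  # True: a $2.99 price covers a $2 CPA
```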

And I will repeat my previous statement: I always monitor my ad spend and compare it to how many downloads I got (for a paid app) or how many people bought the IAP, and how much revenue I am making per day via AdMob. I compare these and set the CPA based on them. I do this every morning. Unfortunately, Apple doesn't seem to let me track how many of those ad conversions went on to buy the in-app purchase, so that throws me off a bit.

Have you been able to verify your numbers and whether or not you are profiting based off these ads? Why not bump your ad spending even higher?

I have made money from certain types of apps, and lost money by doing stupid stuff, such as:

  • running ad campaigns for a free app with ads but no IAP to remove the ads;
  • running ad campaigns for apps with only poverty-level banner ads and no full-screen/interstitial/rewarded video ads, which at least make some money;
  • running ad campaigns on generic, very high-competition keywords and getting out-bid by much bigger players with much deeper pockets;
  • running ads where my CPA was higher than the money I was making from the IAP or paid app;
  • running ad campaigns on a keyword for an app that wasn't even in my category, so users tapped my ad (costing me money) and then didn't download;
  • running a campaign on a keyword which was trademarked.

Basically, be intelligent, research, start slow and experiment with the $100 credit Apple gives you.

A few people asked me about rewarded ads vs interstitial ads for monetization. This is a bit off topic but I will throw this in.

Rewarded ads have a higher eCPM than regular interstitial ads, meaning you get paid more. How much higher depends on the type of app, number of users, placement of the ads, etc. I use AdMob's rewarded ads mostly to unlock features or a number of XXX item uses in the app. Other companies offer them too. You can read a few points here, for example:

source: reddit

Rewarded Video Ads | ironSource: Rewarded video ads are a great mobile video advertising strategy to increase ad revenue and improve user experience. Learn how to monetize with video rewards.

The high eCPM is good. What's even better is that, compared with regular interstitials, rewarded ads provide a better user experience and attract fewer negative reviews, because the user willingly chooses to watch an ad instead of having their game randomly interrupted. In return, the user gets some kind of in-app reward – more coins, an unlocked feature, etc. It's a win-win for the developer and the user.


So essentially with $2,000 it's possible to have 10,000+ people click on your ad? That seems like a solid conversion rate if at least 1/10th of them download the app.

Depending on the type of app, your CPT can vary; for me it has mostly been about 20 cents. So yes, 10,000 taps from $2,000 is a good estimate. However, these are taps, not downloads. For downloads, you need your metadata to be on point! You also need monetization in place – IAPs, paid apps, etc. – to make sure you actually make money from the users you are spending to acquire.
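The estimate above is simply budget divided by average CPT, with downloads scaled by whatever tap-to-install rate you assume (the 10% rate below is purely illustrative):

```python
def estimate_taps(budget, avg_cpt):
    # Budget divided by average cost per tap gives the taps the budget buys.
    return round(budget / avg_cpt)

def estimate_installs(budget, avg_cpt, tap_to_install_rate):
    # Scale taps by an assumed tap-to-install conversion rate.
    return round(estimate_taps(budget, avg_cpt) * tap_to_install_rate)

print(estimate_taps(2000, 0.20))            # 10000 taps
print(estimate_installs(2000, 0.20, 0.10))  # 1000 installs at a 10% rate
```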

How long did it take for you to start seeing impressions? We have pretty competitive keywords, so I'm using an extremely high CPT, $10+, and I'm still not seeing any impressions. It's been 24 hours.

If you haven’t setup scheduled ads, it should be quick. I had mine within an hour if I remember right. I would suggest trying for less competitive keywords though.

What’s your experience and tips for driving iOS game app downloads via paid ads platforms like Facebook Ads, Apple Search Ads, Youtube ads, etc…?

No experience, but as an iPhone user I often find myself downloading apps while browsing Instagram. So I'd assume you'll be spot on with Instagram/Snapchat/TikTok, or maybe even YouTube Shorts.

App Store search ads keyword match types

Search Ads involve three different types of keyword matches.

They are how you tell Apple whether you want to bid on keywords exactly as you enter them or more broadly. The choice is driven by your campaign goals and will ultimately determine campaign results, so you must first understand the different keyword match types Apple offers.

Broad Match

Broad match is the default keyword match type. By selecting broad match, you are telling Apple that you want to bid on the keywords you select and other keywords that are broadly related to them.

Broad match includes misspellings, plurals, closely related words, synonyms, related searches, related phrases, and translations.

For example, when you type “Friends,” Apple also considers variations of “Friend,” “Amigo,” “Freind,” and more.

Exact match

Exact match helps you narrow your ad bid spread. By choosing exact match, you’re telling Apple that you want to bid exactly as entered for the selected keyword.

Common misspellings and plural forms will also be taken into account.

For example, when you type "friend," Apple will also consider "friends."

Search Match

Search Match is best suited for keyword discovery. By selecting Search Match, you allow Apple to use your app's metadata to automatically match your app to relevant keywords and search terms.

For Search Match to work, your app’s metadata needs to be up to date and optimized. This means that App Store optimizations have been completed and recently updated. In this way, Apple can easily pull information about your app and generate the best and most relevant keywords.

App Store Search campaign types

When creating an account to start keyword bidding, ASA best practice is to split your keywords into four different campaign types: Generic, Branded, Competitor, and Discovery.

Generic Campaigns

Typically set to broad match, generic campaigns use keywords that are relevant to your app. For example, if you have a fitness app, include keywords such as "fitness" or "exercise" in this campaign. The purpose of the generic campaign is to attract high-intent App Store visitors.

Branded campaigns

You will want to use a branded campaign to reach a more specific audience searching for your brand in the App Store, drive reinstalls, and protect your brand. Keywords in this campaign are your brand name and variations of it. By bidding generously on your branded keywords, you ensure that your competitors don't take this valuable space away from you.

Competitor campaigns

Set up exact-match competitor campaigns to target App Store users who are searching for competitors. Keywords for these campaigns include your direct competitors' names and variations of them.

Discovery campaigns

You need to set up a discovery campaign to discover new keywords or find alternative keywords that you are not using in other campaigns.

To maximize the effectiveness of a Discovery campaign, new keywords from Discovery should be added as exact match keywords to the other three campaign types, and all keywords from branded, generic, and competitor campaigns should be added as negative keywords in Discovery.
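The cross-feeding rule above can be sketched as plain bookkeeping over the four campaigns (the app, keywords, and structure below are hypothetical; only the campaign names come from the text):

```python
# Four-campaign structure for a hypothetical fitness app.
campaigns = {
    "generic":    {"match": "broad", "keywords": {"fitness", "exercise"}},
    "branded":    {"match": "exact", "keywords": {"fitlife app"}},
    "competitor": {"match": "exact", "keywords": {"rivalfit"}},
    "discovery":  {"match": "search_match", "keywords": set(), "negatives": set()},
}

def promote_discovery_term(term):
    """A new term found by Discovery becomes an exact-match keyword
    in the other three campaign types."""
    for name in ("generic", "branded", "competitor"):
        campaigns[name]["keywords"].add(term)

def refresh_discovery_negatives():
    """All known keywords become negatives in Discovery,
    so Discovery only surfaces genuinely new terms."""
    campaigns["discovery"]["negatives"] = (
        campaigns["generic"]["keywords"]
        | campaigns["branded"]["keywords"]
        | campaigns["competitor"]["keywords"]
    )

promote_discovery_term("home workout")
refresh_discovery_negatives()
print("home workout" in campaigns["discovery"]["negatives"])  # True
```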

Best practices for using Apple Search Ads

Getting started with Apple Search Ads isn’t a problem. But you need to make sure you adopt some best practices that will ultimately help you make the most of your investment. Here are some App Store advertising best practices you should follow when using Apple Search Ads.

Review app metadata before launching a campaign

Before launching a new campaign, visit App Store Connect and take a closer look at your app metadata. The appearance of your ads is based on your app's metadata, and you won't be able to edit the ad creative separately later. Keep in mind that the same ad is unlikely to be shown to every user: some people may get a simple description of the app, while others will see screenshots and preview videos.

USP-based targeted keywords

This is very important for marketers using ASA Advanced. Do some research and identify keywords that will drive installs. For example, if you have a fitness-tracking app, use terms like "fitness tracker" or "diet plan" as keywords. You must understand your audience's search patterns, because doing so can greatly improve your conversion rate.

You can always expect higher competition with general keywords, but if you can find more specific keywords, they will not only be cheaper to bid on, but will also have a higher conversion rate.

Tip: Use the keyword research in your ASO strategy to understand your options and sync your goals!

Use the 80/20 budget allocation method for App Store promotions

When comparing keywords, split your spend between broad match and exact match: 80% of your spend should go to exact match and the remaining 20% to broad match. Both will be used primarily in discovery campaigns, to identify the keywords that perform better than others.

Exact-match keywords will let you attract and convert interested users. They are easier to convert and more likely to generate revenue; they may cost more, but they also pay off. Ideally, keep the 80/20 budget allocation to get the maximum return. Once you start generating interest, you can also reduce your budget allocation.
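The 80/20 split itself is trivial to compute (a sketch using the rule of thumb above):

```python
def split_budget(total, exact_share=0.80):
    """Split a campaign budget between exact match and broad match
    using the 80/20 rule of thumb."""
    exact = round(total * exact_share, 2)
    return {"exact_match": exact, "broad_match": round(total - exact, 2)}

print(split_budget(1000))  # {'exact_match': 800.0, 'broad_match': 200.0}
```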

How do you leverage ASO and ASA together for your app business on the iOS App Store?

The great thing about Apple Search Ads is that you can use the search match feature to identify new keywords. When Search Match is enabled, your ads are automatically matched to new search terms based on metadata in your App Store listings, information about similar apps of the same type, and other available search data.

The ability to check keyword relevancy is an invaluable part of Apple Search Ads. In just a few hours, you can run a small test campaign to collect data and get a complete picture of which keywords to optimize for in your ASO efforts. By analyzing Tap Through Rate (similar to Click Through Rate on the web), in-store conversion rates, and actual downloads, you can begin to develop a more effective ASO strategy. In addition, you can use attribution tools to explore the LTV of each keyword for campaign analysis.

ASA can help you narrow down your ASO strategy, but it’s not a gold mine; ASO is a long-term strategy, and your goal should be to keep increasing natural downloads. A key learning point is to look at ASA data from a longer-term perspective so you can see the true trends and performance of each keyword.

Apple Search Ads only work if you know how to properly target your keywords. To ensure maximum app visibility and download rates, you need to target specific and general keywords and carefully determine how much you are willing to bid for each keyword. An easy way to find keywords is to use a tool that automatically compiles a list of targeted keywords. You should increase your bids until you reach your cost-per-acquisition target and start winning downloads from popular keywords related to your niche.

Unfortunately, simply outbidding your competitors for high-volume keywords isn’t enough to win the number one spot, because Apple also considers the relevance of your app to the keyword. To ensure you always rank #1, you need to combine winning bids with ASO optimization. Factors that affect your ASO include app name, URL, description, reviews, and ratings.

Source: How to Leverage ASA to Boost Your App Visibility?

So, how should you optimize your Search Ads campaigns for profitability?

1. Cost-Per-Acquisition (CPA) Goal:

The first thing you need to determine is how much you can afford to spend for every Search Ads install – your target CPI (Cost-Per-Install), or Cost-Per-Acquisition (CPA) Goal, as Apple names it. Note the difference in naming: unlike other networks, Apple uses the word "Acquisition" rather than "Install" because it only measures when users hit download, not when the game has actually finished installing (more on that important difference later in this article).

To do this, if you are already running campaigns on other networks, you know your customer LTV (lifetime value) – how much every user will spend, on average, in your game.

Let’s say your game net LTV is $6 for iOS users in the United States.

On Apple Search Ads, you can either set your bids based on a Max CPT (Cost-Per-Tap) you are willing to pay, or choose a CPA Goal, which means Apple will try to display your ads automatically and maximize conversions. We don't recommend the CPA Goal option: while it ensures you don't go above your target CPA, it limits your impressions quite a lot, so you will miss out on opportunities to convert.

So, for Max CPT, we usually apply a 30% ratio of the LTV of the game we’re promoting, because we normally observe an average 30% conversion rate (from taps to installs) on Search Ads.

In that case, we would be using:

Max CPT Bid = $6 x 30% = $1.80

Source: Medium
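As a sketch, the rule above picks a Max CPT so that the implied cost per install (CPT divided by the tap-to-install rate) never exceeds the LTV; note that $6 × 30% works out to $1.80:

```python
def max_cpt_bid(net_ltv, tap_to_install_rate=0.30):
    """Max CPT such that the implied CPI (CPT / conversion rate)
    does not exceed the user's lifetime value."""
    return round(net_ltv * tap_to_install_rate, 2)

def implied_cpi(cpt, tap_to_install_rate=0.30):
    """Effective cost per install given a CPT and a tap-to-install rate."""
    return round(cpt / tap_to_install_rate, 2)

bid = max_cpt_bid(6.00)  # $6 net LTV, 30% average tap-to-install rate
print(bid)               # 1.8
print(implied_cpi(bid))  # 6.0 -> break-even against the LTV
```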

2. Measuring Your ROAS:

Now comes the most important part: What’s the revenue generated from your Search Ads campaigns?

Apple doesn’t track (or share) any detailed activity coming from the Search Ads installs they have provided you. So you will have to use your MMP for that.

Depending on the LTV curve of your game, you’d be looking at your Day 7, 15, 30 etc. ROAS (Return on Ad Spend) on a campaign, ad group or keyword level.

Cohort Reports for Search Ads Campaigns in Adjust

Let’s say you use Day-7 as a goal, you will then be doing this calculation:

Day-7 ROAS = Day-7 MMP Revenue / Search Ads Spend

Then compare that to your Day-7 ROAS goal. If it's above the goal, that's a good sign: keep your campaigns/ad groups active, but monitor the retention of these users over the long run to validate their performance.

If it’s below your goal, let’s say by more than 25%, then you should consider pausing or reducing the spend on these ad groups or campaigns.

That’s the formal way of assigning and reporting revenue coming from Search Ads.

But you also have to take into consideration the installs that are not seen by your MMP – users with Limit Ad Tracking (LAT) enabled – which may have generated revenue too:

ROAS = ((Revenue) * (1 + LAT Rate x 50%)) / Search Ads Spend
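Putting the two formulas together (the 50% factor is the text's heuristic for how LAT installs monetize relative to attributable ones):

```python
def day7_roas(mmp_revenue, spend):
    """Plain Day-7 ROAS from MMP-attributed revenue."""
    return round(mmp_revenue / spend, 4)

def lat_adjusted_roas(mmp_revenue, spend, lat_rate):
    """ROAS grossed up for LAT installs the MMP can't attribute.

    Assumes LAT users monetize at ~50% the rate of attributable
    users, per the heuristic in the text.
    """
    return round((mmp_revenue * (1 + lat_rate * 0.5)) / spend, 4)

print(day7_roas(300, 1000))                # 0.3
print(lat_adjusted_roas(300, 1000, 0.20))  # 0.33
```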

3. Bid Optimization:

Once you have launched your campaigns, give it a few days and then look at the performance of the ad groups you have created.

The first thing to check is whether the keywords you selected convert to installs. If an ad group has a conversion rate below 20–25%, the keywords you have chosen are either too broad or not relevant; consider pausing those ad groups or reducing their bids.

Conversely, for ad groups and keywords with a high conversion rate – say, anything above 30% – you should increase your bid for as long as it stays aligned with your projected ROAS. To know how much is necessary, the Search Ads interface shows Apple's suggested bid range, an indication of how much you should spend to match or beat your competitors. Adjust your bids for every keyword that is below the suggested bid range (as long as it stays within your target CPA goals).
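Those thresholds can be expressed as a simple triage rule (the 20%/30% cutoffs come from the text; the 10% bid step and the CPA guard are assumptions, and the suggested-bid comparison is omitted for brevity):

```python
def bid_action(conversion_rate, current_bid, implied_cpi, target_cpa,
               low=0.20, high=0.30, step=0.10):
    """Suggest a bid change for an ad group based on its conversion rate.

    Below `low`: keywords are too broad or irrelevant -> pause or cut.
    Above `high`: raise the bid, but only while the implied CPI stays
    within the target CPA. Otherwise: hold.
    """
    if conversion_rate < low:
        return "pause_or_reduce"
    if conversion_rate > high and implied_cpi <= target_cpa:
        return round(current_bid * (1 + step), 2)  # new, higher bid
    return "hold"

print(bid_action(0.15, 0.50, 3.00, 5.00))  # pause_or_reduce
print(bid_action(0.35, 0.50, 3.00, 5.00))  # 0.55
print(bid_action(0.25, 0.50, 3.00, 5.00))  # hold
```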

Many factors affect how your Apple Search Ads Basic app promotions perform, including relevancy, your maximum cost-per-install (max CPI) amount compared to your competitors, and user response to your ad. The following best practices can help improve your app promotion results.

  • Review your metadata in App Store Connect to ensure it’s the best representation of your app. Your app title, descriptions, and keywords are all considerations Apple Search Ads uses to assess your app’s relevance for specific search queries, so you should take great care in crafting them. Apple Search Ads Basic also uses the app name, subtitle, description, preview videos, and screenshots approved for your App Store product page to create your ad. Take the time to review your app metadata in App Store Connect before you start using Apple Search Ads Basic.

    Review App Store metadata best practices

    Note that if you change your App Store metadata, it can take up to 24 hours to be reflected in the ad preview within your account, and up to two hours to be reflected in your ad on the App Store.
  • Take a look at your ad creative. It can play a key role in your app promotion performance. Because Apple Search Ads uses the app name, subtitle, description, preview videos, and up to the first three screenshots approved for your App Store product page to create your ad, you may want to consider adjusting these assets if your ad isn’t performing well.
  • Consider your product page, too, as it can also help drive installs. With three app previews, 10 screenshots, and new text fields, product pages offer more opportunities to showcase your work.
  • If your ad isn’t delivering results, try raising your max CPI to increase the likelihood of your ad being shown. You can use the suggested max CPI in your dashboard as a guide to help determine the right amount.
  • Consider running your app promotion in all the countries and regions where your app is available. This will give you more opportunities to reach interested customers. Check your monthly budget to make sure you’re reaching as many customers as possible. You may need to increase your budget, especially if you’re running app promotions in multiple countries and regions.
  • Make sure you’re using the right business model. The right business model for your app balances your goals with the expectations of key audiences, and can also affect the performance of your app in App Store search, including with Apple Search Ads. If you’ve tried the above and still aren’t seeing results, it’s a good idea to review App Store best practices. Learn more here…

Google Search Ads Optimization Techniques

Tips for Scaling a performing Google Search Campaign

Don’t dedicate an entire campaign for a top-performing keywords.

How long did you test simply raising the budget for? Are we talking about a week, a month, multiple months?

Here are some other options for you:

  • Review your Impression Share and top of page rate metrics (Impr. (Top) % and Impr. (Abs. Top) %). Are these trending in the right direction? Are you losing out due to budget on high-performing campaigns? How do your ads perform when you’re placing above organic search results vs below (aka “Other”)?
  • Look at 30-, 60-, and 90-day windows for things like audiences, demographics, and locations. Are there options here that are high-spending but underperforming, and could be excluded? This would allow all of the budget, moving forward, to be spent on better-performing targeting options.
  • Consider testing new ad copy. If you can achieve stronger CTR, this allows you to generate traffic within the existing impression volume.
  • My preferred setup is to group keywords by a shared intent. I have B2B SaaS clients, so the majority of my campaigns are focused on very high-intent searches that contain both context (around my clients' services/solutions/vertical) and intent (keywords matching search terms that include "software", "platform", "solutions", etc.). To scale traffic, I've created a separate campaign that bids on keywords containing just the contextual terms, without the software intent, with lower (manual) bids, using negative keywords to appropriately filter traffic. Consider splitting out your campaigns/ad groups by high-intent vs. low-intent keywords, with budget given to the higher performers.
  • Example: Let’s say your client offers a software for enterprise businesses to manage their cybersecurity. A high-intent keyword would be something like “enterprise cybersecurity software”, whereas a low-intent keyword would be just “enterprise cybersecurity”. We still require the user to use “enterprise cybersecurity” in some context, but that short-tail keyword does not require any specific intent like looking for a third-party tool/platform.

The keyword “enterprise cybersecurity software” will likely be significantly more expensive, and likely lower search volume/impressions, but has a clear, higher intent. The shorter-tail keyword will get you a larger number of impressions, but has a higher likelihood of leading to potentially lower-quality searches and clicks. I’d recommend starting out with trying to capture the high-intent searches first, but when you’re looking to scale, that’s where I’d add in the low-intent keywords, but separated into their own campaign, or at least a separate ad group.
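The high-intent vs. low-intent split described above amounts to checking for intent terms in the keyword (a sketch; the term list is a hypothetical example):

```python
# Hypothetical intent terms for the "enterprise cybersecurity" example above.
INTENT_TERMS = {"software", "platform", "solution", "solutions", "tool"}

def classify_keyword(keyword):
    """Route a keyword to the high-intent or low-intent campaign,
    based on whether it contains any intent term."""
    words = set(keyword.lower().split())
    return "high_intent" if words & INTENT_TERMS else "low_intent"

print(classify_keyword("enterprise cybersecurity software"))  # high_intent
print(classify_keyword("enterprise cybersecurity"))           # low_intent
```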

You may spend a good amount of money on Google Ads and still not get results worth the money. Spending money without the proper knowledge is a waste – and spending money with no results hurts, right? Don't worry: we will tell you how to get value for your money. Below, we discuss tips and tricks to improve your Google Ads conversion rates.

Follow the ways below to improve your Google Ads Conversion Rates:

• Lead With an Attractive Offer or Value

A book’s cover is its first impression. You might have heard “don’t judge a book by its cover”, but that’s exactly what we all do: we take a look at the cover, and if it doesn’t please our eyes, we move on to the next one.

Similarly, the headline is the first impression of your content. If it doesn’t catch your visitor’s eye, they won’t act on it. Hence, use catchy phrases to create an attractive headline that leads your content.

• Refine your CTAs

You need to tell your visitors what to do; otherwise, they won’t act! Yes, that’s true! It’s up to you to direct your website visitors to take an action by generating a need for it.

Studies show that the CTAs most used by top-notch brands are “get”, “buy”, and “shop”. Phrases like these create an urge to take action, and that’s what improves your conversion rate.

• Boost your CTRs

Create content copy that can convince a reader to click through to your product. Write blogs or ad copies that convince your visitors to click, and to do that, understand your audience. Convince them that they are missing something big and that your product can fill that gap.

Don’t try to hurry them up to buy your product. Remember, in this step you just have to convince them to walk through your content and not buy your product. Use soft tone phrases like “get a quote”, “get more details”, etc.

• Align your Ad with an Accurate Landing Page

A common mistake is not checking up on our landing page: is the ad aligned with the right landing page, and is it redirecting there correctly? If you don’t get this right, you can lose a large audience.

For example, your ad is about American diamond earrings, but it points to a bangles landing page. This defeats the purpose of your ad, and you will lose your potential customer right there.

Create a landing page for every segment and align them with the Ad properly.

• Work on your Quality Score

When you create or run a Google Ad, it is given a rating called Quality Score. This score is based on how your ad performs: how much it resonates with the audience, how relevant it is, how effective it is, and what value it delivers.

All these factors decide your Ad’s quality score.

According to studies, the higher the Quality Score, the lower your overall cost per click. Quality Score can be improved through three factors: the landing page experience, the expected CTR, and ad relevance.

• Don’t Miss out on your Social Proofs

People trust reviews. They are afraid of being the first one to use or buy anything. They look for the assurance and experience of others to rely on! Hence, putting out your social proofs is very important. Include the brands or firms you have worked with, put their reviews, and that will make you look authentic and preferred. This will attract and convince the visitors to be your potential loyal customers.

• Step-On your Competitors

Sometimes, not getting enough conversions via Google is a targeting issue. To sort that out, focus on the audience’s intent: what they are looking to buy, what their need is, etc. A clear way of doing this is branded keyword search.

Branded keyword search is when a person looks for something brand specific.

For example: “dresses on Myntra”, “Sports shoes on Reebok”, etc.

When a person searches the keywords above, they will see not only results for those brands but also ads from alternatives. That’s what stepping on your competitors means: run your ads on the branded keywords of competing brands. I know it sounds like something that should be illegal, but it isn’t!

• Enhance your Landing Page

Optimizing ads alone is not enough! You need to work on everything else, and one of the major things is the landing page. Once visitors are directed there, your task is to deliver what they are expecting from you. Your landing page should present all the needed information in an organized manner. Don’t overload it; keep it on point.

Add product videos or video testimonials of the product or service; they have a greater chance of hooking your visitors and can boost your conversion rates.

• Run Mobile-Friendly Ads

With the world going mobile, it’s important that you run mobile-friendly ads. Keep the dimensions of your posters or ad copies such that they fit a mobile screen efficiently, and make them easy for visitors to access. Desktop-only ads will not look good on a mobile screen, and you might lose a large part of your audience, as most people access things through their mobiles.

Hence, move with the trend.

• Use Remarketing

We often forget how important remarketing is! Many times, a customer leaves a product in the cart or wishlist and forgets about it. Remarketing can help you win back such customers. Look for older ads that performed great and run them again: they will bring back your old visitors as well as create new leads.

Google Ads can be a huge asset for converting your visitors into customers; you just need to do things right! If you implement the above tips properly, your Google Ads conversion rate will definitely go up.

If any of you bright people have more tips to add, please feel free to share your opinions and suggestions. It’s always great to learn.

Read More: Conversion Rate Optimization Services

Another way to get good quality score on your ads these days is to write really awkward headlines that include the keywords, and then pinning any discounts. Kinda sucks but it’s been working better for me than traditional CTAs.

Quiz1: Jim Has Created A Google Search Ad With A Bid Of $5. Two Other Advertisers In An Auction Have Bids Of $2.50 And $2. How Much Would Jim Pay For The First Spot In The Auction?

Answer1: $2.51
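The answer follows the generalized second-price rule: the auction winner pays the runner-up’s bid plus the minimum increment of one cent. A simplified sketch (the function name is mine; real Ad Rank also factors in Quality Score, which this ignores):

```python
def second_price_payment(bids, increment=0.01):
    """In a simplified second-price auction, the winner pays the
    runner-up's bid plus the minimum increment."""
    ranked = sorted(bids, reverse=True)
    return round(ranked[1] + increment, 2)

# Jim bids $5.00 against $2.50 and $2.00: he pays $2.51 for the top spot.
print(second_price_payment([5.00, 2.50, 2.00]))  # 2.51
```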

Quiz2: True Or False? Google Audiences Are Updated On Every Impression, So Advertisers Can Reach Only The Most Relevant Consumers On YouTube.

Answer2: True

Quiz3: On which social network should you share content most frequently?

Answer3: Twitter

Quiz4: You Want To Find New, High-Value Customers Using Their Data. Which Audience Solution Should You Use?

Answer4: Similar Audiences

Meaning of key terms used in this blog:

Avg CPA: The average amount you’ve been charged for a conversion from your ad. Average cost per action (CPA) is calculated by dividing the total cost of conversions by the total number of conversions. 

  • For example, if your ad receives 2 conversions, one costing $2.00 and one costing $4.00, your average CPA for those conversions is $3.00.
  • Average CPA is based on your actual CPA (the actual amount you’re charged for a conversion from your ad), which might be different than your target CPA (the amount you’ve set as your desired average CPA if using Target CPA bidding).
  • Use performance targets to set an average CPA target for all campaigns in a campaign group.
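The average-CPA arithmetic from the example above is just total conversion cost divided by conversion count. A minimal sketch (the function name is mine):

```python
def average_cpa(conversion_costs):
    """Average cost per action: total cost of conversions
    divided by the total number of conversions."""
    return sum(conversion_costs) / len(conversion_costs)

# Two conversions costing $2.00 and $4.00 -> average CPA of $3.00
print(average_cpa([2.00, 4.00]))  # 3.0
```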

Avg CPT: This is the maximum amount you’re willing to pay for a tap on your ad.

Your default max CPT bid applies across all keywords in your ad group unless you specify a max CPT bid at the keyword level.

When calculating the amount of your max CPT bid:

  1. Decide what amount you can afford to spend on a new customer or action. Let’s say it’s $2.50 (U.S.).
  2. Estimate the percentage of customers who tap your ad and who you think will download your app or take your desired action. In this case, you estimate 40%.
  3. Calculate what you can afford to pay per tap: 40% of $2.50 (U.S.) is $1.00 (U.S.). Therefore, set your starting default max CPT bid to $1.00 (U.S.).

Avg CPM: Average cost-per-thousand-impressions (CPM) is the average amount you pay per one thousand ad impressions on the App Store.

CR: The conversion rate (CR) is the total number of installs received within a period divided by total number of taps within the same period.

Dimensions: A dimension is an element of your Apple Search Ads campaign that can be included in a custom report. For example, campaign ID or CPT bid. Dimensions appear as rows in your custom reports.

Impression Share: The share of impressions your ad(s) received from the total impressions served on the same search terms or keywords, in the same countries and regions. Impression share is displayed as a percentage range, such as 0-10%, 11-20%, and so on. This metric is only available in predefined Impression Share custom reports and on the Recommendations page.

Impressions: The number of times your ad appeared in App Store search results within the reporting time period.

Installs: The total number of conversions from new downloads and redownloads resulting from an ad within the reporting period. Apple Search Ads installs are attributed within a 30-day tap-through window. Note that total installs may not match totals of LAT Off and LAT On installs, as additional downloads may come from customers using iOS 14 or later.

LAT Off Installs: Downloads from users who are using iOS 13 or earlier and have not enabled Limit Ad Tracking (LAT) on their device.

LAT On Installs: Downloads from users who are using iOS 13 or earlier and have enabled Limit Ad Tracking (LAT) on their device.

Match Source: This identifies whether your impression was the result of Search Match or a bidded keyword.

New Downloads: These represent app downloads from new users who have never before downloaded your app.

Rank: How your app ranks in terms of impression share compared to other apps in the same countries and regions. Rank is displayed as numbers from 1 to 5 or >5, with 1 being the highest rank. This metric is only available in predefined Impression Share reports and on the Recommendations page.

Redownloads: Redownloads occur when a user downloads your app, deletes it, and downloads the same app again following a tap on an ad on the App Store, or downloads the same app on an additional device.

Search Popularity: The popularity of a keyword, based on App Store searches. Search popularity is displayed as numbers from 1 to 5, with 5 being the most popular.

Search Term: Search terms are keywords and phrases that people have used to find the particular type of app they’re looking for.

Spend: The sum of the cost of each customer tap on your ad over the period of time set for your reporting.

Taps: The number of times your ad was tapped by users within the reporting time period.

TTR: The tap-through rate (TTR) is the number of times your ad was tapped by customers divided by the total impressions your ad received.
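Several of the glossary formulas above (Avg CPM, CR, TTR) are simple ratios. A hypothetical helper illustrating them, with made-up sample numbers:

```python
def avg_cpm(spend, impressions):
    """Average cost per thousand impressions."""
    return spend / impressions * 1000

def conversion_rate(installs, taps):
    """CR: installs in a period divided by taps in the same period."""
    return installs / taps

def tap_through_rate(taps, impressions):
    """TTR: taps divided by total impressions."""
    return taps / impressions

# Sample period: 50,000 impressions, $250 spend, 1,000 taps, 80 installs
print(avg_cpm(250, 50_000))             # 5.0
print(tap_through_rate(1_000, 50_000))  # 0.02
print(conversion_rate(80, 1_000))       # 0.08
```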

Keywords: Keywords are relevant words or terms someone may use when searching for an app like yours on the App Store. With Apple Search Ads Advanced, you bid on keywords to trigger and include your ad within relevant App Store search results — so when an App Store customer types in a search query that uses one of your keywords, your ad could appear.

Apple Search Ads knows a lot about your app and its genre, and will provide a list of keyword recommendations to save you time when you add keywords to a search results ad group. You can also add keywords of your own, and Apple Search Ads will suggest a further set of keywords related to the ones you’ve provided. To add any of them to your ad group, simply click the plus sign next to them.

I’ve managed +$10M in paid media over the last 8 years. Here are a few “less mainstream” FREE tools/websites/extensions I use. Hope this helps!

1. Adveronix

Adveronix is a handy Google Sheets add-on that allows you to export data from Facebook Ads, Google Ads, or any other channel automatically into a spreadsheet daily. You can then connect this spreadsheet to Google Data Studio and have a free connector for most media channels.

2. Polymer Search

Polymer Search has been one of my latest finds and a beneficial tool for creative analysis (and a few other things). For example, I usually test new creatives on Facebook Ads using dynamic creative testing campaigns.

I can then simply export my Facebook Ads data into a spreadsheet, connect it to Polymer Search, and immediately see which creative elements are working the best and which ones aren’t. The Auto-Explainer tool uses AI to immediately sort “Above Average” and “Below Average” creatives.

There’s also a ton more this tool can do – massive potential for media buyers.

3. BuiltWith

Before taking on any new client, one of my first steps is always to look at their website.

Suppose I don’t see anything like Klaviyo, Google Analytics, the Facebook Pixel, or any other marketing-related tech. In that case, this is usually a sign the client might be in a too early stage for me to help them out.

BuiltWith also helps you look into competitors and see what sorts of software they’re using.

4. Ad Creative Bank

The Ad Creative Bank is one of my top sources to find creative inspiration for new ads. It’s pretty simple: just look into the type of ads you want to create and browse through their well-organized library of great-looking ads.

5. Unicord Ads

Same as above, with the difference that you can sort by different industry/niche.

I find the ad quality slightly lower than Ad Creative Bank, but still a great library of ads to discover new brands and find inspiration for yourself!

6. One Click Extensions Manager

If you’re anything like me, your Google Chrome browser has +10 extensions cluttering your view. In short, One Click Extensions Manager allows you to organize all extensions into one single icon near your search tab, which makes everything feel a little more organized.

VidTao.com – YouTube ads searchable by ad spend over time. Perfect for modelling and competitive research.

And not forgetting:

Facebook Ad Library : Shouldn’t be overlooked.

SurferSEO – it has a free tier with a handful of tools
lsigraph.com – for when you have no idea which keywords to use

I’ve audited a dozen Facebook campaigns this month. Here are the common mistakes I’m seeing people make:

Most of these mistakes were from ad accounts that are in the early testing stage and spending under $100/day. The majority of these mistakes are related to what NOT to do during the testing stage in an ad account. I had a few people get audits that were spending higher amounts ($500/day and above) but their situation was very specific and the solution I provided was also specific so it most likely wouldn’t add much value to share that scenario.

  1. Multiple interests and/or behaviors in one ad set (aka stacked audiences)

Doing this defeats the purpose of testing because you don’t know which interest is bringing in the results. There are many other reasons not to do this during testing, including that you could have a great interest stacked with a bad one, which skews the potential results. There are some instances where it might be okay to have 2 stacked interests if the audiences are very small, but what I was seeing people do often is stack over 10 interests and behaviors into a single ad set.

2. Using CBO (campaign budget optimization) too early

CBO is not recommended for the testing stage in Facebook ads. I’ve seen a couple of people do fine with CBO for testing, but it logically doesn’t make sense because you don’t have much control over the budget allocation. This is why ad set budgets are better for testing: when you put $20/day into one ad set and $20/day into another, you know that the test is even. CBO will most likely not even out that budget, even if you set ad set budget minimums and all of those constraints, which is sort of redundant anyway.

Facebook will recommend CBO through messages inside the ads manager, but most of what Facebook says there is not based on your current situation. They don’t know that you are in a testing phase and don’t have enough data for CBO; they just see that you are trying to spend a certain amount per day, so they recommend it. Facebook’s ads manager isn’t smart enough to say “I see you are testing headline combinations – you should switch to ad set budgets” or “I see you are trying to scale your store – you should use a CBO campaign”. You should only use CBO once you’ve properly tested at least 4 audiences with ad set budget optimization.

3. Creating Lookalike audiences with low-quality data as a hail Mary

Yes, lookalike audiences are pretty neat. When you don’t have enough purchases, there are other source data pools that you can create them with. Video views, website traffic, page engagement, etc. The problem is you are pretty much creating a lookalike audience based on people who DON’T buy. Especially if you don’t have anyone buying your product. There is probably something wrong with your targeting as it is and you need to stick to interest targeting and optimizing for purchase conversions. I’ve seen people run a traffic campaign, get a few hundred clicks, and zero sales. This is because you are getting very low-quality traffic from Facebook and creating a lookalike is just going to find more people similar to that low-quality data. If you have a sort of “niche product” and you think that you can’t target them based on interests then you are not thinking outside of the box enough to find interests to test (more on finding the right interests in a later section).

4. Spreading the budget too thin across multiple ad sets (I’ve seen budgets as low as $3/day)

For the campaigns that I audited, I gave each a different recommended daily spend per ad set depending on their budget, niche, etc., so I don’t want to say you should spend X amount per ad set, but $3/day is way too low. If you have a small budget, you are better off testing less and spending more per ad set. If you are putting $3/day into each of 10+ ad sets to test 10 different audiences, you will get better data by spreading that same total across 2-3 audiences.
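The budget math here is simple division: for a fixed daily total, fewer ad sets means more meaningful spend behind each audience. A trivial sketch (names and the $30/day figure are mine):

```python
def daily_spend_per_ad_set(total_daily_budget, num_ad_sets):
    """With a fixed testing budget, fewer ad sets means more
    (and therefore more statistically useful) spend per audience."""
    return total_daily_budget / num_ad_sets

# $30/day across 10 ad sets is only $3 each; across 3 it's $10 each.
print(daily_spend_per_ad_set(30, 10))  # 3.0
print(daily_spend_per_ad_set(30, 3))   # 10.0
```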

5. Interests narrowing and exclusions

I’ve seen some exclusions that make sense, like excluding AliBaba and dropshipping interests when the ads were getting those kinds of comments, but I’ve also seen narrowing where the targeted audience needed to have an interest in fashion AND apparel. Narrowing like this is trying to out-target Facebook, which is usually not a good idea unless you’ve tested both audiences on their own and they are different categories of interests (music taste w/ hobby, industry interest w/ behavior targeting, etc.). At the testing stage this will just push CPM higher than needed.

6. Trying to target high-income people

This is on par with the previous mistake, but I wanted to make this its own blurb. Just because someone has a lot of money doesn’t mean they are going to shop at your store. You aren’t going to have better luck targeting the top 10% of zip codes based on income for your $20 sunglasses. Higher income people resonate better with name brand products that have credibility behind them so you would probably need to build up credibility, stellar branding, and high-quality products before attempting to target high-income people on Facebook.

7. Targeting interests that are too obvious

Your target demographic has many layers to their personality and social media behavior. When you sell a certain product and only target the interest that is literally named the same thing as your product, you are limiting yourself to interests that your competition is probably targeting as well. Some of the best interests I’ve run ads towards on Facebook are two or three degrees of separation from the product. I’ve sold supplements geared towards people who engage in a certain activity, so instead of just targeting “supplement” I targeted “activity” interests. I’ve targeted music interests based on certain elements of a product I was running ads for; the product wasn’t music-related at all, but people who liked it typically listened to a certain type of music.

8. Focusing on cheap link clicks instead of purchases

The amount that you pay for a click does not matter if you are getting little to no sales. You want to pay more for expensive clicks from people that Facebook deems as likely to make a purchase or whatever action you are wanting them to do. I’ve audited a few campaigns where they ran two ad sets and the owner of the ad account concluded that “Ad Set 1” was better than “Ad Set 2” because it got clicks for half the cost. But neither of them got a sale, so neither is better than the other. Or I’ve audited campaigns where the store owner says “this ad did well, it got over 1,000 clicks” but it got zero sales. Typically this was done with an improper campaign setup anyway so none of those clicks were going to convert either way.

9. Not testing ads/audiences long enough

One campaign that I audited turned off an ad after just a few hours of letting it run because Facebook was spending the money too fast. I recommend letting a test run for at least 5 days. If the ad is set up properly, then you will have some good days, some bad days, and some okay days. I’ve seen many times where the best day ever is right after a very bad day. Know that a bad day is still data for Facebook, because it is learning what NOT to do.

10. Hanging on to an audience that stopped working

Audiences, ads, and campaigns can eventually stop working after a certain amount of time, regardless of how well they worked at one point. There are many reasons for this, which would be a whole post on its own, but if you’re struggling to get an audience to work, just move on and try again in the future. I audited a campaign running ads to a specific lookalike audience that was set up very oddly and wasn’t producing very good results recently anyway, so I recommended that they turn it off and try setting it up in a different way that would be more likely to work. The user did not take the advice, because that had been their best performing audience many months ago. This is why you want to be diverse with your targeting, so that when an audience stops working, you don’t cling to it like the overly attached girlfriend meme.

11. Setting up a funnel that is filled with low quality data

Running traffic campaigns is just going to get you a ton of traffic that is most likely never going to turn into purchases. You are more likely to get a purchase from 100 high-quality clicks than from 1,000 low-quality clicks. Traffic campaigns give you the absolute bottom-of-the-barrel traffic that Facebook has to offer. What I see people do is set up a funnel with traffic campaigns at the top and retargeting at the bottom with a campaign optimized for conversions. This makes sense in theory, but in practice you are just continuing to retarget low-quality traffic, and it costs too much money to chase those low-quality clicks over and over when you could go straight for purchase-conversion campaign traffic. Those visitors are more likely to purchase without needing to see the ads 5 times, and there are a lot of impulse buyers within those campaigns. Run purchase-conversion campaigns even if your store has zero purchases.

12. Worrying about 4 steps ahead when they are still on step 1

“I’m spending $50/day but what should I expect when I am scaling and spending $1,000/day?” That is going to be different for everybody but this is one of those situations where they are trying to solve a problem that hasn’t even happened yet and you’re essentially taking focus away from the step you are at right now and projecting it into a future scenario that may or may not happen.

13. Thinking the cost per purchase that they got on their own is what they’ll continue to see

If you are doing things incorrectly with Facebook ads, then you should expect to see results that are not very good. It’s one thing to have a frame of mind like “I’m not getting good results on my own but I think they could be better” as compared to “I’ve been running ads for two weeks with little to no experience and I’m paying too much to get a customer so Facebook isn’t worth it”.

3 Lessons After Spending $350K Since iOS 14.5 Hit

1. Account Structure

For me, it feels as if Facebook likes the account to be even more structured than before. I now rarely use Cost Caps because of delayed sales coming in, and I generally have an account structure like this:

1 – TOF Scaling Campaign

2 – TOF Testing Campaign

3 – MOF/BOF Campaign (Try combining MOF/BOF in 1 Campaign if possible)

All in all, I try to consolidate my spend into as few campaigns as possible, and I still leverage Broad Targeting (No targeting at all). It has been working quite well for me on most accounts.

If you’re spending less than $500/d, I’d say Lookalikes are also impacted. They are not getting as many data points as before, and therefore generally now have a lower value than they used to.

If you’re at the sub-$500/d range, try big interests or just Broad Targeting if your lookalike audiences are struggling.

2. Retargeting

Retargeting has changed a lot for me.

Especially at lower budget accounts, I broadened that retargeting window. Where I previously had 14D ATC, it is now 60 days. I also often combine multiple retargeting audiences, such as Add to Cart and View Content.

All in all, I try to have as few exclusions as possible since, even if you exclude purchasers, for example, those people still see the ads. I’ve noticed this because a lot of new TOF ads are getting comments from people who bought from the brand within the last 1-2 weeks.

So, with exclusions not being as effective, you want to prevent overlaps in retargeting audiences, which is why I consolidate.

3. Patience

Overall, tracking purchases has never been more challenging, and it feels to me as if Facebook is only tracking 40%-60% of all purchases coming from Facebook. This is why it is now essential to look at your overall ROAS (Revenue / Ad Spend).

If your revenue increases when you scale up but your ads manager is not showing any purchases, those purchases most likely come from your ads (unless you’re running a big email promotion, got featured in a big magazine, or something like that, of course).

Purchases tend to show up in bulk for me in the ads manager after a few days, so don’t freak out if you see a low ROAS on your side, as long as the revenue is there. Make fewer day-to-day changes and keep an eye on results for a longer time.
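The blended-ROAS check recommended above can be sketched as below. The function names and sample numbers are my own, and the 40-60% tracking figure is the author’s estimate, not an official number:

```python
def overall_roas(revenue, ad_spend):
    """Blended ROAS: total store revenue divided by total ad spend,
    independent of what the ads manager attributes."""
    return revenue / ad_spend

def estimated_tracking_rate(tracked_purchases, total_purchases):
    """Rough share of purchases the platform actually reported
    (estimated at roughly 40-60% post-iOS 14.5 in the text above)."""
    return tracked_purchases / total_purchases

print(overall_roas(15_000, 5_000))       # 3.0
print(estimated_tracking_rate(45, 100))  # 0.45
```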

Insights From Doing $150K+ a Day in Revenue on Facebook Ads

March 2022 Update on this: For those just seeing this now, Facebook has become significantly harder, but the general strategy here still works. And that’s testing LOTS of creatives, not fancy hacks. We’ve since started spending over $10K+ per day on Tik Tok as well and it’s doing WAY better than facebook for us.

What’s up everyone! Just wanted to drop in and share some insights into what it takes to manage $20K-60K+ a day in spend on facebook in DTC ecom. (I’ve done $150K-250K revenue days on facebook, personal best in terms of ROAS was a bit over $200K in revenue at about $60K in spend on a single one of our brands, not including black friday which was insane)

Just a caveat here, how I run ads might not work for you, especially if you’re super low in spend. Different brands require different strategies, and most importantly, my own strategies are constantly developing. How I test and scale on facebook now is completely different than how it was 6 months ago for example. Also another caveat, some of the tactics we use are really only necessary at a super high level as you’ll see here, if you’re a mom and pop shop they won’t be necessary (for example running multiple facebook pages which I’ll get into).

When I first got started in online advertising, I was always searching for the ‘perfect’ way to run ads through shitty gurus, and honestly there is NO perfect way. I recommend learning the basics and devising your own strategy, which is what I ended up doing. Another thing: at lowish spend (less than $5K-10K a day, I would say), you’re usually going to get decent fluctuations in performance day to day on facebook. Consistency on facebook comes from high spend and feeding the algo as many data points as possible.

I’m fortunate enough to be in a network of the most elite DTC brand owners so I’ve accumulated a ton of knowledge about what works at this level of scale, but this game still requires constant learning! This isn’t set in stone but its just what I’ve found works for me, so here it goes.

Naming Conventions

Consistent naming conventions are super important for analyzing data in ad reporting at a glance. You can figure out your own but here are mine if you’re looking for a quick idea:

Campaign Names:

TOF: Prospecting (Top of Funnel)

BOF: Retargeting

T: Testing

S: Scaling

SS: Super Scaling (these campaigns are typically $2K-10K daily budget)

X.XX numbers at the end of campaign names or ad sets names: date of launch, i.e. 5.15 is May 15

Campaign name example: SS – TOF – CBO – Beast – 6.05

Ad set names:

Targeting – Countries – Age – Placement – Attribution – Date of launch

E.g. Broad – US + CA – 18+ – Auto – 7dc1dv – 3.15

e.g. INT – Theme parks – US – 18+ – Auto – 7dc – 3.24

E.g. LLA – Lookalike (US, 10%) – 2+ Purchase 180 Days – US – 18+ – Auto – 7dc – 2.16

Ad Names:

Brand – FB Page – video/image number – ad copy number – lander/advertorial number – post ID – date of launch

E.g.

PP – vv100 – adc49 – lp3 – 123434341834813 – 8.08

PP – p3 – vv100 – adc72 – lp53 – 123434341834813 – 8.08
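The ad set naming convention above is machine-parseable, which is what makes it useful for reading reports at a glance. A hypothetical parser, assuming the simple six-field form with a spaced en dash (" – ") as the separator (lookalike names with extra descriptor fields would need looser handling):

```python
# Field order taken from the convention:
# Targeting – Countries – Age – Placement – Attribution – Date of launch
AD_SET_FIELDS = ["targeting", "countries", "age", "placement", "attribution", "launch_date"]

def parse_ad_set_name(name, sep=" – "):
    """Split a six-field ad set name into labeled parts for reporting."""
    parts = name.split(sep)
    if len(parts) != len(AD_SET_FIELDS):
        raise ValueError(f"expected {len(AD_SET_FIELDS)} fields, got {len(parts)}")
    return dict(zip(AD_SET_FIELDS, parts))

print(parse_ad_set_name("Broad – US + CA – 18+ – Auto – 7dc1dv – 3.15"))
```

The same idea extends to campaign and ad names, so a spreadsheet export can be grouped by any field.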

 
Account Structure – Testing (Post ID’s)
Testing Campaigns (always running):
T – TOF – ABO – Interest Testing – 5.15
  • Testing random interests found in facebook audience insights, similar interests to winning interests, etc using best 2-4 post ID’s to “feed” the pixel data

  • Audience Insights is being phased out, so this might not be useful in the future

  • Small budget ad sets of $30-50

  • Can dupe winners out 2x in same campaign at slightly higher budget of $50-60

I do this with lookalikes too but I do not run interests or lookalikes with any real budget whatsoever nowadays. I literally run all creative testing and scaling with completely wide open targeting

 
T – 1 – Creative – TOF – ABO – Broad – 2.18
  • Phase 1 testing campaign

  • All new videos/images get launched here

  • I like to do them in batches of 3-4 new videos/images at a time in a single broad ad set with the budget set to 1.5-2x AOV

  • Broad targeting (US + CA, 18+ so we determine how effective the creatives truly are without being skewed by very good lookalikes/interests etc. In the case of more niche products, can try broad interest targeting, like interest ‘fitness’ if selling fitness apparel or ‘coffee’ if selling coffee product, with detailed targeting expansion checked ON)

  • Using best copy variation, best offer, best lander/advertorial

  • Winners graduate to testing phase 2

 
T – 2 – Ad Copy – TOF – ABO – Broad – 2.19
  • Phase 2 testing campaign

  • Take each winning creative from phase 1 and put it into its own broad ad set in this second campaign, testing 4-5 different ad copy angles (one per ad), still using the best lander

  • E.g. ad set naming convention:

    • img192 – Broad – US + CA – 18+ – Auto – 7dc – 3.02

      • Means img192 is the constant image across the 4 ads, with 4 different copy

  • Winning ad copy variants graduates to step 3

 
T – 3 – Lander – TOF – ABO – Broad – 2.19
  • Phase 3 testing campaign

  • Here’s what differentiates us from most ecom brands. We test a TON of advertorials, like 3-5 new advertorials a month focused on different angles. Seriously, at scale this is what separates winners from losers. In this campaign I’ll also include an ad running direct to our top sales lander. We NEVER run direct to a Shopify store; we have a subdomain with dedicated landing pages/advertorials that we run to, with a custom checkout that converts MUCH higher and has a much higher AOV with its upsells.

  • Take winning video/images + copy combo and test 3-5 different landers/advertorials as mentioned

  • E.g. ad set naming convention:

    • vv65 – adc220 – Broad – US + CA – 18+ – Auto – 7dc – 3.21

      • Denotes that vv65 and adc220 were the winning variables from the previous tests; now testing 3-4 different landers/advertorials with this winning combo

  • By now the creative has run through 3 different testing campaigns/phases. If still performing, it can be moved to bigger budget testing to see its scaling potential

  • Can also be moved to the optional step 4 for generating more winning post IDs

  • Also optional: Winner of this test can be moved back to step 2, testing more ad copy focused around the advertorial if a specific advertorial won during this test

T – 4 – Page – TOF – ABO – Broad – 2.19
  • Optional step 4

  • This is another tactic that I don't see many bigger brands using. In this campaign I'll take the winning ads from the previous steps and re-create them on 3-4 different Facebook pages that aren't our main brand page. These are 'blog'-style pages; for example, if you own a furniture store, one page name might be "Home Decor Insider". What you don't want to do is create fake influencer pages like "Katie's Homes", as that's not allowed.

  • Take the winning video/image + copy + lander/advert combo and test it on 3-4 different Facebook pages to generate more winning post IDs, as mentioned.

  • The point of this is multi-fold:

    • Generate as many winning post IDs as possible, because at scale you'll need them

    • Distributes negative feedback score away from your main brand page (negative feedback can become an issue at scale, especially last year with COVID shipping delays)

    • Different pages perform differently in the auction; some page names may resonate with people more and get cheaper CPCs and CPMs.

As you can see, the point of all this testing is to generate as many winning post IDs as possible.

BPA – TOF – ABO – Broad – 2.19
  • BPA stands for best performing ads

  • This campaign is for testing all the winning post IDs from steps 1-4 at higher budgets.

  • I like to run them in ad sets with batches of 2-4 ads

  • Also broad ad sets, but can also try with different LLA’s or broad interests

  • Budget 1.5-3x AOV, and scale by duplicating. I.e. start the ad set at $300; if it's doing well over the course of 3 days or so, dupe out at double the budget ($600). From there you'll get a sense of how it does at higher budgets. Sometimes an ad does very well in the smaller step 1-4 testing but falls flat here. If it was getting decent metrics in testing but falls flat here, you can try duplicating the ad set again, or testing it with a couple of different audiences.
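The dupe-and-double pattern described above amounts to a simple budget ladder. A minimal sketch (the helper name is made up for illustration, not part of any ads API):

```python
def dupe_schedule(start_budget, doublings):
    """Daily budgets for each duplicated ad set when scaling by duping
    out at double the previous budget (e.g. $300 -> $600 -> $1200)."""
    return [start_budget * 2 ** i for i in range(doublings + 1)]

# Start at $300 and dupe out twice at double:
print(dupe_schedule(300, 2))
```

Each entry is a fresh duplicated ad set, not an in-place budget edit, so the original ad set keeps its learning intact.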

DCT Testing (if applicable)
  • DCT seems to work better with lower CPA products, or requires a very high budget for higher CPA products

  • I haven't had much success with dynamic creatives for testing, and especially now with the iOS update, Facebook doesn't show in breakdowns which creative variables are getting the purchases, so they seem essentially worthless.

  • If I were to do creative testing with DCT, I would do something like:

    • One broad ad set for each new video/image

    • $100-300 budget

    • 1x new video/image, 2 best copy + 1 new copy, 1 best headline + 1 new headline

  • Pull winning post IDs out, then follow testing steps 3-4 above to test different landers/adverts/offers/FB pages

  • What I DO like dynamic creative for lately is time-sensitive sales, like Black Friday, where I don't have a ton of time to test stuff. What I usually do is toss in a ton of my existing winning videos/images/copy/headlines (I might just add a Black Friday sale-specific line to the top of the ad copy), run to my best advertorial/lander, and let it rip at about a $1,000/day budget. If it does well after 1 day, I'll duplicate it out into a cost cap/bid cap at $5K-10K a day or whatever

CBO Angle testing:

This is a CBO with 5-7 ad sets; each ad set is a separate angle containing winning ads from the above campaigns, added to their respective angle ad set. Budget is about $1K per day for me. All ad sets use wide-open broad targeting.

SCALING!!!

Here's the fun part. My methods of scaling have evolved with what works on Facebook. The good thing is that at this level of spend, I learn quickly what does or does not work on Facebook anymore, so it keeps me current. I have a few different scaling campaign structures that I'm currently running simultaneously. This is what I'm finding works right now:

Scaling Campaign 1

Lowest-cost CBO -> 1 ad set (completely broad) -> best 6-10 post IDs from the testing campaigns. I'll add new post IDs or turn off ads if performance declines over a week. I'll increase the budget by 20-30% a day if performance has been consistently good over a 2-3 day period.

Scaling Campaign 2

Same as above, except this campaign is made up entirely of non-brand-page post IDs from the page-testing campaigns

^ These campaigns are both often running at $2-5K+ a day

Scaling Campaign 3 – Bid Cap ABO

I duplicate the best ad sets 3x from the CBO angle-testing campaign into a separate ABO campaign, each running at a different bid. Ad set one's bid cap is set to target CPA + 25%; so if my target CPA is $50, the bid cap is $62.50. Ad set two is set to +50% ($75) and ad set three to +100% ($99.99; I round down in this case, as my theory is that if I set the bid to $100, I'd be put into a higher-tiered auction pool and might get outbid, don't quote me on this lol)

I set budgets at about $1K-5K per ad set here, and because you can run one of these campaigns for each angle, you can see how quickly the scale adds up.
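The three bid-cap tiers (including the round-number rounding trick) work out as a small calculation. A minimal sketch; the function name and the treatment of $100 multiples as an "auction threshold" are assumptions for illustration, not documented Facebook behavior:

```python
def bid_caps(target_cpa, premiums=(0.25, 0.50, 1.00)):
    """One bid cap per duplicated ad set: target CPA plus a premium.
    Caps landing exactly on a round $100 multiple are nudged down a
    cent, per the author's $99.99-instead-of-$100 theory."""
    caps = []
    for p in premiums:
        cap = round(target_cpa * (1 + p), 2)
        if cap % 100 == 0:  # e.g. $100 -> $99.99
            cap -= 0.01
        caps.append(cap)
    return caps

print(bid_caps(50))  # $50 target CPA
```

With a $50 target CPA this reproduces the $62.50 / $75 / $99.99 tiers from the text.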

Scaling Campaign 4 – Cost Cap ABO
  • Same as above, but the cost caps for this campaign will be +15%, +25% and +50%

Scaling Campaign 5 – Cost Cap CBO
  • 4 completely broad ad sets, duplicates of each other, all with the same cost cap. This campaign contains the best 6-12 post IDs overall from all testing campaigns. You'll have to play with the cost cap to get it to spend properly. This campaign is generally a big one for me, usually with a $10K daily budget. I'll also set a minimum ad-set spend of about 3-5x CPA for each ad set

The point in having so many scaling campaigns is multi-fold:

  • Prevents reliance on a single scaling campaign on poor days. For example, one or two of these campaigns might perform mediocre one day, but the rest are crushing it and make up for it

  • Optimizes differently and hits different points in the auction by utilizing both CBO and ABO

If you want to go crazy you can also take these exact scaling campaigns and scale them across multiple accounts as well. For that $200K day I had $10K+ cost cap campaigns scaled across like 4 different accounts.

And that's it! Like I said, this is not the end-all-be-all of running ads, just what I've evolved to do after spending big budgets day in and day out for single brands

The most important thing about scaling with this level of spend and what separates the brands who do great online and those who don’t is content. We’re testing about 10-15 NEW video ads per WEEK + variations of winning videos on top of that (different hooks for example)

Audience "hacking" is no longer really a thing and hasn't been for a while. I don't run any interests at scale for the most part, and I barely use lookalikes nowadays either (they worked great last year up until Q3-Q4). Literally just wide-open 18+ targeting. Broad targeting might not work as well if you have a super-niche brand

It's true that Facebook has certainly become a lot more difficult nowadays. We aren't spending as much on it compared to last year (though still a lot, and it's still our primary DTC revenue driver), and we're trying to crack other traffic sources to diversify our cold traffic, especially TikTok, YouTube, GDN, and Snapchat. Snap is spending about $3K-5K a day at a so-so ROAS.

 

How to structure your entire Facebook ad campaign (From prospecting to retargeting)

Having a defined structure and strategy is essential to a successful Facebook ad campaign.

I run an ads agency, and one of the biggest mistakes I see with Facebook ads is a complete lack of structure. Many business owners and advertisers treat Facebook ads like darts, throwing Hail Marys at the board and hoping for a favorable outcome. This is especially apparent when it comes to scaling, which I think is what people struggle with most.

In this post I will give a complete overview of how to structure your Facebook ads, from TOF prospecting to BOF retargeting.

Quick disclaimer, this is just a general overview of strategy and structure. Every ad account should be approached differently and it’s important to tailor your strategy to your brand.

This is what it should look like from a birds-eye view:

TOF – 1 Testing Campaign & 1 Scaling Campaign

MOF- Retargeting Campaign for Soft Interest (Landing page view, video views etc)

BOF – Retargeting Campaign for Heavy Interest (ATC, IC etc)

BOF Post Purchase (Optional) – This is brand dependent and isn’t applicable for all. This is post-purchase retargeting.

TOF – Testing and Scaling

This stage of the funnel should ideally be split into two campaigns, it may require more with bigger accounts.

This entire stage of the funnel only involves cold audiences, a majority of your budget should be allocated to TOF.

  • Testing

The first campaign is the testing campaign. It's important to test EVERYTHING. This campaign should be ABO, and every ad set should be allocated an equal daily spend. Test audiences and creatives for 1 week, kill ad sets that aren't performing, and move winning ad sets and creatives to the scaling campaign.

It's also possible to scale ad sets vertically in the testing campaign. However, be careful not to get overzealous, as you risk sending the ad set back into learning. To scale vertically, slowly increase the ad set budget by 10%-20% every couple of days.
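The compounding effect of those small bumps is easy to underestimate. A toy calculation (the helper is hypothetical, assuming one bump every couple of days):

```python
def vertical_scale(daily_budget, bump_pct, bumps):
    """Compound an ad-set daily budget by a fixed percentage per bump."""
    for _ in range(bumps):
        daily_budget *= 1 + bump_pct
    return round(daily_budget, 2)

# Five 15% bumps (roughly ten days at one bump every other day)
# about doubles a $100/day ad set.
print(vertical_scale(100, 0.15, 5))
```

Put differently, even the "slow" end of this range doubles spend in about two weeks, which is why overzealous bumps risk resetting learning.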

  • Scaling

All your winning ad sets from the testing campaign should be duplicated into the scaling campaign. Sometimes ad sets perform vastly differently when duplicated, which is why we also scale vertically in the testing campaign. Sometimes it's just a matter of duplicating the ad set twice before it performs; this is a result of Facebook's learning phase always being different.

Now, this campaign should ideally be CBO as your goal is to maximise results. You should still be introducing new ad sets from your testing campaign, some people even introduce new ad sets directly to the scaling campaign. At this stage of the funnel, keep an eye on frequency as you don’t want to risk audience fatigue. It’s important to keep introducing new creatives to combat audience fatigue.

The TOF campaign should include both cold interest audiences and cold LLA audiences. As I said, test everything. It’s also important to start with logical audiences. Once you start getting traction you can begin introducing some more obscure interests.

Your copy at this stage should also be problem/solution focused, you are selling your product at this stage.

MOF – Retargeting Soft Interest

This stage of the funnel will only be effective if your cold campaigns were optimised for purchases, otherwise, you will be wasting money retargeting low-quality audiences.

The targeting for this stage is simple. It’s important that you exclude audiences that you will be targeting later down the funnel, such as ATCs, ICs, and Purchases.

The copy is really important at this stage of the funnel. You have already somewhat sold them on the product, hence why they clicked. I’ve found that trust-building copy and creatives are effective. Customer reviews/testimonials can be leveraged to build trust with your audience and convince them that your product delivers on what it promises, or at least, has a real customer base. People like to follow the herd, convince them that the herd buys your product.

Some advertisers skip this stage of the funnel completely, or combine it with the bottom-of-funnel retargeting. That's OK, but I like structure, and separating the campaigns is much more orderly. It also lets you ensure copy and creative are consistent with the funnel stage.

BOF – Retargeting Heavy Interest

This is the campaign that should provide you with the best results in terms of ROAS and CPA. However, as the audience will be much smaller, the daily ad spend will be relatively low.

It’s important that you exclude the MOF audiences, as well as purchasers.

Creative and copy should involve a strong CTA. This audience has already been involved in the purchase process and thus has shown strong interest in your product. We often use discount codes at this stage as a CTA.

You can also get creative with your copy. Remember, this audience already knows your brand and product.

BOF Post Purchase – Optional

This is only applicable for brands with multiple products for sale. Only a very small budget should be allocated to this campaign.

Again, this audience is already very familiar with your brand so use this to your advantage.

As mentioned in the beginning, this is just a basic structure and there are many variations. It’s important that you take your own situation into account when setting up your Facebook ads.

I hope this post has been helpful, it’s not as granular as my previous posts but I think it’s important that people understand how to structure an entire Facebook ad strategy.

 

Top 10 most expensive and cheapest Facebook CPMs

Here are the top 10 most expensive CPMs for February-March 2022:

Australia – $19.57

Denmark – $18.98

Norway – $18.19

United States – $17.26

Singapore – $15.43

Israel – $14.68

New Zealand – $14.23

United Kingdom – $12.40

Canada – $11.86

Sweden – $11.71

Here are the top 10 cheapest CPMs for February-March 2022:

Uzbekistan – $0.06

Belarus – $0.09

Kyrgyzstan – $0.16

Tajikistan – $0.16

Turkmenistan – $0.21

Kazakhstan – $0.22

Guinea-Bissau – $0.41

India – $0.41

Azerbaijan – $0.42

Wallis and Futuna – $0.43

Your poor-performing Facebook ads are not as simple to fix as you probably think they are…

If you are experiencing poor results with your Facebook Ads and have a “quick fix” in mind, please read this post before you attempt to fix it.

When you create Facebook ad campaigns, you know that there are just so many different ways that it can be set up.

Like a dozen different campaign objectives… many conversion optimization options… hundreds (maybe thousands) of interests you can target… lookalike audiences… the different platforms you can place your ad on… video vs. image… square vs. rectangle… long copy vs. short copy…

And the list goes on and on.

So whenever you launch a campaign on Facebook and it isn’t working after 5-7 days, you can see how many different things can be adjusted in an attempt to fix it.

I've worked on hundreds of Facebook ad campaigns and have had thousands of conversations about Facebook ads, either with my clients or with people who need help running their ads and come to me for consulting or to have me personally launch and scale their ads. Sometimes they'll tell me what they think is causing their issues, and what they say ALWAYS falls into two categories. They either say "I have no idea", or they think the fix is one single thing, like "I just need better targeting", or "my ads don't get enough likes", or "I'm just not sure what my daily budget should be, that's my main problem".

And I've made the mistake of taking their word for it, so when I dive into their ad account, I go in expecting to make that one easy fix and find everything else set up properly. Just fix their targeting or budgeting and it'll all be smooth sailing from there. Nope. There are always many more problems that I see as I go through their ad strategy and setup.

I'm going to go a bit deep here: people often apply this type of thinking to big problems in life while assuming the solution is super simple. When people need to lose weight, they'll say "If I could afford healthy food and a gym membership, I would be in great shape", but there are so many other problems, like their consistency or workout routine, and their idea of what "healthy food" is could be inaccurate. Give them free unlimited healthy food and a free gym membership and they'll still be out of shape. And people think "if I had a million dollars, I would be happy with my life", but then they win the lottery and are still miserable.

Maybe there is some sort of psychological pattern that people do to themselves to feel less overwhelmed with their problems? I’m not an expert in that area!

Here's the point I'm trying to make: the fix for your low-performing ads is MUCH more than one single small fix. It's either a lot of little fixes or one big fix.

If I dive into your Facebook ad account and I see horrible campaign structure, improper budgeting, confusing ads, and terrible targeting, then turning on "target people connected to Wi-Fi" is NOT going to fix your campaign. Finding the "perfect interest" to target won't fix it either. But this is the type of thinking I see from the people I talk to with broken ads.

When it comes to fixing broken Facebook campaigns, all of the solutions fall into two main categories, each with its own criterion that MUST be met.

The categories

  1. Campaign structure

  2. Product (or offer)

The criteria that must be met for a winning ad campaign

  1. The campaign structure must cater to what Facebook prefers

  2. The product must cater to what your target demographic prefers

Some things overlap a bit into both categories. For example, the ad design needs to be social-media friendly so that Facebook doesn't throttle your reach with a high CPM, and the ad must cater to your target demographic by making it easy for them to understand what you are selling; that's a bit of both Facebook and the target demographic. And the requirement that your product not violate Facebook's ad policy is clearly something that caters to Facebook's preferences.

I could write a book going over all of the things that fall into these categories that will fix a failing ad campaign, but here are a few real examples I’ve seen inside of ad campaigns over the last few weeks.

1. Budget spread too thin among ad sets and/or ads

An ad account I started working on last week was using dynamic ads with as many ad variations as possible: the maximum number of creatives, ad copy variations, and headlines. They were spending about $100 per day on this dynamic ad, but because they had so many dynamic options, they effectively had 200+ ad combinations in one ad set. Put $100/day into that and you've got 50 cents per day per ad, which is nowhere near enough budget to give Facebook for any ad. If you are going to use dynamic ads or multiple ads in one ad set, try to give each ad a range of $5-15 per day.
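To sanity-check whether a dynamic ad's budget is spread too thin, count the permutations before launching. A minimal sketch with a hypothetical helper (the 10 x 5 x 4 split below is an assumed example of how 200+ combinations arise):

```python
def per_combo_budget(daily_budget, creatives, copies, headlines):
    """Daily spend each creative x copy x headline combination gets,
    assuming the budget is spread evenly across all permutations."""
    combos = creatives * copies * headlines
    return round(daily_budget / combos, 2)

# e.g. 10 creatives x 5 copies x 4 headlines = 200 combos on $100/day
print(per_combo_budget(100, 10, 5, 4))  # 50 cents per combo per day
```

If the result falls below the $5-15/day range suggested above, either raise the budget or cut variations.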

2. Ad talks more about the business or brand instead of the product

This one broke the rule of having the ad and product cater to the target demographic. Especially for newly established brands, your best target demographic is impulse buyers. They typically don't care about how long you've been in business or how your product is made. I'm not saying you should never put that into an ad, but I recommend talking about the product or special offer at the top of the ad text and in the headline, which is the first thing a viewer reads.

3. Targeting is far too restricted and narrowed down

A rule of thumb with Facebook's targeting: make it easy for Facebook to find the people you're looking for. When you add too many constraints to your targeting, Facebook has to work extra hard to figure out who to put your ad in front of, and it makes you pay for that extra work by raising your CPM substantially. The ad account I worked on had 5 entertainment-based interests in the first level, narrowed down by 3 more hobby-based interests that had to match, and then narrowed again to engaged shoppers. So when Facebook finds someone in that first-level audience, it needs to check whether they match the second level, and then the third as well. For best results, just test one or two interests per ad set starting out.

4. Creative is not social media friendly

Your ad doesn't need to be "good" as much as it needs to be designed in a way that Facebook prefers, so that Facebook shows it to a lot of people. This is the first warning sign I encounter when I look at a page's ads in the ads library. I was on the phone consulting someone on their Facebook strategy, and they said, "My biggest problem is the targeting. I have no idea which interest is the right one," but when I looked at their ads in the ad library, it didn't matter who they targeted with those ads: Facebook doesn't like them. Too much text on the creative plus a low-quality image is the most common combination I see. The 20% text rule is no longer in effect, but if you put too much text on an ad, it will still throttle the reach and increase the CPMs (usually by a TON, to the point where it's nearly impossible to counter). If you have some big bold text you want to put on the creative, put it in the ad's headline instead.

And there are many more errors that I have witnessed but I’m sure that a lot of people who read this post are making similar errors to just the few examples I’ve mentioned and I hope this can help them fix their ad account at least a little bit.

 

How to leave less money on the table with your FB ads

I've audited hundreds of ad campaigns, from huge organizations like Greenpeace to startup dropshippers.

There are 9 areas I pay attention to when doing these audits:

  1. Structure

  2. Objectives

  3. Targeting

  4. Placements

  5. Customer Avatar / Personas

  6. Copywriting

  7. Visuals

  8. Landing Pages

  9. Funnel / Strategy

Here are the most common mistakes I see businesses make with each of those Pillars, that hold them back from the ROI they need if they are to grow.

Pillar 1 – Structure

Biggest Mistake: Not using clear naming protocols.

Explanation: This is possibly the least sexy area of FB ads, but if you don't name your campaigns, ad sets, and ads consistently, you end up with unclear names, and everything takes longer when you're navigating your account, reviewing past results, or comparing the performance of two campaigns or ad sets.

How to avoid making the same mistake: The naming convention I recommend is as follows:

Campaign: Objective | description | date, i.e. "Guide download | Overwhelm | Jun 2019"

Ad Set: Description | date | testing variable, i.e.
ad set 1: "Overwhelm | Jun 2019 | email lookalike"
ad set 2: "Overwhelm | Jun 2019 | Interest: Moz"

Ad: Description | date | testing variable | creative variable, i.e.
ad 1: "Overwhelm | Jun 2019 | email LLA | H1C1V1"
ad 2: "Overwhelm | Jun 2019 | email LLA | H1C1V2"
(H = headline, C = ad copy, V = visual)
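A naming convention this regular can be generated programmatically, which keeps names consistent across a team. A minimal sketch (these helper functions are hypothetical, not part of any Facebook tooling):

```python
def ad_set_name(description, date, testing_var):
    """Ad-set-level name: Description | date | testing variable."""
    return " | ".join([description, date, testing_var])

def ad_name(description, date, testing_var, h, c, v):
    """Ad-level name: appends the H/C/V creative-variable shorthand
    (H = headline, C = ad copy, V = visual)."""
    return ad_set_name(description, date, testing_var) + f" | H{h}C{c}V{v}"

print(ad_set_name("Overwhelm", "Jun 2019", "email lookalike"))
print(ad_name("Overwhelm", "Jun 2019", "email LLA", 1, 1, 2))
```

Generating names from one function (rather than typing them ad hoc) is what makes later filtering and comparison in Ads Manager painless.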

Pillar 2 – Objectives

Biggest Mistake: Not using the conversion objective

Explanation: I think this comes down to people not quite understanding how Facebook’s targeting and objectives work.

Here’s an (over-simplified for the sake of clarity) overview:

There are two main factors that affect who sees your ads, your targeting and your objective. By choosing targeting options, you narrow down your potential audience from ‘Everyone who uses Facebook’ down to (for example) ‘people who like pages related to surfing’ or ‘women over 40 within 10 miles of my business’.

Then Facebook takes that group of people and ranks them in order of 'most likely to complete the objective you've chosen', based on the huge amount of historical data it has on everyone.

This means that if you've selected an audience of 100,000 people and chosen the 'traffic' objective, Facebook will decide which of those 100,000 people are most likely to click your ad (based on things like how relevant it thinks the ad is to them, and how often they've historically clicked on things like this), and show it to them in rough order, from person 1 to person 100,000. If you chose the 'video views' objective instead, Facebook will decide which of those 100,000 people are most likely to watch your video (based on things like how often they watch videos like yours), and show it to them in that order. So…

By choosing different objectives, your ads will show to different groups of people within your audience. This isn't a big deal if you have an audience of 30,000, because your ad will likely show to all of them in a short timeframe, but if you've got an audience of 2 million people, you want to show it to the people most likely to do the thing you want. And typically, when you're sending someone to your website, it's because you want them to do something when they're there, i.e. download a guide, buy a product, or book an appointment. So by not choosing the 'conversion' objective, you are likely getting worse results than you could be.

How to avoid making the same mistake:

Read through the following paragraphs to learn when to use the most common objectives:

Traffic – Use this when you’re sending people to your website but don’t have an action for them to do when they get there, or can’t track what they do when they get there – I.e. a blog post/ press release/ new thing you’re doing, or when promoting third party content (where you don’t have access to a tracking pixel on the end site).

Conversions – Use this when you want to send someone to your website AND have them do an action – i.e. getting them to buy something, sign up for an event, or download your awesome guide.

  • Within conversions – you can set up different objectives. Best practice is to start with the end goal you want, i.e. purchases, and then move back along the customer journey (purchase > initiate checkout > add to basket > view content > view landing page) if you don’t get results.

Page Post Engagement (PPE) (This is the same as boosting a post) – Use this when you want to get comments/likes/shares on a post – i.e. content that doesn’t require an action/ for a competition/ getting people to tag their friends. These are also great when you have a messenger bot setup, triggered by a comment.

Video views – If you’re building an audience of people to retarget, then video is likely to be the cheapest route, because you can track anyone who watches 3 seconds or more of your video. Also if you want to get cheap awareness of something that doesn’t include a direct action you want someone to take.

Lead Generation (Lead Forms) – These seem undervalued by many advertisers, probably because getting the leads from the form into anywhere useful, like your CRM, isn't as easy as it should be* – but if you want people to sign up for something or give you their details, and they are already qualified, then lead forms can work great. For local businesses that want leads (i.e. gyms or cleaners), lead forms consistently get me the best results. * Use Zapier to easily get the info people fill in sent to your email/phone instantly.

Reach – Using the reach objective is telling Facebook to not worry about any end objective, but rather to just show your ads to everyone in your chosen audience. This is useful when you’re targeting a small number of people (e.g. retargeting the 2000 people who’ve watched a specific video of yours), or if targeting a small geographical area (e.g the 5km radius around your business) 

Brand Awareness – An underused objective, presumably because it doesn't produce a very measurable end 'result', but brand awareness ads are actually very powerful. Facebook will choose who to show your ads to based on who is likely to remember your brand in a couple of days' time. This makes it very useful for ads going out to a broad cold audience with a view to retargeting them. HOWEVER, I've also found it to be one of the most profitable objectives for retargeting in multi-tiered campaigns (i.e. people who've visited your website but not signed up for your course yet)

Pillar 3 – Targeting

Biggest Mistake (Non-Local): Ignoring custom audiences.

Explanation: The following targeting options are listed (broadly speaking) in order of preference, because they go from warmest to coldest:

  1. Custom audiences

  2. Lookalike Audiences (LLA’s)

  3. Interest targeting

  4. Location

  5. Age & Gender

And obviously, the warmer the audience, the more likely they are to buy from you.

Yet I see a lot of businesses constantly pumping out ads to a cold audience and ignoring the people who have already watched their videos, been to their website, or added a product to their cart. In ecommerce businesses, a retargeting campaign going out to people who have added something to cart but not bought is the highest-ROI campaign 9 times out of 10, and it's the same no matter what you sell.

How to avoid making the same mistake: Plan out a proper customer journey. What are all the different steps that someone goes through between first coming across your business and becoming a long-term customer?

  • Downloading a guide and getting on your email list?

  • Watching a video of you explaining how your process is ideal for them?

  • Browsing your website?

  • Scheduling a call with you personally?

And then create ads for each relevant stage to help guide them along that path. Remember, as they become more familiar with you, you will also speak to them differently.

Pillar 4 – Placements

Biggest mistake: Wasting money on the audience network.

Explanation: There are over a dozen different places where your ads can show, but not all of them tend to be equally effective, and Facebook will often push a high amount of traffic to the Audience Network because it is less saturated. The Audience Network is a huge number of websites and apps where Facebook also shows ads. There are times and places when the Audience Network is great – I've seen it work well for link clicks to blog posts, and as part of a retargeting campaign, allowing you to 'be everywhere' – but too often it's not the right choice.

In recent times (since sometime in 2019) Facebook’s ability to choose the right placement has seemed to massively improve, to the point where I often leave placements on ‘automatic’ because I end up with a better end ROAS, but the audience network is the most common culprit for wasted spend, especially if you’re looking to get video views from a cold audience.

How to avoid making the same mistake:

Go to the ‘Performance and Clicks’ pulldown menu in ads manager, and then use ‘Placements’ in the ‘Breakdown’ pulldown menu to see if there are any Placements which are performing above or below the average.

If you see that you’re spending lots on the audience network and not getting results, then you might want to turn it off in future.

You do this at the ad set level: select the 'Edit placements' radio button instead of 'Automatic' and untick the placements you don't want. Caveat – as mentioned, this is an area I've been encouraging people to play around with less recently; it's worth testing, but I've seen many examples of CPMs increasing significantly when you remove too many placements.

Pillar 5 – Customer Avatar/Personas

When it comes to defining their customer clearly (if you don’t know who you’re selling to, it’s hard to speak to them in an appealing way) there are two related/intertwined mistakes I see made most often.

Biggest Mistake: They don’t define their target customer at all in the first place, and just use generic language that (sort of) appeals to everyone.

Second mistake: If they have defined an avatar, they’ve lumped everyone together into some amalgamation of all their customers.

Explanation: Generic language speaks to (and disqualifies) nobody. Buying is first and foremost an emotional decision, and if we don’t trust the person selling to us, we’re not going to buy, so you need to show that you UNDERSTAND THEM, and UNDERSTAND THEIR PROBLEMS.

How to avoid making the same mistake: First, define all the different groups of people that buy from you. There should be at least three, but if you’ve got loads, just identify the biggest few. Each of these personas will have different opinions, goals, pains, etc., so once you’ve done that, ask yourself the following questions for each one:

  1. For each one we want to know the basic demographics that define them: 

    1. age,

    2. gender,

    3. location,

    4. income…

  2. Then the psychographics that relate to what you’re selling:

    1. What do they want?

    2. What do they care about?

    3. Who are their enemies?

    4. What are their dreams?

    5. What do they believe?

    6. What are their suspicions?

    7. How have they failed before?

    8. What are they afraid of?

Then when you create an ad campaign, create it for just one persona at a time, and craft your message and your offer to match them.

Pillar 6 – Copy/Offer

Biggest Mistake: Copywriting is a huge topic, but you don’t have to be a world-class copywriter to get results from Facebook ads – the biggest mistake I see being made is talking about you, not about your clients.

Explanation: This follows on from the customer persona section above: if you don’t have a clear picture of who your ad is for, then you can’t write for them. And you need to write for them, because talking about yourself is NOT going to appeal to them. “We are the biggest supplier of…” “I am a skilled teacher and can do…” This isn’t interesting to the reader, and will not get them to click.

How to avoid making the same mistake: WIIFM – Every time you write a sentence, read it back and ask yourself (from your reader’s POV) “What’s In It For Me?” If you have a clearly defined picture of who you’re writing for, then you can go through everything you write and make sure that it’s relevant to them, their hopes, dreams, goals, objections, fears…

Pillar 7 – Visuals

Biggest Mistake: Not testing them.

Explanation: The PRIMARY job of the image/video that you use is to get enough attention to stop someone scrolling for a split second, so that they can scan the ad copy to see if it’s relevant/interesting.

If you just chuck up one photo and never try anything else, who knows how much money you’re leaving on the table.

How to avoid making the same mistake: Effective attention-getting-visuals tend to fit into one of 3 categories:

  1. The target market. Show an image/video of the type of person you’re speaking to; they will pay attention because it’s relevant to them. For example, if you run a food truck, a photo of your customers eating an awesome-looking burger in front of a recognizable place/landmark in your town.

  2. The problem/solution/aspirations. Demonstrate either the issue at hand, or your product/service solving that issue; again, people will pay attention because it’s relevant. For example, if you sell waterproof hiking shoes, you could show someone with wet socks looking miserable.

  3. A pattern interrupt. Something that just seems out of place will get attention (read Purple Cow by Seth Godin), but beware of using ‘wacky’ but irrelevant images/videos for the sake of it. These might get people to stop/click, but they likely do nothing to qualify the right people. For example, I saw a FB ad a while back that was just a picture of a cute dog, with a headline along the lines of “Instead of you seeing a boring advert, I’m paying to show you this pup” – it got my attention, but that was that.

So find (or create) a bunch of images and video that fit those categories and see which gets the best Click-Through-Rates and the most conversions.

Caveat- you can of course, also use the video in your ads to teach/inspire/sell directly, but remember that without getting initial attention, your efforts will be passed over, and you still need to be testing different variations.

Pillar 8 – Landing pages
 

Biggest Mistake: S L O W loading times.

Explanation: Your landing page is the page that you send people to if they click on your ad. It could be a simple blog post, a product page on an e-commerce store, a booking page for a cafe, or an opt-in page where someone can give their info in exchange for a download/course/freebie.

Landing pages consistently get less attention than they need, especially compared to the ads sending people there, which is crazy because they can easily increase or decrease the ROI on your ads by 100-500% or more. And the biggest culprit is loading speed: how long it takes for your website to load for the viewer. According to Neil Patel, “Nearly half of web users expect a site to load in 2 seconds or less, and they tend to abandon a site that isn’t loaded within 3 seconds.”

How to avoid making the same mistake: Google ‘pagespeed insights’ and click the top link, then enter your website/page. All those things that appear, they are all costing you money. ‘Eliminate render-blocking resources’ ‘Defer unused CSS’ ‘Properly size images’ – it’s all geeky stuff, and it all counts – so find a website developer and pay them to fix it. The great thing about speeding up your site is that it’s going to pay for itself over and over and over. If you’re paying money every month to run ads, then it’s worth paying a one-off fee to increase your conversion rate overnight.

Pillar 9 – Funnel/Strategy

Biggest Mistake: Randomness

Explanation: To put it bluntly – most businesses don’t have a plan when it comes to FB ads. They tried a couple of ads that worked, but now they aren’t working so well, and they just keep throwing things up without much of a clue.

How to avoid making the same mistake: It’s not complicated or groundbreaking, but it is effective. You find an established business like yours that’s already running ads, and you ‘model’ what they’re doing.

And the great thing that came from Facebook’s privacy push is that all this info is publicly available. Here’s how you find it:

– Find known successful companies on FB, or search keywords for your niche.

– Look for the ‘Page Transparency’ box on the right.

– And if they’re running ads, Facebook will tell you.

– You click on ‘Go to Ad Library’

– And there you go, all the ads that they’re currently running.

– You can click on them, follow their funnel, see what they’re doing.

– And model it for your business.

This isn’t perfect, and you can’t just copy/paste a funnel from another business, but it gives you a starting point, and if you model what a similar business is doing, adapt it to your own products & clients, then test from there, you’re likely going in the right direction, rather than driving around without a map.

There you go – avoid these 9 mistakes and you’re probably halfway there.

 
  1. The hardest part of working on Facebook is working with Facebook.

  2. Set your conversion objective to your actual business goal, even if you can’t exit “Learning Limited”. You’ll get cheaper results.

  3. You can get incredible results if you go with “Broad” targeting, meaning no targeting parameters. But first you have to build up your Pixel data with Lookalikes, retargeting, etc.

  4. Videos are gold.

  5. Play it white hat. The “gurus” who teach you “scaling tactics” like duplicating and running small ad sets either haven’t advertised in 3 years or are just repeating what someone else told them.

These 5 rules will help any budding FB Advertiser. 

 

What’s your favorite FB hack?

Before running an ad for my target country, I run the same ad in low-cost countries (African and Asian countries) to gather an insane amount of Likes, Shares, and Comments.

Then I use the same ad to run for my target country. The likes and shares serve as a social proof that the ad is worth watching.

This is a common strategy 🙂 But you don’t have to run the ad in third-world countries – you can simply run it optimized for Engagement in the US (or wherever your target market is). Engagement-optimized campaign CPMs can go as low as under $1.

It’s always better to accumulate social proof (especially comments) from your native country’s users.

 

How I Scaled An Ecom Brand From $45K To $120K In 30 Days

Your Landing Page/Purchase Flow and your offer.

I rarely see people testing landing pages, and even rarer, I see people talking about offers.

But changing these 2 things allowed me to scale an ecom brand from $45K/m to $120K/m within 30 days.

How?

Improving both Landing Page and Offer resulted in a conversion rate increase from 1.38% to 3.35%.

Let’s dive right into it, and hopefully, you can get something valuable out of this post:

Landing Page/Purchase Flow:

What is the purchase flow?

The purchase flow is each step that a customer has to take to buy the product.

A standard purchase flow usually looks like this:

Product Page – Add to Cart – Cart Page – Checkout – Purchase.

—-

In the brand I’m using in this example, the purchase flow looked like this:

Homepage – Offer $120 AOV Product Bundle (they have the option to add to cart here) – Product Page – Add to Cart – Cart Page – Checkout – Purchase

—–

Which in itself is a rather long flow with a high AOV. Generally speaking, you want to keep your purchase flow as short as possible to prevent drop-offs.

What a short purchase flow might look like:

Product Page – Add to Cart Button – Checkout (Skip cart page) – Purchase

Note: You might want to add upsells on the cart page, so this flow is not always ideal. It could also very well be that you need to explain your product to convince people to buy it, which is why e.g., sending people to a homepage or specific landing page can also be better than sending them straight to the product page. You need to test here.

So, the landing page from people who came from Facebook was the homepage combined with a relatively high AOV product bundle (2 products) for $120.

This did a decent job at selling the product, and the conversion rate was 1.38%, with an AOV of $120.

So our revenue from 100 visitors looked like this:

(100*0.0138)*120 = $165

So, our RPV (Revenue per visitor) was $1.65 ($165/100)
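The RPV arithmetic above can be sketched as a tiny helper (a hypothetical function name; the numbers come from the example, which rounds revenue down to $165 before dividing):

```python
def revenue_per_visitor(visitors, conversion_rate, aov):
    """RPV = (conversions * average order value) / visitors."""
    revenue = (visitors * conversion_rate) * aov
    return revenue / visitors

# Example figures: 100 visitors, 1.38% conversion rate, $120 AOV
rpv = revenue_per_visitor(100, 0.0138, 120)  # ~$1.66 before the post's rounding
```

RPV is a useful single metric because it folds conversion rate and AOV together, so you can compare offers with different price points on one axis.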

This offer was not profitable for the client. The overall ROAS was way below the ROAS Targets, and I knew I needed to change something. However, on the ads side of things, everything looked great.

So, here’s what I changed:

  1. Landing Page

First of all, I started by redirecting the traffic to the product page to see if this affects the conversion rate.

This, however, wasn’t a success because the conversion rate didn’t increase significantly. In addition, the Facebook Ads were still unprofitable, and I knew a greater change needed to come. So, I built my specific landing page for that product bundle.

Since I’m not the greatest at building landing pages or writing landing page copy, here are two excellent guides where I learned a lot:

Landing Page example1

 

How My Landing Page Structure Looked, In Order:

Hero Banner (With a button that automatically scrolls to buy section)

“Featured In” Part

Why “Product” Part

Reviews Part

Guarantee

Product Buy Section

Reviews

How The Purchase Flow Looked:

Landing Page – Scroll Down – Add to Cart – Cart Page w/ new Upsell – Checkout

I follow the structure from the 2 guides above, so if you’re interested in building your own landing page, I highly suggest you check them out!

Note: I always use GemPages for landing pages, so if you’re a Shopify store owner, I’d suggest you use GemPages to build your Landing pages. ShoGun is also pretty good, but I prefer GemPages.

While the new landing page did a slightly better job of selling (conversion rate increased from 1.38% to 1.7%) than either the product page or the homepage, the Facebook Ads were still only barely profitable. So a more significant change needed to be made.

I changed the offer.

2. The Offer

Before, we were selling a product bundle upfront at a $120 AOV with, now, a 1.7% conversion rate, which meant we were getting a $2.04 RPV (revenue per visitor).

Here’s what I changed:

I advertised a lower-priced product with a discount on the landing page (the core product) and instead created an in-cart upsell with the old second bundle product. So if customers bought both products, it was basically the same bundle as before.

How the numbers changed:

AOV: Decreased by 10% (which was to be expected) from $120 to $108.

CV Rate: Increased from 1.7% to 3.15%

RPV: Increased from $2.04 to $3.78, which is a huge change.

So from the start ($1.65 per visitor) to the end ($3.78 per visitor), I was able to increase the revenue per visitor by $2.13, which is an increase of 129% just by changing the landing page and offer.
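As a sanity check, the 129% figure falls straight out of the before/after RPVs (a minimal sketch using the post’s numbers):

```python
def pct_increase(before, after):
    """Percentage increase from `before` to `after`."""
    return (after - before) / before * 100

rpv_before, rpv_after = 1.65, 3.78  # RPV figures from the post
print(round(pct_increase(rpv_before, rpv_after)))  # prints 129
```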

TL;DR: By changing the Landing Page and offer from a brand I was able to increase their revenue per visitor by 129%.

I hope I could show you with this post that it’s not only your Facebook Ads you need to work on. In the end, your ads + homepage are connected, and even something as simple as the offer can have a significant impact on your conversion rate.

 

Facebook Ads: How iOS 14 will affect your campaigns

Campaigns will be affected in a variety of ways including:

  1. Delayed Reporting: Real-time reporting for iOS devices will not be supported, and data may be delayed up to 3 days.

  2. No support for breakdowns: For both app and web conversions, delivery and action breakdowns, such as age, gender, region, and placement will not be supported.

  3. Attribution Changes: The attribution window for all new or active ad campaigns will be set at the ad set level, rather than at the account level. Additionally, going forward, 28-day click-through, 28-day view-through, and 7-day view-through attribution windows will not be supported for active campaigns.

  4. Targeting Limitations: As more people opt out of tracking on iOS 14 devices, the size of your app connections, app activity Custom Audiences, and website Custom Audiences may decrease.

  5. Dynamic Ads Limitations: As more devices update to iOS 14, the size of your retargeting audiences may decrease.

  6. Limited to 8 conversion events per domain: You’ll be restricted to configuring up to 8 unique conversion events per website domain, and ad sets optimizing for a conversion event that’s no longer available will be paused when Facebook implements Apple’s AppTrackingTransparency framework. Businesses that use more than 8 conversion events per domain for optimization or reporting should create an action plan for how to operate with 8 events maximum. (Note: Facebook will automatically configure the most relevant events based on your activity)

  7. (There’s more, especially for mobile campaigns, but you can read about it at the link at the bottom of my post)

Action Items:

  1. We’ll want to preemptively verify our domain ownership in Business Manager. This will allow us to have authority over which conversion events are eligible for our domain should we choose to do so:  Apple dev verification

  2. We’ll have to be vigilant in terms of keeping these changes in mind when assessing campaign performance. For example, our FB ROAS will likely appear to be lower in the coming days and we may not be able to simply look at yesterday’s data when assessing performance. Instead, we may need a 3-day window.

  3. This will likely affect Google Ads as well, but I have not seen Google release a document outlining the specific impacts this will have. For now, we can assume that what’s happening to Facebook will be the same for Google.

Details here

 

How to Make a Good Landing Page: The PPC Advertiser’s Guide

Knowing how to make a good landing page makes a massive difference to your pay-per-click (PPC) advertising campaigns. When you design a landing page that offers a better user experience, you’ll see marked improvements in key metrics, including your Ad Rank (Quality Score & CPC), bounce rate, and conversion rate. As these factors improve, your costs will fall, ultimately helping you earn a higher return on investment (ROI).

In this guide, we’ll show you how to make a good landing page, covering each vital step to make it easy for you to deliver an experience people won’t forget.

What are the most critical aspects when designing a landing page?

When you’re learning how to make a good landing page, you should focus on the following:

  1. Relevancy of landing page

  2. Define your unique selling point (USP)

  3. Show your product/service in action

  4. Tell people what they need to know

  5. Make your landing page mobile-friendly

  6. Simplicity

  7. Make your call to action clear

  8. Remove distractions

  9. Provide transparent policies

  10. Leverage social proof

  11. Minimize loading times

  12. Build engagement

  13. Optimize for voice search

  14. Social Sharing & Feeds

  15. Test and update

Let’s look at each one in more detail.

1. Relevancy of landing page

Here’s a common mistake in PPC advertising:

You promise one thing in your ad, but when people click it, your landing page fails to deliver that promise. For example, your ad may offer a 10% discount on brake pads, but when people arrive on the landing page, it offers a 5% discount on brake discs.

This inconsistency will deter users, and your business will lose out on possible leads and conversions. You must create relevant landing pages that align with your ads — and with user intent.

2. Define your USP (unique selling point)

Is your ad and landing page closely aligned now?

Good. Now, it’s time to define your unique selling proposition, which is how you differentiate your offer from your competition.

Your ad may address a problem that your target audience needs to solve. With a strong USP, you can show prospects that your product or service is the best solution available.

For example, if you are a quality pizza delivery company known for fast delivery times, you must emphasize your quality and your delivery time on the landing page.

3. Show your product or service in action

Humans are visual creatures. If they see a product or service in action, their appreciation of it and desire to have it will increase.

You can experiment with these ideas to improve engagement on your landing page:

  • Still photos

  • Animated explainer video

  • User tutorial video

  • Carousel shots that highlight specific features

  • Infographic

Also, it gives you a chance to explain the product or service in more detail, answering common queries and dispelling doubts before they arise. For example, if your landing page has steps for the user to complete, guide them through in a way that keeps their interest active, like:

Step 1: Fill the form

Step 2: Get the offer

Step 3: Get Paid

4. Tell people what they need to know

Nowadays, there is zero room for fluffy content, especially in paid advertising. Your ads and landing pages must get to the point – fast!

Use your landing page to explain only vital information that prospects need to know, such as:

  • Benefits of your product or service

  • Pricing and purchasing options

  • Business contact details including physical location and phone number

  • Social media channels and email address

Focus on the essential information to maintain interest and build credibility with your landing pages.

5. Make your landing page mobile-friendly

In the mobile age, nobody wants to deal with confusing websites. Therefore, you must create landing pages that offer smooth and straightforward navigation, right to the point of sign-up.

Make your landing pages mobile-responsive, so users on smartphones and tablets can quickly scan through the page, and complete any action that’s required.

Here are a few pointers:

  • Compact images – Make your images small (in dimensions and file size). This will speed up your loading times and make pages easier to view.

  • Reduce typing demands – Keep things simple for users.

  • Avoid auto-downloads – These annoy users by taking up space on their device.

  • Avoid auto-play videos – Intrusive audio can embarrass or annoy users, especially if they are watching videos in a public place.

  • Minimize animations – Use color effects and GIFs sparingly to speed up loading times. Only use animation if it is genuinely required, e.g. to show a demo; otherwise don’t use it.

6. Simplicity

Learning how to make a good landing page may seem scary, but here’s the best tip of them all:

Keep it simple.

Here’s how:

  • Simple and direct copy

  • Clear, direct headlines

  • Minimalist design with plenty of white space to enhance the information rather than hiding it.

  • A clear call-to-action (CTA) that tells users what you want.

  • Fewer colors

  • High-readability

Here is the example of clutter vs. simple and clean landing pages.

Keeping it simple will lead to better results in terms of engagement, clicks, and conversions.

7. Make your call to action clear

No landing page is complete without a strong CTA.

Whatever your product or service is, and however you make your offer, you need CTAs at decision points on the page to drive action.

Consider these strategies for better CTAs:

Less is more

It’s a good idea to avoid having too many CTAs. It may be best to use just one at the very bottom of the page. That being said, having another CTA above-the-fold is a popular choice.

If you decide on that, make sure you also include vital information above-the-fold, so users have those details to guide their decision.

Make it count

Have you ever seen an action button with the word “submit” on it?

This is a common choice, but not a great one because it lacks strength and inspiration. Instead, you want to incite action.

Create a stronger CTA that gets people to react. For example, “Don’t miss out on your FREE download” is better than “download now.”

Step-by-step structures

Outline how easy your visitors will find your product or service to use. With clear, easy-to-follow directions, the value of your offer becomes undeniable — and often, irresistible.

8. Remove Distractions

Here’s something you should keep in mind when you want to know how to make a good landing page:

You must focus on a single conversion goal. Just one.

Therefore, anything else that distracts from your goal is surplus. Get rid of all distractions, external links, and unnecessary CTAs, images, or information that dilutes your message or invites users away from your landing page.

Ideally, you want to streamline the journey on your landing page to funnel leads to your final CTA.

9. Provide transparent policies

As we move into 2020, consumer privacy concerns are at an all-time high. The data breach scandals at Facebook, Yahoo, and Quora caused panic, and the General Data Protection Regulation (GDPR) has taken effect across the globe.

Now, you must be transparent with the processes and practices you use for collecting, storing, and sharing consumer data. If people can’t trust your brand, you’ll never make a sale.

Follow these tips to nurture trust with people:

  • Use a cookie consent banner to notify people that you track on-site behavioral data.

  • Use a terms and conditions page to outline what your business is responsible for, and what it’s not.

  • Share your privacy policy, so people understand how you use consumer data.

  • Publish an FAQ page that answers common questions people may have about your brand, and your products and services.

10. Leverage social proof

Imagine your company provides analytics services to major corporations. Once you have one or two big clients in your portfolio, you can leverage those relationships to convince others to convert.

By getting positive reviews, you’ll have strong social proof from happy customers — that pay well. That can be enough to sway other top-tier clients.

To maximize this strategy, try to get video testimonials. Video content is much more engaging, and it will be a high-impact addition to your landing page.

11. Minimize loading times

Speed is crucial in the customer journey. Nobody wants to wait around for a slow website to load, especially on mobile.

Here are some tips to slash your loading times:

  • Use Accelerated Mobile Pages (AMP), as this is an important ranking factor of Google’s Mobile and Desktop Indexes.

  • Use compact-sized images and files.

  • Minify your HTML, CSS, and JavaScript files.

  • Opt for client-side scripting rather than server-side.

  • Use CDNs (content delivery networks)

  • Reduce redirects

  • Enable compressions
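If you want a rough number to track while applying these tips, a crude fetch timer is easy to write. Note this only measures network transfer, not rendering, and the function name is my own:

```python
import time
import urllib.request

def measure_load_time(url, timeout=10):
    """Seconds to fetch a URL's full response body (network only, no rendering)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - start

# Example: measure_load_time("https://example.com") -> elapsed seconds as a float
```

For real-world numbers, PageSpeed Insights or your browser’s dev tools are better, since they also capture render-blocking resources and time-to-interactive.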

12. Build engagement

Shoppers have a lot to choose from online. You need to work hard to convert prospective new customers, tailoring your marketing tools and techniques to engage your site visitors in ways that they appreciate.

For instance, you can harness data insights with a live chatbot feature, or utilize pop-up discounts that cater to each visitor’s interests.

These techniques keep people on your page and make them consider your offer or brand as an option.

13. Optimize for voice search

In 2019, voice search enjoyed significant growth, primarily driven by the improvements in voice-enabled technology. Alexa, Siri, Cortana, and Google Assistant are battling it out to be king in voice-enabled devices, and with it, they are changing search engine optimization.

How?

Well, people who use voice search tend to do things a little differently than those who do a regular text-based search.

So, when you’re thinking of how to make a good landing page in 2020 and beyond, you should think about the following:

Focus on user intent

When people use voice search, they usually have a particular need, such as:

  • The address or opening hours of a store.

  • The price of a specific product.

  • Whether a business offers a specific type of service etc.

Keep user intent in mind to create content that answers specific questions, providing answers to things people want to know.

Google may be a smart search engine, but it needs all the help it can get. The better you optimize your content, the easier it will be for Google to analyze it — and promote it.

Use schema markup

Schema markup makes it easier for search engines to comprehend the content of a webpage. Consider your website, your audience, and the CRM editing capabilities to use the right schema markup that will help you get noticed by voice searchers.
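As an illustration, here is a minimal LocalBusiness JSON-LD snippet built with Python’s json module. The business details are made up; the printed output would go inside a `<script type="application/ld+json">` tag on the landing page:

```python
import json

# Illustrative LocalBusiness markup using the schema.org vocabulary.
snippet = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Pizza Co.",
    "telephone": "+1-555-0100",
    "openingHours": "Mo-Su 11:00-22:00",
}
print(json.dumps(snippet, indent=2))
```

Voice assistants and search engines read this structured data to answer queries like opening hours or phone numbers directly.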

Use long-tail keywords

Voice search queries are typically conversational in style, often framed as questions or full, grammatically-correct sentences.

You can incorporate these long-tail, conversational keyword phrases into your landing page content to attract targeted traffic. As a bonus, this defined traffic is often cheaper.

14. Social Sharing & Feeds

Show your social feeds and tweets on your landing page to demonstrate your presence on social media. Once a visitor purchases or completes a conversion, make it easy for them to brag about their purchase and share their experience by adding links to all your social media channels. This will increase your credibility and presence on social platforms.

15. Test and update

Like everything else in PPC advertising, your landing pages are not a set-and-forget task. Once you publish your landing pages, you must keep an eye on the analytics to gauge their performance.

Try A/B testing several ideas to determine the most effective version of your landing page. For example, you could test out two versions with different:

  • Headlines

  • Benefits

  • Images

  • CTAs

  • CTA positions

Run variants for a while, gather the data, and then analyze it to identify which version generates more clicks, leads, and conversions.

This process of testing and monitoring should be ongoing, helping you continually update and improve your landing pages, eliminating flaws, and optimizing strong points to create the best possible user experience.

Remember only to change and test one aspect at a time. This makes it easier to determine the impact of the change. For example, test images one week, then pick the best image. Next week, test headlines, then select the best headline. The following week, test CTAs, etc.
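When you analyze variant results, a quick two-proportion z-test tells you whether the difference is likely real or just noise. A minimal sketch with hypothetical conversion counts (function name my own):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: variant A converts 138/10,000 visitors, variant B 170/10,000.
# |z| > 1.96 is roughly significant at the 95% level.
z = two_proportion_z(138, 10_000, 170, 10_000)
```

With these made-up numbers z is just under 1.96, a reminder that an apparently better variant can still be within noise at modest sample sizes.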

Wrap Up

So, now you know how to make a good landing page. By analyzing these areas and putting in the time and effort to optimize each one, you’re sure to see dramatic improvements.

PPC advertising requires patience and strategy, more so than a big budget. Learning how to optimize your landing pages is crucial to maximizing your ROI.

Is Organic Search Traffic from Blog Posts superior to Google Ads?

From my experience Google ads cost me $0.80 per click. Of course it depends on the niche. So it might vary.

Now for $10 I can find someone on Upwork who writes me a 1000 word blog post. Again it depends on the niche. But that’s been my experience.

So $10 spent on Google ads will give me 12 clicks. Wouldn’t a $10 blog post give me much more traffic than 12 clicks over the years? Assuming it has a good headline and maybe some tags.

If I had to bet, I would bet that the blog post over time would far outperform the Google ads. But I don’t yet have the data. So I’m curious what you think about that?

Answer: 

The blog probably would get more unique visitors, yeah. But are they qualified, are you selling them in the blog post, does your $10/article writer understand their needs and have experience on writing copy that converts?

With ads you can filter your keywords to find customers who are warm and actively looking for a solution; it’s a little harder for articles on that front. E.g. a search for ‘welders in hackney’ would be a solid term to target with ads, but an article written on that topic probably wouldn’t rank well enough without a lot of research on the companies, finding out their pricing and services offered, and enough unique, smart content to rank above those services’ own websites.

If your plan is to replace every advert keyword you’re targeting with a $10 blog post, you’ll end up with hundreds of really low quality articles that Google will recognize as low-effort and out of sync with the searcher’s intent and you won’t rank for anything.

A blog post with SEO built in that ranks for specific keywords will have a good ROI. But make sure it is quality content, as $10 content is likely to be worth exactly that.
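The back-of-envelope comparison in the question is easy to formalize (hypothetical helper names; the $0.80 CPC and $10 budgets come from the question):

```python
def ad_clicks(budget, cpc):
    """Clicks a one-off ad budget buys at a given cost per click."""
    return budget / cpc

def blog_breakeven_visits(article_cost, cpc):
    """Organic visits an article needs to match the same spend on ads."""
    return article_cost / cpc

print(ad_clicks(10, 0.80))             # 12.5 clicks from $10 of ads
print(blog_breakeven_visits(10, 0.80)) # the $10 article breaks even at 12.5 visits
```

The question’s “12 clicks” is this 12.5 rounded down; every visit past the break-even point is where the article wins, provided the traffic is equally qualified.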

 

What advice would you give someone wanting to learn google ads in 2022?

  • Working on an actual account will teach you more things than a course

  • Take a course only to cover the basics; to develop strategies, work on an actual account

  • Always look out for new features in ads manager, as Google is often biased towards new features and provides results at cheaper costs

  • Courses are a great start but nothing beats just running ads. Personally I think there is more than enough free info on YouTube to last a lifetime…..and good info too.

    Learn the basics. Understand each feature in the dashboard. Your general marketing experience with FB will help you.

    I would recommend taking a client up on the offer or running ads for yourself to learn.

  • The best way to learn Google Ads is by doing. Do not buy a course! Google has some beginner courses (Skillshop): take some of these and then ask an NGO if you can work for them. For NGOs, Google Ads is free, so it is a nice way to get to know the interface and everything around it. After that, maybe you can go to an agency, where you could learn a lot.

  • Large Shopping Restructure - 1.8m Feed
    by /u/Alert-Comedian9512 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 9:19 am

    Been given a mammoth of a task to restructure a clothing ecomm business that has 1.8m in their feed. Very overwhelming, but have any of you guys been on accounts this big before, and what was your approach to restructuring?

  • is there a way to automatically upload facebook ads
    by /u/osaf32 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 8:04 am

    Is there a way to automatically upload Facebook ads? I'm managing Facebook ads for a client and we've got tons of visuals to work with for 50+ products. Each has post and story sizes. It takes us ages to set up a campaign. Is there any way to automate this? submitted by /u/osaf32 [link] [comments]

  • Geo locations by ad-group
    by /u/alpho1234 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 6:59 am

    If I have a general ad-group targeting "plumber near me" and I create another ad-group targeting a specific location in the headline e.g. "orange county plumber near me" (for a better QS) Would I put "orange county" as a negative keyword in the "general ad-group" so that it does not compete with each other? submitted by /u/alpho1234 [link] [comments]

  • B2B saas how should I set up campaign for a job board?
    by /u/SayNo2Tennis (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 6:22 am

    So I have a client that wants to run marketing for a job board for engineers, with some niche filtering. How do I market? How should I go about targeting? I've been scratching my head for an hour. Human resources managers? Or who else? submitted by /u/SayNo2Tennis [link] [comments]

  • Best practice for adding sitelink extension?
    by /u/chadendra (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 6:10 am

    Hey guys, I'm just starting with Google ads and had a bit of confusion. What are we supposed to add in the URL field for the sitelink extensions? Is it the landing pages or your webpages? Are there any best practices you follow while adding sitelink extension (or just extensions) submitted by /u/chadendra [link] [comments]

  • Does changing ad copy reset learning mode on TCPA campaigns?
    by /u/thegarykellys (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 6:00 am

    I've recently updated a few misspells on the main RSA within a TCPA campaign and I was wondering if this has the potential to trigger the campaign re-entering learning mode? There is one other RSA in the same ad group that has been unchanged. submitted by /u/thegarykellys [link] [comments]

  • Fb internal server error (500)
    by /u/MRK016 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 5:52 am

    I have a question about getting leads on Facebook! On my business account: when I login to the leads centre I keep getting 500 internal server errors! Has anyone experienced this? submitted by /u/MRK016 [link] [comments]

  • "Near me" KW in ad copy, how does that makes any sense?
    by /u/Bholenaught (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 4:41 am

    How can I improve the ad relevance score from below average to above average for KW containing "near me". Adding it in ad copy doesn't make any sense and I haven't found a solution to overcome this. I want to improve the ad relevance in my ad group by improving the ad copy for "near me" KW, so any specific suggestions or tests or ad copy structures for RSA that you might have tried will be really helpful. It's for a Nation-Level Law firm and I'm collecting email leads on a landing page. submitted by /u/Bholenaught [link] [comments]

  • My streetwear brand journey so far...
    by /u/711caleb (Entrepreneur) on May 26, 2022 at 4:22 am

    Hey guys, I am 18 year old Clothing brand owner from Australia, over my teen years I have probably tried 20+ failed business start ups mainly revolving around streetwear, music and art. last September I started a clothing brand, I paid $2,000 for our bulk order in which my manufacturer shipped to the wrong country and it got lost. I never got my money back and after this I lost all motivation and took some time off. in recent months I have started it up again fully rebranded for a fresh start. our first drop is coming up on the 28th and its looking like it will be a success with many people interested in purchasing. I'm working really hard to make this work as it is what I want to do with my life. I've been in fashion school since the start of 2022 and it just isn't for me I very recently dropped out and am taking a risk to put everything I've got into this, it is scary but exciting. I think I just know this is going to work. sometimes you just know. it wont be easy it will probably be quite hard at points but I just know it will work I cant explain why and I don't mean this in a cocky way as that's not who I am but I just think feel like is right, I'm sure some can relate to this? anyways this brings us to where we are now. if anyone would like to support you can find us @ pistolkiss on ig. if anyone wants an update lmk. thanks so much if you read this!! <3 submitted by /u/711caleb [link] [comments]

  • Question about branding
    by /u/Acrobatic-Bat-550 (Entrepreneur) on May 26, 2022 at 4:00 am

    Good day all, I have a question about branding. If someone wishes to sell electronics or furniture and they have a clothing brand, do they have to create a brand for their electronics and furniture or do they put it under their clothing brand? submitted by /u/Acrobatic-Bat-550 [link] [comments]

  • Reddit ads
    by /u/BlondieFunk69 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 3:29 am

    Has anyone here ran reddit ads? If so what was your campaign goal and did you achieve it? Could you share your experience advertising in reddit? submitted by /u/BlondieFunk69 [link] [comments]

  • Entrepreneurs in fashion & clothing, how did you create your own design?
    by /u/GuerroCanelo (Entrepreneur) on May 26, 2022 at 2:03 am

    I heard that people send their clothing designs to be made in China. If I have an idea that cannot be expressed by a “shirt and logo”, how would I tell them what I want made? submitted by /u/GuerroCanelo [link] [comments]

  • Dynamic creative element breakdown (Facebook ads)
    by /u/loic4444 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 1:51 am

    The other day was able to see the Dynamic creative element breakdown but since then it has disappeared, how do i get it back? submitted by /u/loic4444 [link] [comments]

  • Quora for certified experts viability?
    by /u/Will_Tomos_Edwards (Entrepreneur) on May 26, 2022 at 1:49 am

    It would be possible to make a version of quora where: People must pay to ask questions Only verified experts such as doctors/lawyers/professors may answer the questions The verified experts get paid based on how many users view their answers etc., The challenge is that you need users for revenue, and you need verified experts in order to get the users. Such verified experts won't come cheap. Any ideas about whether or not this is viable? submitted by /u/Will_Tomos_Edwards [link] [comments]

  • I see a lot of comments about learning to code. but learn to code what exactly?
    by /u/whidzee (Entrepreneur) on May 26, 2022 at 1:12 am

    What kind of things are you recommending? Sure the language is one piece of the puzzle. But rather than just learning python or c#, what goal are you thinking one should aim for? I've got some friends who are engine and AI programmers for big games companies. But I'm not sure that is what you guys are thinking of when you say learn to code. submitted by /u/whidzee [link] [comments]

  • Is there a blueprint for the expansion phase of your company?
    by /u/ccjjallday (Entrepreneur) on May 26, 2022 at 1:03 am

    We've started to grow in the solar panel industry and we just purchased our first warehouse. We need some guidance in terms of how to properly structure the company, implement processes etc. I've found and followed plenty of startup resources, but I need help getting to the next step. Any help is appreciated. submitted by /u/ccjjallday [link] [comments]

  • What skills or resources do you think are critical for an absolute beginner to learn?
    by /u/Beginning-Ad2676 (Ads on Google, Meta, Microsoft, etc.) on May 26, 2022 at 12:04 am

    Hi, I've always had a major interest in social media marketing and getting pages and posts to perform well online. I don't have much experience but I've run some Google ads for my sister's business back when she had one which is where I first learned about social media marking and digital marketing in general. I really want to learn more and develop my skills. I've so far found some courses from really famous YouTubers like Iman Gadzhi's "Agency Incubator." The courses are really expensive but I found some reddit posts on r/PPC and r/Marketing that said the courses are overpriced and that the fundamentals can be learning online. I was wondering if anyone could share some resources to start learning? Or maybe a list of topics that you think would be fundamental to anyone getting involved in digital advertising to master. I hope this will be helpful to others like me who are just starting out and are worried about buying a very expensive course (of which there are MANY being sold by the YouTube "gurus"). submitted by /u/Beginning-Ad2676 [link] [comments]

  • My Old "Friend" Just Tried To Steal Half Of My Business From Me
    by /u/Jusgil24 (Entrepreneur) on May 25, 2022 at 11:42 pm

    When I first started my entrepreneurial journey I thought like many others that it would be really nice to have a friend accompany me on this journey, and while it was nice for about a month that quickly changed when my friend lost the motivation to keep working. Now a lot of this loss of motivation was that we weren't seeing results right away and while I understood that making this SEO business work was going to take time, my friend on the other hand was not so understanding and jumped ship at the time stating "Maybe it's just not for me man." It angered me to hear that but at the same time, I understood that owning a business isn't for everyone and luckily we didn't have any clients so the breakup wouldn't really be a difficult task. Fast forward to over a year and a half later, and I'm still working on my SEO business that he abandoned and have been able to uplift myself from the dirt and land a handful of loyal clients for myself who have been ranking very well as of lately (fingers crossed it stays like this). All in all, things have been looking really promising and I had no complaints until the other day when that friend recently messaged me out of the blue.... It started with some small talk, then he asked me about the business so I told him about the success I've been having, then he went into how he had lost his job and how life was hard right now. I felt bad for him and may have even offered to hire him as an employee but he then tries to tell me that I should buy him out of his half of the business or be giving him royalties. I was so taken back that I thought this was actually a joke but he continued on as he was dead serious saying that he helped me start the business and was an original founder. Needless to say that conversation went south very fast and aside from no longer being friends I told him he could see me in court if he really felt entitled to his money from my business. 
All in all, nothing has really progressed since then, but I think this was a prime example of why friends in business don't always work out. I'm a little worried about wasting money on a court case despite feeling confident I'd win as nothing of his is on any legal documents for my company. But yeah this was definitely a bummer situation and I'd be grateful for any advice you guys could give me. submitted by /u/Jusgil24 [link] [comments]

  • What are some businesses that can run themselves?
    by /u/noe319 (Entrepreneur) on May 25, 2022 at 11:06 pm

    I’m specifically thinking of businesses that can theoretically run with minimal effort once they are up and going. I know I’d have to put in some years of hard work first but I’d like to have a team run it and allow me to move on to the next thing afterward. Liquor store is one that crossed my mind as a possibility. Seems pretty straightforward. submitted by /u/noe319 [link] [comments]

  • The table is set: I want to resell clothes
    by /u/ItsTyler26 (Entrepreneur) on May 25, 2022 at 10:39 pm

    Teens now love thrift stores and buying clothing like that. I’m thinking of reselling those clothes from my local thrift stores. I just want to hear your thoughts, should I make a website to sell off of and pay for a domain, or keep it simple on EBay? Anything I should know? Thanks. submitted by /u/ItsTyler26 [link] [comments]

  • YouTube retargeting for personal injury lawyer?
    by /u/saasnewbie (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 10:31 pm

    Hi, I have a client who is a personal injury lawyer with a small team. They're in WA state population 41,000 people and they're wanting to grow. ​ We do SEO and Google ads with a total of 1000 visits last month and 34 leads. Is YouTube retargeting ads a good idea to get more leads for my personal injury attorney client? ​ Thanks submitted by /u/saasnewbie [link] [comments]

  • Ads For Crowdfunding Campaigns
    by /u/NegativeStreet (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 10:10 pm

    Anyone on this subdreddit run an ads for a crowdfunding campaign before? If you have any words of wisdom would be appreciated, or if there is a possibility to connect that would be even better! Some general questions- What audiences are most likely to invest? Did you just target people who would have interest in the product? Or did you target people who are interested in investing/angel investing? What channels did you find the best? ​ Ideally input is provided for non-ecommerce related campaigns. However I will take any information I can get! TIA submitted by /u/NegativeStreet [link] [comments]

  • 16 year old trying to get started, any tips?
    by /u/ItsTyler26 (Entrepreneur) on May 25, 2022 at 10:02 pm

    So me and my buddies have been wanting to get something started for a few weeks no. All 3 of us are sophomores in high school and have an interest in future careers in business. We’ve recently been thinking about starting up a clothing line, however, we feel this would be more difficult than we think, considering we would have to protect our logo and name. Another idea we have is going to our local thrift stores, purchasing the best items, and selling them on our website online. What do you guys think? Any tips or thoughts on either of these? Thanks. submitted by /u/ItsTyler26 [link] [comments]

  • Best business to start as a graduated high schooler going into college?
    by /u/Drale25 (Entrepreneur) on May 25, 2022 at 9:04 pm

    Hey guys! I was just getting into the process of entrepreneurship, and I was wondering what the best business for me to get started with is. I am currently a graduated senior from high school that is going to college. I am thankful to say that I have secured a full ride to a very good business school. Therefore, I would like to become more financially independent by creating a business. The marketable skills I have now include Python and Java coding, but not to the advanced level of a professional coder. Do you guys have any recommendations in terms of which fields of business I should try out, examples being advertising, dropshipping, ecommerce, app building, website building, etc? I am thankful to say that if I come up with a good business model, then my parents will give me some startup money because of the full ride that I secured to college. Due to this, I just need to evaluate which business model will give me the best chance of success, plan it out, and then pitch it to my parents. Any advice? submitted by /u/Drale25 [link] [comments]

  • Reaching out to fellow restaurant owners
    by /u/Halo_0001 (Entrepreneur) on May 25, 2022 at 8:23 pm

    I am looking into accepting crypto currency payments at my restaurant. I believe this will create an excitement to draw in more customers due to their being no businesses around me accepting alternative currencys. I found some software companies that can make this possible (like Flexa), but I would love to hear any alternatives. If you accept crypto payments at your restaurant what is the software you use and what is your opinion of it so far? If you do not use a third party software what is your process to charge correct amounts and receive crypto payments. submitted by /u/Halo_0001 [link] [comments]

  • Advice for taking a product from concept to market.
    by /u/BiggerPrint (Entrepreneur) on May 25, 2022 at 8:17 pm

    Looking for advice from someone who has taken an product from concept to market. I have an idea for an improvement on a device for holding sidewalk chalk. There are toys out there that accomplish the same function but my idea is an improvement and adds a fun twist on what’s currently out there. I’ve made a prototype and my kids and neighbor kids have been using it non stop over the last few months and even argue over who gets to use it. It’s a pretty janky prototype but I have a crystal clear idea of the final product and how it should look. I’ve done rudimentary patent searches and don’t see anything quite like my prototype . I’m on a one year parental leave from work and I’m looking for mentorship/ advice/ expertise to help me go from prototype to market. Any advice is greatly appreciated as I would like to dive into this and see where I can take it while I currently have the time. submitted by /u/BiggerPrint [link] [comments]

  • DTC shipping companies
    by /u/L0114R (Entrepreneur) on May 25, 2022 at 7:59 pm

    Does anyone have any websites that do DTC shipping and stock holistic products? Not trying to buy stock of this stuff just want to offer it on my website. ​ Any help is appreciated thanks. submitted by /u/L0114R [link] [comments]

  • A lot of businesses started in the recession in 2008. What businesses worked then?
    by /u/BootstrapGuy (Entrepreneur) on May 25, 2022 at 7:39 pm

    Hey everyone, during the previous recession I was quite young and don't really know what businesses people started/worked very well in a recession. I'm wondering what worked during the previous recession and what didn't? submitted by /u/BootstrapGuy [link] [comments]

  • Anyone got experience selling high ticket products?
    by /u/9goddy (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 7:37 pm

    Hello I’m looking for some guidance, if anyone here has any experience marketing/ selling high ticket items successfully online with some sort of proof that they could show me, could you please get in touch? Looking for some advice on how to approach the best way as a new brand with no sort of previous marketing. Would love to get some sound advice please. submitted by /u/9goddy [link] [comments]

  • Is there any book you would specifically recommend to build an online audience?
    by /u/montecristo1212 (Entrepreneur) on May 25, 2022 at 7:01 pm

    I feel like I am talking in a empty room on Twitter, I must be doing something wrong. submitted by /u/montecristo1212 [link] [comments]

  • Do our Amazon ads spend and serve on Google search?
    by /u/slikmystr (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 6:38 pm

    I noticed that our organic Google revenue and traffic dropped right as we ramped up our Amazon ad spend. This makes me believe Amazon is distributing some of that spend to Google search. Can anyone confirm? submitted by /u/slikmystr [link] [comments]

  • Has anyone had experience transitioning from entrepreneurship to a 9-to-5 “normal” job? Market collapsed.
    by /u/i_fly_a320 (Entrepreneur) on May 25, 2022 at 6:15 pm

    Hi all. In high school, I started an online business that grew exponentially (I developed and sold digital assets for a popular platform). It continued growing throughout college, and I was making at its peak about $120k a year (it was registered as a sole-proprietorship). I continued running the business after graduating from college (I graduated with an Information Systems degree, I’m currently 24). Unfortunately, the online platform that I built digital assets for announced that it was shutting down. My sales since the start of this year have collapsed. My products only work with the online platform. They do not work elsewhere. Initially, I tried to “re-adjust” my skill set to other areas but soon realized that my business is no longer sustainable. I’ve made the decision that I need to start shutting down the business, and somehow transition to a 9-to-5. My finances are running low at this point and I need to increase income very soon. I’d want to ideally transfer to a normal job in tech, either as a software engineer or a business analyst. I don’t have a CS degree and my software dev knowledge is limited to web dev so I don’t think the SWE approach is feasible at the moment. My issue is, aside from running the business, I don’t have any other work experience. I’ve never worked a normal job. So my resume is quite empty at the moment. Have any entrepreneurs gone through something similar? Any advice you can give in this job search? How do you make up for lack in employment history? And how do I convey to my recruiter that my business was actually profit-generating and not just a side hustle with low income? Really feeling a little bit lost. submitted by /u/i_fly_a320 [link] [comments]

  • What kind of business do you run?
    by /u/1keric (Entrepreneur) on May 25, 2022 at 6:09 pm

    What jobs did you guys work in before you started your entrepreneurial journey and how is it going for you guys now submitted by /u/1keric [link] [comments]

  • Block certain users from seeing ads on google
    by /u/ChrystoReddit (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 6:00 pm

    Hello ! We have a client that asked us if it was possible to block certain people from seeing our ads on the google search network. Basically we are running an ad account for a client that gets quotes from different entrepreneurs for different construction contracts (roofing, demolition,etc) and we would like to block those entrepreneurs from seeing the ads. The only thing we have is their emails. submitted by /u/ChrystoReddit [link] [comments]

  • Can someone please verify our custom FB Pixel & CAPI & CAPI Gateway are sending the correct data in the correct format? *screenshot included*
    by /u/Tyzing (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 5:53 pm

    I want to make sure we're sending the correct info in the correct format for one of our custom events. For "clicked_apply" to trigger they have already provided us: country, city, email, External ID, first name, last name, IP address, phone, state, user agent, zip. When using the events manager testing tool I'm not really sure how the data should look from the browser due to the advanced matching. Should I see that all of that data was passed? Right now it shows it is being passed as user_data (all hashed) but on the bottom line where it says "Advanced matching parameters" it just has: IP address, and user agent" Should we also see each of the contact fields? You can see when doing the test it does deduplicate but my fear is that is only based on the same event_ID field passing, and FB is not getting the user entered data to use for better matching. Am I wrong and this is exactly how it should be displayed or does something need to change? ​ Here's a screenshot of the events manager test: https://drive.google.com/file/d/1sI5amfbsufbLKIQMDDIvFr9XNau5T3gM/view?usp=drivesdk submitted by /u/Tyzing [link] [comments]

  • What to do when you feel passion for your business fading away?
    by /u/Hockeyiscool2021 (Entrepreneur) on May 25, 2022 at 5:01 pm

    Own a streetwear/sneaker resale shop. Not sure if it’s just a depression thing which I’ve had a history with but just recently the money hasn’t been great and it feels like a chore running the day to day operations and it’s got my spirits down. This economic outlook certainly isn’t helping either. Thoughts? submitted by /u/Hockeyiscool2021 [link] [comments]

  • Combine TARGET CPA and individual keyword bidding?
    by /u/Donttellmehow2feel (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 4:57 pm

    I have a campaign with Maximize conversions (TARGET CPA) setting, and inside I have a regular ads group and a dynamic search ad group. I have heard that Maximise conversions with Target CPA lets the algorithm take care of everything and there is no point in manual bidding, is that true? At the moment, my dynamic search ad generates much more clicks and conversions than the regular one. But I would also like to bid higher on some good performing keywords on the regular ad, is it worth doing? submitted by /u/Donttellmehow2feel [link] [comments]

  • Any Tech Franchise Recommendations?
    by /u/Sequel177 (Entrepreneur) on May 25, 2022 at 4:47 pm

    Are there any tech franchises out there that a person can invest in and buy? Please mention below. submitted by /u/Sequel177 [link] [comments]

  • Meta Forecasting tools
    by /u/CageHunt (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 4:35 pm

    I need to forecast spending for a client. Campaign Planner link my rep sent is not working. Is this feature still active? What other in-platform tools/methods do you use to forecast impressions and engagement? submitted by /u/CageHunt [link] [comments]

  • At what stage did you make the decision to run your business full-time, rather than along your previous career?
    by /u/J4MEJ (Entrepreneur) on May 25, 2022 at 4:20 pm

    Title submitted by /u/J4MEJ [link] [comments]

  • How do I pick up my goods from port?
    by /u/Alanm2000 (Entrepreneur) on May 25, 2022 at 3:11 pm

    Hello! I am ordering products that sum to 12CBM from china to Miami port I have already paid the sea fright forwarder, I am now wondering how I will pick up these goods once they reach the Miami port? I am told by my supplier I will need to arrange a local forwarder to pick up the goods, does this mean I will need to hire a local trucker to pick them up and bring them to my address? If so, what is the best way to go about finding a local trucker? Thanks in advance! submitted by /u/Alanm2000 [link] [comments]

  • Tik Tok algorithm (conversion events) compared to Facebook?
    by /u/J_masta88 (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 2:57 pm

    Hey guys, what would you say is tik toks algorithm strength compared to Facebook ads? What I mean is once Facebook gets 30 to 40 conversions, it can now pretty much drill down on your target customer exactly (Learning phase). How good is tik toks algorithm in this regard ? submitted by /u/J_masta88 [link] [comments]

  • Microsoft Ads Conversions + HubSpot Forms
    by /u/JessBaz (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 2:34 pm

    Looking for some guidance on how to track Microsoft Ads Conversions via HubSpot form submissions. I'm currently using GTM to track everything, but if this requires a separate script then I'm happy to add. We are tracking these correctly in GA, but the same setup doesn't seem to work for Microsoft Ads. Been on with their support team and have gotten nowhere. submitted by /u/JessBaz [link] [comments]

  • Too many founders miss the point of early-stage market research...
    by /u/NickFreiling (Entrepreneur) on May 25, 2022 at 2:28 pm

    Pre-launch market research isn't about trying to determine exactly what % of the market is interested in your product/service. It's about identifying your lowest-hanging fruit segments. Say you survey 1,000 general population consumers to get feedback on your new personal finance app. Say you learn that 95% of them would download your app. Ok. Great. You've learned nothing. Because no, 95% of consumers are not going to download your app. Unless your app's name is Facebook. Instead, it's better to present your idea in such a way that people have obvious reasons to say no. Like, put the price up front. Or explain that it doesn't work for people who don't bank in the US. Whatever. Just lead with that. Yes, it means fewer people will express interest. But... It's more accurate than "tricking" people into thinking they want your app. You'll be left with a cohort of respondents that share some common characteristics -- dynamics you can use to target your marketing later on. In case this is too confusing, consider these two pitches: "We did a survey, and just about everyone wants this!" vs. "We did a survey, and new moms are 14X more likely than the general public to want this!" Which do you think is more compelling to a seasoned investor? Which leaves you with more action items? Which one helps you toward finding product-market fit? submitted by /u/NickFreiling [link] [comments]

  • Reviews on Performance Max?
    by /u/Alert-Comedian9512 (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 2:26 pm

    Has anyone moved over to PMax from Smart Shopping? Was performance better? submitted by /u/Alert-Comedian9512 [link] [comments]

  • Smart Shopping To PMax
    by /u/Alert-Comedian9512 (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 1:25 pm

    What are the best ways to ensure a smooth transition from Smart Shopping to PMax? submitted by /u/Alert-Comedian9512 [link] [comments]

  • What's the best way to grow IG/FB/Twitter followers via PPC?
    by /u/sporenotic (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 12:39 pm

    One of my clients is launching a new brand, and they want to gain followers on their socials fast for social proof (currently only 100 or so friends/family followers). What's the best way to do this? Instagram: No good way to do this? Can you invite people based off engagement with your posts? I can't see a way to do this. Potentially create an ad including content asking people to follow, and send users to the company's IG profile. Facebook: Post content, promote with engagement campaign, invite people who react to the content manually. Run a Page Like ad with a Like Page CTA Button Twitter: Run follower campaigns All: Create amazing content, ask viewers to follow in content/copy and hope that they do? Promote competitions with 'must be following'/'tag a friend'/comment' requirements ​ Any suggestions much appreciated! Fwiw these campaigns will only make up a tiny proportion of our overall budget, as valuable leads are obviously the overall goal. submitted by /u/sporenotic [link] [comments]

  • What determines which ads get impressions?
    by /u/salko_salkica (Ads on Google, Meta, Microsoft, etc.) on May 25, 2022 at 12:28 pm

    I'm doing Google Ads for the first time in my life so forgive me for asking such a noob question. - We sell a B2B SaaS product- Our budget is $500/day- - We are bidding phrase match and exact match on high-intent keywords - SEMRush shows traffic on these keywords to be roughly around 500 a month in the geographical area we target - Our impressions of these search terms vary between 2-15 after a whole month. That's very low, and of course, we get no clicks; not even data to make conclusions from. What can we do to increase our impressions? How does Google even determine this? Bidding is set to automatic, if that matters. submitted by /u/salko_salkica [link] [comments]

  • Wantrepreneur Wednesday! - May 25, 2022
    by /u/AutoModerator (Entrepreneur) on May 25, 2022 at 9:00 am

    Please use this thread to ask questions if you're new or even if you haven't started a business yet. Remember to search the sub first - the answers you need may be right at your fingertips. Since this thread can fill up quickly, consider sorting the comments by "new" (instead of "best" or "top") to see the newest posts. submitted by /u/AutoModerator [link] [comments]

  • My business is faceplanting! Do I kill it? (and live in shame)
    by /u/Physicist4Life (Entrepreneur) on May 25, 2022 at 1:50 am

    I've spent 4 years developing dozens of hardware prototypes, writing firmware, software, drivers, and a user interface, establishing an overseas CM, validating, and testing. Everything works. To pay for tooling I made a business plan and brought in ~$80k of investor money. Probably 4000+ hours of blood, sweat, and tears on nights and weekends. I've had to provide for my family at the same time by working a steady job, so basically I'm a mess. Now the product is for sale and nobody is purchasing it: literally 0 customers after 2 months on sale, with ~800 unique visitors/month.

    Recently I read two books, Zero to One and Organizational Physics, and a theme is developing: I messed up, badly. I should have had an early adopter lined up and vetted this idea better before building it, but I'm not a marketing guru. I don't run focus groups, cold call, or build a social media following. So now what?

    It is an industrial temperature data logger. My customers were supposed to be food & beverage pasteurization companies; think canned peas, beer (small scale), or baby food. The core issue is that the market perceives this as a solved problem. Established players have provided these data loggers for 10+ years. Competing products cost 5-8x what my data logger costs, but it doesn't matter: I can't figure out how to differentiate or get any traction. The competition uses their profits to pay for sales reps, so that's what the industry is used to. This is a common problem among engineer-type entrepreneurs: they build a product without understanding the market, don't have early adopters lined up, and lack a clear vision of market conditions.

    Three choices?

    1. Kill it. Tell my investors that it flopped and, since they're friends and family, live in shame among them. This is what most business types would say to do. It feels like killing a child, but maybe it's what I have to do.
    2. Wait & see. Hope for a niche following. Maybe this product will lead to a connection or to the next product, which will sell? Nobody would advise this, because no actionable data is pouring in and no decisions are forthcoming, but it's where I'm at right now.
    3. ??? Maybe someone here can help?

    Update: Some great advice below; thanks to everyone who commented. My main takeaway is that I need to spend real time, effort, and $$ on sales & distribution. Either I have to do it or I need to hire/partner with someone who will. Google Ads are ineffective, but online content can be effective. I won't give up yet. It's worth a few more months of trying things and adapting to new data. submitted by /u/Physicist4Life

Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps

Azure Administrator AZ-104 Exam Questions and Answers Dumps

Google Certified Professional Cloud Architect is among the highest-paying certifications in the world, with an average salary of $175,761.

The Google Certified Cloud Professional Architect Exam assesses your ability to:

  • Design and plan a cloud solution architecture
  • Manage and provision the cloud solution infrastructure
  • Design for security and compliance
  • Analyze and optimize technical and business processes
  • Manage implementations of cloud architecture
  • Ensure solution and operations reliability

The Google Certified Cloud Professional Architect covers the following topics:


Designing and planning a cloud solution architecture: 36%

This domain tests your ability to design a solution infrastructure that meets business and technical requirements and accounts for network, storage, and compute resources. It also tests that you can create a migration plan and envision future solution improvements.

Managing and provisioning a solution Infrastructure: 20%

This domain will test your ability to configure network topologies, individual storage systems and design solutions using Google Cloud networking, storage and compute services.

Designing for security and compliance: 12%

This domain assesses your ability to design for security and compliance by considering IAM policies, separation of duties, encryption of data and that you can design your solutions while considering any compliance requirements such as those for healthcare and financial information.

Managing implementation: 10%

This domain tests your ability to advise development and operations teams to ensure successful deployment of your solution. It also tests your ability to interact with Google Cloud using the GCP SDK (gcloud, gsutil, and bq).



Ensuring solution and operations reliability: 6%

This domain tests your ability to run your solutions reliably in Google Cloud by building monitoring and logging solutions, quality control measures and by creating release management processes.

Analyzing and optimizing technical and business processes: 16%

This domain tests how you analyze and define technical and business processes, and how you develop procedures to ensure the resilience of your solutions in production.

Below are the Top 50 Google Certified Cloud Professional Architect Exam Questions and Answers Dumps. You will need to have the three case studies referred to in the exam open in separate tabs in order to complete the exam: Company A, Company B, and Company C.

Question 1:  Because you do not know every possible future use for the data Company A collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?

 A. Have the vehicles in the field stream the data directly into BigQuery.

B. Have the vehicles in the field pass the data to Cloud Pub/Sub and dump it into a Cloud Dataproc cluster that stores data in Apache Hadoop Distributed File System (HDFS) on persistent disks.

C. Have the vehicles in the field continue to dump data via FTP, adjust the existing Linux machines, and use a collector to upload them into Cloud Dataproc HDFS for storage.

D. Have the vehicles in the field continue to dump data via FTP, and adjust the existing Linux machines to immediately upload it to Cloud Storage with gsutil.

ANSWER1:

D

Notes/References1:

D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.
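As a quick sanity check on that claim, a back-of-the-envelope calculation (with an illustrative, assumed per-VM throughput figure, not an actual Compute Engine limit) shows why a small fleet of load-balanced VMs is enough for 9 TB per day:

```python
import math

# What sustained throughput does 9 TB/day of vehicle data require?
TB = 10**12  # bytes (decimal terabyte)
SECONDS_PER_DAY = 86_400

daily_bytes = 9 * TB
required_mb_per_s = daily_bytes / SECONDS_PER_DAY / 10**6
print(f"Sustained ingest rate: {required_mb_per_s:.0f} MB/s")

# Assume each load-balanced VM can push ~250 MB/s to Cloud Storage
# (a hypothetical per-VM figure used only for illustration).
per_vm_mb_per_s = 250
vms_needed = math.ceil(required_mb_per_s / per_vm_mb_per_s)
print(f"VMs needed at {per_vm_mb_per_s} MB/s each: {vms_needed}")
```

Roughly 104 MB/s sustained is well within reach of a handful of VMs, which is why the FTP-plus-gsutil approach is both sufficient and cheap.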

Reference: Streaming inserts, Apache Hadoop and Spark, 10 tips for building long-running clusters using Cloud Dataproc


Question 2: Today, Company A maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?

A. Execute queries against data stored in a Cloud SQL.

B. Execute queries against data indexed by vehicle_id.timestamp in Cloud Bigtable.

C. Execute queries against data stored on daily partitioned BigQuery tables.

D. Execute queries against BigQuery with data stored in Cloud Storage via BigQuery federation.

ANSWER2:

B

Notes/References2:

B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.
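The low-latency part hinges on row-key design. A common time-series pattern, sketched below with hypothetical names, is to key rows on `vehicle_id` plus a reversed timestamp so each vehicle's rows are contiguous with the newest events first, letting a single prefix scan fetch the last 24 hours:

```python
# Sketch of a Bigtable-style row key for per-vehicle time-series data.
# (Illustrative design pattern, not an official schema.)

MAX_TS = 10**13  # larger than any epoch-milliseconds value we expect

def row_key(vehicle_id: str, epoch_millis: int) -> str:
    # Reversing the timestamp makes newer events sort first
    # lexicographically, which Bigtable scans rely on.
    reversed_ts = MAX_TS - epoch_millis
    return f"{vehicle_id}#{reversed_ts:013d}"

newer = row_key("veh-42", 1_650_000_000_500)
older = row_key("veh-42", 1_650_000_000_000)
print(newer < older)  # newest-first lexicographic order
```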

Reference: Bigtable time series, BigQuery

Question 3: Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?

A. Use multiple connectivity subsystems for redundancy. 

B. Require IPv6 for connectivity to ensure a secure address space. 

C. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.

D. Use a functional programming language to isolate code execution cycles.

E. Treat every microservice call between modules on the vehicle as untrusted.

F. Use a Trusted Platform Module (TPM) and verify firmware and binaries on boot.

ANSWER3:

E and F

Notes/References3:

E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.

F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.

Reference 3: Trusted Platform Module

Question 4: For this question, refer to the Company A case study.

Which of Company A’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A. OpEx/CapEx allocation, LAN change management, capacity planning

B. Capacity planning, TCO calculations, OpEx/CapEx allocation 

C. Capacity planning, utilization measurement, data center expansion

D. Data center expansion, TCO calculations, utilization measurement

ANSWER4:

B

Notes/References4:

B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because Company A is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.

Reference: Cloud Economics


Question 5: For this question, refer to the Company A case study.

You analyzed Company A’s business requirement to reduce downtime and found that they can achieve a majority of time saving by reducing customers’ wait time for parts. You decided to focus on reduction of the 3 weeks’ aggregate reporting time. Which modifications to the company’s processes should you recommend?

A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

ANSWER5:

C

Notes/References5:

C is correct because using cellular connectivity will greatly improve the freshness of data used for analysis from where it is now, collected when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.

Question 6: Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?

A. RPM/DEB

B. Containers 

C. Chef/Puppet

D. Virtual machines

ANSWER6:

B

Notes/References6:

B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.

Question 7: Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?

A. Have each device create a persistent connection to a Compute Engine instance and write messages to a custom application.

B. Have devices poll for connectivity to Cloud SQL and insert the latest messages on a regular interval to a device specific table. 

C. Have devices poll for connectivity to Cloud Pub/Sub and publish the latest messages on a regular interval to a shared topic for all devices.

D. Have devices create a persistent connection to an App Engine application fronted by Cloud Endpoints, which ingest messages and write them to Cloud Datastore.

ANSWER7:

C

Notes/References7:

C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
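The device-side half of answer C can be sketched as a small buffer-and-flush loop: readings accumulate locally while the device is offline and drain to one shared topic when connectivity returns. The `publish` callable below is a stand-in; a real device would use a Pub/Sub client library.

```python
from collections import deque

class SensorBuffer:
    """Buffer readings offline; publish the backlog when connected."""

    def __init__(self, publish):
        self.backlog = deque()
        self.publish = publish  # callable(topic, message)

    def record(self, reading: dict) -> None:
        self.backlog.append(reading)  # always cheap, works offline

    def flush(self, connected: bool, topic: str = "room-status") -> int:
        """Publish the backlog if connected; return messages sent."""
        if not connected:
            return 0
        sent = 0
        while self.backlog:
            self.publish(topic, self.backlog.popleft())
            sent += 1
        return sent

sent_messages = []
buf = SensorBuffer(lambda topic, msg: sent_messages.append((topic, msg)))
buf.record({"room": "A-101", "occupied": True, "ts": 1})
buf.record({"room": "A-101", "occupied": True, "ts": 2})
buf.flush(connected=False)        # offline: nothing sent
print(buf.flush(connected=True))  # back online: backlog drains
```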

Question 8: Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?

A. Load logs into BigQuery. 

B. Load logs into Cloud SQL.

C. Import logs into Stackdriver. 

D. Insert logs into Cloud Bigtable.

E. Upload log files into Cloud Storage.

ANSWER8:

A and E

Notes/References8:

A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.

E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.

References: BigQuery, Stackdriver, Bigtable, Storage Class: Coldline

Question 9: You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. 

B. Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance public IP.

C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.

D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

ANSWER9:

C

Notes/References9:

C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.

Reference: Load balancing, Load Balancing Health Checking

Question 10: Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

A. Add each tier to a different subnetwork.

B. Set up software-based firewalls on individual VMs. 

C. Add tags to each tier and set up routes to allow the desired traffic flow.

D. Add tags to each tier and set up firewall rules to allow the desired traffic flow.

ANSWER10:

D

Notes/References10:

D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.
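The tag-based rules can be pictured as a small allow-list keyed on (source tag, target tag), as in this toy model (conceptual only; real VPC firewall rules also carry priorities, protocols, and ports):

```python
# Toy model of tag-based firewall evaluation from answer D.
ALLOW_RULES = [
    ("web", "api"),  # web tier may call the API tier
    ("api", "db"),   # API tier may call the database tier
]

def is_allowed(source_tag: str, target_tag: str) -> bool:
    # Traffic flows only if some rule names this (source, target) pair.
    return (source_tag, target_tag) in ALLOW_RULES

print(is_allowed("web", "api"))  # allowed
print(is_allowed("api", "db"))   # allowed
print(is_allowed("web", "db"))   # blocked: no matching rule
```

Because every instance in a tier scales out carrying the same tag, the rules keep working no matter how many VMs exist.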

Reference: Using VPC, Routes, Add/Remove Network

Question 11: Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?

A. Use gsutil.

B. Use gcloud.

C. Use GCS REST API. 

D. Use Storage Transfer Service.

ANSWER11:

A

Notes/References11:

A is correct because gsutil can write data directly to Cloud Storage and, run with the -m flag for parallel uploads, it maximizes transfer speed for a one-time 5 TB migration from on premises.
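To see why transfer speed is the deciding factor, here is a rough timing estimate for 5 TB at a few link speeds (the bandwidth figures are illustrative assumptions, not measurements):

```python
# Rough wall-clock estimate for a one-shot 5 TB upload.
TB = 10**12
data_bytes = 5 * TB

def hours_to_transfer(bandwidth_gbps: float) -> float:
    bytes_per_s = bandwidth_gbps * 10**9 / 8  # bits -> bytes
    return data_bytes / bytes_per_s / 3600

for gbps in (0.1, 1, 10):
    print(f"{gbps:>5} Gbps -> {hours_to_transfer(gbps):6.1f} hours")
```

At 1 Gbps the transfer takes roughly half a day, so saturating the available link with parallel uploads matters far more than which client you pick.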

Reference: gsutil, gcloud SDK, Cloud Storage JSON API, Uploading objects, Storage Transfer

Question 12: You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?

A. Encrypt the message client-side using block-based encryption with a shared key.

B. Tag messages client-side with the originating user identifier and the destination user.

C. Use a trusted certificate authority to enable SSL connectivity between the client application and the server. 

D. Use public key infrastructure (PKI) to encrypt the message client-side using the originating user’s private key.

ANSWER12:

D

Notes/References12:

D is correct because PKI requires that both the server and the client have signed certificates, validating both the client and the server.

Question 13: You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?

A. In the source code

B. In an environment variable 

C. In a key management system

D. In a config file that has restricted access through ACLs

ANSWER13:

C

Notes/References13:

C is correct because a key management system stores and audits access to secrets, keeping database credentials out of source code, environment variables, and config files, all of which are easier to leak and harder to rotate or audit.

Question 14: For this question, refer to the Company B case study.

Company B wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL

B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery 

C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

ANSWER14:

B

Notes/References14:

B is correct because:
Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
Cloud Pub/Sub can ingest the streaming data from the mobile users.
BigQuery can query more than 10 TB of historical data.
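The "data that arrives late" point is the crux of choosing Dataflow. A pure-Python sketch of the fixed-window idea behind Beam windows and triggers (a conceptual model only, not Apache Beam code):

```python
from collections import defaultdict

WINDOW_SECONDS = 60
ALLOWED_LATENESS = 120  # accept events up to 2 minutes late

def window_start(event_ts: int) -> int:
    # Each event belongs to the 60-second window containing it.
    return event_ts - (event_ts % WINDOW_SECONDS)

def assign(events, watermark: int):
    """events: (event_ts, value) pairs; watermark: current clock."""
    windows = defaultdict(list)
    dropped = []
    for ts, value in events:
        if watermark - ts > ALLOWED_LATENESS:
            dropped.append((ts, value))  # beyond allowed lateness
        else:
            windows[window_start(ts)].append(value)
    return dict(windows), dropped

events = [(5, "a"), (62, "b"), (70, "c"), (130, "d")]
windows, dropped = assign(events, watermark=150)
print(windows)  # events grouped into 60-second windows
print(dropped)  # events older than the allowed lateness
```

Late-but-tolerable events still land in their original window, which is exactly the behaviour the mobile users' intermittent uploads require.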

References: GCP Quotas, Beam Apache Windowing, Beam Apache Triggers, BigQuery External Data Solutions, Apache Hive on Cloud Dataproc

Question 15: For this question, refer to the Company B case study.

Company B has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load.

B. Use the existing infrastructure to test the GCP-based backend at scale.

C. Build stress tests into each component of your application and use resources from the already deployed production backend to simulate load.

D. Create a set of static environments in GCP to test different levels of load (for example, high, medium, and low).

ANSWER15:

A

Notes/References15:

A is correct because simulating production load in GCP can scale in an economical way.

Reference: Load Testing IoT using GCP and Locust, Distributed Load Testing Using Kubernetes

Question 16: For this question, refer to the Company B case study.

Company B wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Company B has the following requirements:

  • Services are deployed redundantly across multiple regions in the US and Europe.
  • Only frontend services are exposed on the public internet.
  • They can reserve a single frontend IP for their fleet of services.
  • Deployment artifacts are immutable.

Which set of products should they use?

A. Cloud Storage, Cloud Dataflow, Compute Engine

B. Cloud Storage, App Engine, Cloud Load Balancing

C. Container Registry, Google Kubernetes Engine, Cloud Load Balancing

D. Cloud Functions, Cloud Pub/Sub, Cloud Deployment Manager

ANSWER16:

C

Notes/References16:

C is correct because:
Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Company B wants to expose.
Container Registry is a single place for a team to manage Docker images for the services.

References: Load Balancing HTTPS, Load Balancing Overview, GCP LB Global Forwarding Rules, Reserve Static External IP Address, Best Practices for Operating Containers, Container Registry, Dataflow, Calling HTTPS

Question 17: Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

A. Org viewer, Project owner

B. Org viewer, Project viewer 

C. Org admin, Project browser

D. Project owner, Network admin

ANSWER17:

B

Notes/References17:

B is correct because:
Org viewer grants the security team permissions to view the organization’s display name.
Project viewer grants the security team permissions to see the resources within projects.

Reference: GCP Resource Manager – User Roles

Question 18: To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?

A. Use persistent disks to store the state. Start and stop the VM as needed. 

B. Use the --auto-delete flag on all persistent disks before stopping the VM. 

C. Apply VM CPU utilization label and include it in the BigQuery billing export.

D. Use BigQuery billing export and labels to relate cost to groups. 

E. Store all state in local SSD, snapshot the persistent disks, and terminate the VM.

F. Store all state in Cloud Storage, snapshot the persistent disks, and terminate the VM.

ANSWER18:

A and D

Notes/References18:

A is correct because persistent disks will not be deleted when an instance is stopped.

D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.
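Once labeled costs land in the billing export, the finance grouping is a simple aggregation. The rows below imitate a few exported records; the field names are illustrative, not the exact export schema:

```python
from collections import defaultdict

# Mock rows resembling a BigQuery billing export with labels.
billing_rows = [
    {"cost": 12.40, "labels": {"team": "payments"}},
    {"cost": 3.10,  "labels": {"team": "payments"}},
    {"cost": 7.25,  "labels": {"team": "search"}},
    {"cost": 0.80,  "labels": {}},  # unlabeled resource
]

def cost_by_label(rows, key: str):
    """Sum cost per label value, bucketing unlabeled rows separately."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["labels"].get(key, "unlabeled")] += row["cost"]
    return dict(totals)

print(cost_by_label(billing_rows, "team"))
```

In practice the same grouping is a one-line `GROUP BY` over the export table in BigQuery; the sketch just shows what labels buy you.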

References: GCP Instances Life Cycle, GCP Instances Set Disk Auto-Delete, GCP Local Data Persistence, GCP Export Data to BigQuery, GCP Creating and Managing Labels

Question 19: Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?

A. Configure a new load balancer for the new version of the API.

B. Reconfigure old clients to use a new endpoint for the new API. 

C. Have the old API forward traffic to the new API based on the path.

D. Use separate backend services for each API path behind the load balancer.

ANSWER19:

D

Notes/References19:

D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.
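A toy version of that URL-map behaviour: one load-balancer IP, with requests fanned out to different backend services by path prefix. (Conceptual sketch only; the real URL map is configured on the HTTP(S) load balancer, not in application code, and the backend names are hypothetical.)

```python
# Path-prefix routing, as an HTTP(S) load balancer URL map does it.
BACKENDS = [
    ("/v2/", "new-api-backend"),
    ("/v1/", "old-api-backend"),
]
DEFAULT_BACKEND = "old-api-backend"

def route(path: str) -> str:
    # First matching prefix wins; unmatched paths go to the default.
    for prefix, backend in BACKENDS:
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND

print(route("/v1/users/7"))  # served by the old API
print(route("/v2/users/7"))  # served by the new API
```

Because both prefixes sit behind the same IP, the existing SSL certificate and DNS records serve both API versions unchanged.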

References: Load Balancing HTTPS, Load Balancing Backend, GCP LB Global Forwarding Rules

Question 20: The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?

A. Increase the virtual machine’s memory to 64 GB.

B. Create a new virtual machine running PostgreSQL. 

C. Dynamically resize the SSD persistent disk to 500 GB.

D. Migrate their performance metrics warehouse to BigQuery.

ANSWER20:

C

Notes/References20:

C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Incrementing the persistent disk capacity will increment its throughput and IOPS, which in turn improve the performance of MySQL.
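The size-to-performance relationship can be sketched numerically. The per-GB and cap figures below are illustrative assumptions for the sketch, not current published limits:

```python
# Persistent-disk IOPS scale with provisioned size, up to a cap.
PER_GB_READ_IOPS = 30   # assumed per-GB scaling factor
IOPS_CAP = 15_000       # assumed per-instance ceiling

def read_iops(disk_gb: int) -> int:
    return min(disk_gb * PER_GB_READ_IOPS, IOPS_CAP)

for size in (80, 500):
    print(f"{size:>4} GB SSD PD -> up to {read_iops(size):,} read IOPS")
```

Under these assumptions, resizing from 80 GB to 500 GB multiplies available IOPS several times over without touching the VM, which is why it is the cost-effective fix.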

References: GCP Compute Disks PD Specs, GCP Compute Disks Performance

Question 21: You need to ensure low-latency global access to data stored in a regional GCS bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Google’s Cloud CDN.

B. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

C. Do nothing.

D. Use global BigTable storage.

E. Use a global Cloud Spanner instance.

F. Migrate the data to a new multi-regional GCS bucket.

G. Change the storage class to multi-regional.

ANSWER21:

A

Notes/References21:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. Bigtable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the use cases are different enough that it would probably not be a good fit. You cannot change a bucket’s location after it has been created, not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. 

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 22: You are building a sign-up app for your local neighbourhood barbeque party and you would like to quickly throw together a low-cost application that tracks who will bring what. Which of the following options should you choose?

A. Python, Flask, App Engine Standard

B. Ruby, Nginx, GKE

C. HTML, CSS, Cloud Storage

D. Node.js, Express, Cloud Functions

E. Rust, Rocket, App Engine Flex

F. Perl, CGI, GCE

ANSWER22:

A

Notes/References22:

The Cloud Storage option doesn’t offer any way to coordinate the guest data. App Engine Flex would cost much more to run when no one is on the sign-up site. Cloud Functions could handle processing some API calls, but it would be more work to set up and that option doesn’t mention anything about storage. GKE is way overkill for such a small and simple application. Running Perl CGI scripts on GCE would also cost more than it needs (and probably make you very sad). App Engine Standard makes it super-easy to stand up a Python Flask app and includes easy data storage options, too. 

Reference: Building a Python 3.7 App on App Engine

Question 23: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the streamed updates that follow the initial import?

A. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

B. The DynamoDB table change is captured by Cloud Pub/Sub and written to Cloud Dataflow for processing into a Spanner-compatible format.

C. Changes to the DynamoDB table are captured by DynamoDB Streams. A Lambda function triggered by the stream writes the change to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

D. The DynamoDB table is rescanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

E. The DynamoDB table is rescanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER23:

C

Notes/References23:

Rescanning the DynamoDB table is not an appropriate approach to tracking data changes to keep the GCP side in sync. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The options purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

References: Cloud Solutions Architecture Reference

Question 24: Your client is a manufacturing company and they have informed you that they will be pausing all normal business activities during a five-week summer holiday period. They normally employ thousands of workers who constantly connect to their internal systems for day-to-day manufacturing data such as blueprints and machine imaging, but during this period the few on-site staff will primarily be re-tooling the factory for the next year’s production runs and will not be performing any manufacturing tasks that need to access these cloud-based systems. When the bulk of the staff return, they will primarily work on the new models but may spend about 20% of their time working with models from previous years. The company has asked you to reduce their GCP costs during this time, so which of the following options will you suggest?

A. Pause all Cloud Functions via the UI and unpause them when work starts back up.

B. Disable all Cloud Functions via the command line and re-enable them when work starts back up.

C. Delete all Cloud Functions and recreate them when work starts back up.

D. Convert all Cloud Functions to run as App Engine Standard applications during the break.

E. None of these options is a good suggestion.

ANSWER24:

E

Notes/References24:

Cloud Functions scale themselves down to zero when they’re not being used. There is no need to do anything with them.

Question 25: You need a place to store images before updating them by file-based render farm software running on a cluster of machines. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Cloud Filestore

D. Persistent Disk

ANSWER25:

C

Notes/References25:

There are several different kinds of “images” that you might need to consider: maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to visual images, thus eliminating CI/CD products like Container Registry. The term “file-based” software means that it is unlikely to work well with object-based storage like Cloud Storage (or any of its storage classes). Persistent Disk cannot offer shared access across a cluster of machines when writes are involved; it only handles multiple readers. However, Cloud Filestore is made to provide shared, file-based storage for a cluster of machines as described in the question. 

Reference: Cloud Filestore | Google Cloud

Question 26: Your company has decided to migrate your AWS DynamoDB database to a multi-regional Cloud Spanner instance and you are designing the system to transfer and load all the data to synchronize the DBs and eventually allow for a quick cut-over. A member of your team has some previous experience working with Apache Hadoop. Which of the following options will you choose for the initial data import?

A. The DynamoDB table is scanned by an EC2 instance and written to an S3 bucket. Storage Transfer Service moves the data from S3 to a Cloud Storage bucket. Cloud Dataflow processes the data from Cloud Storage and writes it to Cloud Spanner.

B. The DynamoDB table data is captured by DynamoDB Streams. A Lambda function triggered by the stream writes the data to Cloud Pub/Sub. Cloud Dataflow processes the data from Cloud Pub/Sub and writes it to Cloud Spanner.

C. The DynamoDB table data is captured by Cloud Pub/Sub and written to Cloud Dataproc for processing into a Spanner-compatible format.

D. The DynamoDB table is scanned by a GCE instance and written to a Cloud Storage bucket. Cloud Dataproc processes the data from Cloud Storage and writes it to Cloud Spanner.

ANSWER26:

A

Notes/References26:

The same data processing will have to happen for both the initial (batch) data load and the incremental (streamed) data changes that follow it. So if the solution built to handle the initial batch doesn’t also work for the stream that follows it, then the processing code would have to be written twice. A Professional Cloud Architect should recognize this project-level issue and not over-focus on the (batch) portion called out in this particular question. This is why you don’t want to choose Cloud Dataproc. Instead, Cloud Dataflow will handle both the initial batch load and also the subsequent streamed data. The fact that someone on your team has previous Hadoop experience is not a good enough reason to choose Cloud Dataproc; that’s a red herring. The DynamoDB streams option would be great for the db synchronization that follows, but it can’t handle the initial data load because DynamoDB Streams only fire for data changes. The option purporting to connect Cloud Pub/Sub directly to the DynamoDB table won’t work because there is no such functionality. 

Reference: Cloud Solutions Architecture Reference

Question 27: You need a managed service to handle logging data coming from applications running in GKE and App Engine Standard. Which option should you choose?

A. Cloud Storage

B. Logstash

C. Cloud Monitoring

D. Cloud Logging

E. BigQuery

F. BigTable

ANSWER27:

D

Notes/References27:

Cloud Monitoring is made to handle metrics, not logs. Logstash is not a managed service. And while you could store application logs in almost any storage service, the Cloud Logging service–aka Stackdriver Logging–is purpose-built to accept and process application logs from many different sources. Oh, and you should also be comfortable dealing with products and services by names other than their current official ones. For example, “GKE” used to be called “Container Engine”, “Cloud Build” used to be “Container Builder”, the “GCP Marketplace” used to be called “Cloud Launcher”, and so on. 

Reference: Cloud Logging | Google Cloud
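As a quick illustration of the Cloud Logging surface, you can write and read entries from the CLI as well as from GKE/App Engine apps. The log and project names below are assumptions, and the commands are echoed rather than executed:

```shell
# Sketch: writing one entry, then reading recent entries back (names are assumptions).
write_cmd="gcloud logging write my-app-log 'request handled' --severity=INFO"
read_cmd="gcloud logging read 'logName:projects/my-project/logs/my-app-log' --limit=5"
echo "${write_cmd}"
echo "${read_cmd}"
```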

Question 28: You need a place to store images before serving them from AppEngine Standard. Which of the following options will you choose?

A. Compute Engine

B. Cloud Filestore

C. Cloud Storage

D. Persistent Disk

E. Container Registry

F. Cloud Source Repositories

G. Cloud Build

H. Nearline

ANSWER28:

C

Notes/References28:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” refers to picture files, because that’s something that you would serve from a web server product like AppEngine Standard, so we eliminate Cloud Build (which isn’t actually for storage, at all) and the other two CI/CD products: Cloud Source Repositories and Container Registry. You definitely could store image files on Cloud Filestore or Persistent Disk, but you can’t hook those up to AppEngine Standard, so those options need to be eliminated, too. The only options left are both types of Cloud Storage, but since “Cloud Storage” sits next to “Nearline” as an option, we can confidently infer that the former refers to the “Standard” storage class. Since the question implies that these images will be served by AppEngine Standard, we would prefer the Standard storage class over Nearline–so there’s our answer.

Reference: The App Engine Standard Environment Cloud Storage: Object Storage | Google Cloud Storage classes | Cloud Storage | Google Cloud

Question 29: You need to ensure low-latency global access to data stored in a multi-regional GCS bucket. Data access is uniform across many objects and relatively low. What should you do to address the latency concerns?

A. Use a global Cloud Spanner instance.

B. Change the storage class to multi-regional.

C. Use Google’s Cloud CDN.

D. Migrate the data to a new regional GCS bucket.

E. Do nothing.

F. Use global BigTable storage.

ANSWER29:

E

Notes/References29:

BigTable does not have any “global” mode, so that option is wrong. Cloud Spanner is not a good replacement for GCS data: the use cases are different enough that we can assume it would probably not be a good fit. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. But migrating the data to a regional bucket only helps when the data access will primarily be from that region. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough to get cached based on previous requests. Because the access per object is so low, Cloud CDN won’t really help. This then brings us back to the question. Now, it may seem implied, but the question does not specifically state that there is currently a problem with latency, only that you need to ensure low latency–and we are already using what would be the best fit for this situation: a multi-regional GCS bucket.

Reference: Google Cloud Storage : What bucket class for the best performance?

Question 30: You need to ensure low-latency GCP access to a volume of historical data that is currently stored in an S3 bucket. Data access is uniform across many objects and relatively high. What should you do to address the latency concerns?

A. Use Premium Tier routing and Cloud Functions to accelerate access at the edges.

B. Use Google’s Cloud CDN.

C. Use global BigTable storage.

D. Do nothing.

E. Migrate the data to a new multi-regional GCS bucket.

F. Use a global Cloud Spanner instance.

ANSWER30:

E

Notes/References30:

Cloud Functions cannot be used to affect GCS data access, so that option is simply wrong. BigTable does not have any “global” mode, so that option is wrong, too. Cloud Spanner is not a good replacement for GCS data: the use cases are different enough that we can assume it would probably not be a good fit–and it would likely be unnecessarily expensive. You cannot change a bucket’s location after it has been created–not via the storage class nor any other way; you would have to migrate the data to a new bucket. Google’s Cloud CDN is very easy to turn on, but it only works for data that comes from within GCP and only if the objects are being accessed frequently enough. So even if you want to use Cloud CDN, you have to migrate the data into a GCS bucket first, which makes migration the better option.

Reference: Google Cloud Storage : What bucket class for the best performance?
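For the migration itself, one option worth knowing is that gsutil can read s3:// URIs directly once AWS credentials are configured in ~/.boto, so a parallel copy into a multi-regional GCS bucket is a one-liner. The bucket names below are assumptions, and the command is echoed rather than executed (Storage Transfer Service is the managed alternative for large datasets):

```shell
# Sketch: parallel (-m) recursive copy from S3 into a multi-regional GCS bucket.
src="s3://legacy-history-bucket"
dst="gs://history-multiregional"
copy_cmd="gsutil -m cp -r ${src}/* ${dst}/"
echo "${copy_cmd}"
```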

Question 31: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed in three regions. How many subnets will you need?

A. Six

B. One

C. Three

D. Four

E. Two

F. Nine

ANSWER31:

A

Notes/References31:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions
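The arithmetic (one subnet per tier per region) can be sketched as a loop; the region and tier names below are assumptions, and the real gcloud command is shown only as a comment:

```shell
# Sketch: 2 tiers x 3 regions = 6 subnets.
count=0
for region in us-east1 europe-west1 asia-east1; do
  for tier in frontend backend; do
    # gcloud compute networks subnets create "${tier}-${region}" \
    #   --network=prod-vpc --region="${region}"
    echo "subnet: ${tier}-${region}"
    count=$((count + 1))
  done
done
echo "total: ${count}"
```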

Question 32: You need a place to produce images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Storage

C. Persistent Disk

D. Nearline

E. Cloud Source Repositories

F. Cloud Build

G. Cloud Filestore

H. Compute Engine

ANSWER32:

F

Notes/References32:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images. Although those images would likely be stored in Container Registry after being built, this question asks where that building might happen: Cloud Build. Cloud Build, which used to be called Container Builder, is ideal for building container images–though it can also be used to build almost any artifacts, really. You could also do this on Compute Engine, but that option requires much more work to manage and is therefore worse.

Reference: Google App Engine flexible environment docs | Google Cloud Container Registry | Google Cloud

Question 33: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed in three regions. How many subnets will you need?

A. Two

B. One

C. Three

D. Nine

E. Four

F. Six

ANSWER33:

D

Notes/References33:

A single subnet spans and can be used across all zones in a single region, but you will need different subnets in different regions. Also, to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers which will each need their own subnet in each of the three regions in which you will deploy this system. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 34: You need a place to store images in case any of them are needed as evidence for a tax audit over the next seven years. Which of the following options will you choose?

A. Cloud Filestore

B. Coldline

C. Nearline

D. Persistent Disk

E. Cloud Source Repositories

F. Cloud Storage

G. Container Registry

ANSWER34:

B

Notes/References34:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “images” probably refers to picture files, and so Cloud Storage seems like an interesting option. But even still, when “Cloud Storage” is used without any qualifier, it generally refers to the “Standard” storage class, and this question also offers other storage classes as response options. Because the images in this scenario are unlikely to be used more than once a year (we can assume that taxes are filed annually and there’s less than 100% chance of being audited), the right storage class is Coldline. 

Reference: Cloud Storage: Object Storage | Google Cloud Storage classes | Cloud Storage | Google Cloud
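A related pattern worth knowing: rather than uploading straight to Coldline, you could keep recent images in Standard and let a lifecycle rule demote them automatically. A minimal sketch, where the bucket name and 30-day age are assumptions and the gsutil command is echoed rather than executed:

```shell
# Sketch: lifecycle rule that moves objects to Coldline after 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
echo "gsutil lifecycle set lifecycle.json gs://tax-audit-images"
```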

Question 35: You need a place to store images before deploying them to AppEngine Flex. Which of the following options will you choose?

A. Container Registry

B. Cloud Filestore

C. Cloud Source Repositories

D. Persistent Disk

E. Cloud Storage

F. Cloud Build

G. Nearline

ANSWER35:

A

Notes/References35:

There are several different kinds of “images” that you might need to consider–maybe they are normal picture-image files, maybe they are Docker container images, maybe VM or disk images, or maybe something else. In this question, “deploying [these images] to AppEngine Flex” lets us know that we are dealing with Docker container images, and thus they would likely have been stored in the Container Registry.

Reference: Google App Engine flexible environment docs | Google Cloud Container Registry | Google Cloud

Question 36: You are configuring a SaaS security application that updates your network’s allowed traffic configuration to adhere to internal policies. How should you set this up?

A. Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it.

B. Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC.

C. Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC.

D. Run the application as a container in your system’s staging GKE cluster and grant it access to a read-only service account.

E. Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account.

ANSWER36:

C

Notes/References36:

You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor’s own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you’ll need to grant securityAdmin, not networkViewer. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions Understanding roles | Cloud IAM Documentation | Google Cloud
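The winning option can be sketched as two gcloud commands: create a dedicated service account for the SaaS app, then grant it compute.securityAdmin so it can change firewall rules. The project and account names are assumptions, and the commands are echoed rather than executed:

```shell
# Sketch: dedicated SA + least-privilege-that-still-allows-changes role binding.
project="prod-project"
sa="saas-policy-app"
create_cmd="gcloud iam service-accounts create ${sa} --project=${project}"
bind_cmd="gcloud projects add-iam-policy-binding ${project} --member=serviceAccount:${sa}@${project}.iam.gserviceaccount.com --role=roles/compute.securityAdmin"
echo "${create_cmd}"
echo "${bind_cmd}"
```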

Question 37: You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend and backend tiers and will be deployed across three zones. How many subnets will you need?

A. One

B. Six

C. Four

D. Three

E. Nine

F. Two

ANSWER37:

F

Notes/References37:

A single subnet spans and can be used across all zones in a given region. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have two tiers, so you only need two subnets. 

Reference: VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions

Question 38: You have been tasked with setting up a system to comply with corporate standards for container image approvals. Which of the following is your best choice for this project?

A. Binary Authorization

B. Cloud IAM

C. Security Key Enforcement

D. Cloud SCC

E. Cloud KMS

ANSWER38:

A

Notes/References38:

Cloud KMS is Google’s product for managing encryption keys. Security Key Enforcement is about making sure that people’s accounts do not get taken over by attackers, not about managing encryption keys. Cloud IAM is about managing what identities (both humans and services) can access in GCP. Cloud DLP–or Data Loss Prevention–is for preventing data loss by scanning for and redacting sensitive information. Cloud SCC–the Security Command Center–centralizes security information so you can manage it all in one place. Binary Authorization is about making sure that only properly-validated containers can run in your environments. 

Reference: Cloud Key Management Service | Google Cloud Cloud IAM | Google Cloud Cloud Data Loss Prevention | Google Cloud Security Command Center | Google Cloud Binary Authorization | Google Cloud Security Key Enforcement – 2FA
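In practice, the Binary Authorization policy is managed by exporting it, editing it, and importing it back. The commands below use the real gcloud surface but are echoed rather than executed here:

```shell
# Sketch: round-trip the cluster admission policy through a local YAML file.
export_cmd="gcloud container binauthz policy export"
import_cmd="gcloud container binauthz policy import policy.yaml"
echo "${export_cmd} > policy.yaml"
echo "${import_cmd}"
```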

Question 39: For this question, refer to the Company B‘s case study. Which of the following are most likely to impact the operations of Company B’s game backend and analytics systems?

A. PCI

B. PII

C. SOX

D. GDPR

E. HIPAA

ANSWER39:

B and D

Notes/References39:

There is no patient/health information, so HIPAA does not apply. It would be a very bad idea to put payment card information directly into these systems, so we should assume they’ve not done that–therefore the Payment Card Industry (PCI) standards/regulations should not affect normal operation of these systems. Besides, it’s entirely likely that they never deal with payments directly, anyway–choosing to offload that to the relevant app stores for each mobile platform. Sarbanes-Oxley (SOX) is about proper management of financial records for publicly traded companies and should therefore not apply to these systems. However, these systems are likely to contain some Personally Identifiable Information (PII) about users who may reside in the European Union, and therefore the EU’s General Data Protection Regulation (GDPR) will apply and may require ongoing operations to comply with the “Right to be Forgotten/Erased”.

Reference: Sarbanes–Oxley Act – Wikipedia Payment Card Industry Data Security Standard – Wikipedia Personal data – Wikipedia

Question 40: Your new client has advised you that their organization falls within the scope of HIPAA. What can you infer about their information systems?

A. Their customers located in the EU may require them to delete their user data and provide evidence of such.

B. They will also need to pass a SOX audit.

C. They handle money-linked information.

D. Their system deals with medical information.

ANSWER40:

D

Notes/References40:

SOX stands for Sarbanes-Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others).

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 41: Your new client has advised you that their organization needs to pass audits by ISO and PCI. What can you infer about their information systems?

A. They handle money-linked information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. Their system deals with medical information.

D. They will also need to pass a SOX audit.

ANSWER41:

A

Notes/References41:

SOX stands for Sarbanes-Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others). ISO is the International Organization for Standardization, and since they have so many completely different certifications, this does not tell you much.

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 43: Your new client has advised you that their organization deals with GDPR. What can you infer about their information systems?

A. Their system deals with medical information.

B. Their customers located in the EU may require them to delete their user data and provide evidence of such.

C. They will also need to pass a SOX audit.

D. They handle money-linked information.

ANSWER43:

B

Notes/References43:

SOX stands for Sarbanes-Oxley and is US regulation governing financial reporting for publicly-traded companies. HIPAA–the Health Insurance Portability and Accountability Act of 1996–is US regulation aimed at safeguarding individuals’ (i.e. patients’) health information. PCI is the Payment Card Industry, and they have Data Security Standards (DSS) that must be adhered to by systems handling payment information of any of their member brands (which include Visa, Mastercard, and several others).

Reference: Cloud Compliance & Regulations Resources | Google Cloud

Question 44: For this question, refer to the Company C case study. Once Company C has completed their initial cloud migration as described in the case study, which option would represent the quickest way to migrate their production environment to GCP?

A. Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud

B. Lift and shift all servers at one time

C. Lift and shift one application at a time

D. Lift and shift one server at a time

E. Set up cloud-based load balancing then divert traffic from the DC to the cloud system

F. Enact their disaster recovery plan and fail over

ANSWER44:

F

Notes/References44:

The proposed Lift and Shift options all describe different situations than Company C would find themselves in at that time: they’d then have automation to build a complete prod system in the cloud, but they’d just need to migrate to it. “Just”, right? 🙂 The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they’ve almost completed. Now, if we purely consider the kicker’s key word “quickest”, using the DR plan to fail over definitely seems like it wins. Setting up an additional load balancer and migrating slowly/carefully would take more time.

Reference: Strangler pattern – Cloud Design Patterns | Microsoft Docs StranglerFigApplication Monolith to Microservices Using the Strangler Pattern – DZone Microservices Understanding Lift and Shift and If It’s Right For You

Question 45: Which of the following commands is most likely to appear in an environment setup script?

A. gsutil mb -l asia gs://${project_id}-logs

B. gcloud compute instances create --zone --machine-type=n1-highmem-16 newvm

C. gcloud compute instances create --zone --machine-type=f1-micro newvm

D. gcloud compute ssh ${instance_id}

E. gsutil cp -r gs://${project_id}-setup ./install

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

ANSWER45:

A

Notes/References45:

The context here indicates that “environment” is an infrastructure environment like “staging” or “prod”, not just a particular command shell. In that sort of a situation, it is likely that you might create some core per-environment buckets that will store different kinds of data like configuration, communication, logging, etc. You’re not likely to be creating, deleting, or connecting (sshing) to instances, nor copying files to or from any instances. 

Reference: mb – Make buckets | Cloud Storage | Google Cloud cp – Copy files and objects | Cloud Storage | Google Cloud gcloud compute instances | Cloud SDK Documentation | Google Cloud
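The winning pattern generalizes: an environment setup script typically creates the per-environment buckets up front. A minimal sketch in the question's style, where the project id and bucket suffixes are assumptions:

```shell
# Sketch of an environment setup script: one bucket per purpose, created once per env.
project_id="demo-staging"
for suffix in logs config setup; do
  echo "gsutil mb -l asia gs://${project_id}-${suffix}"
done
```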

Question 46: Your developers are working to expose a RESTful API for your company’s physical dealer locations. Which of the following endpoints would you advise them to include in their design?

A. /dealerLocations/get

B. /dealerLocations

C. /dealerLocations/list

D. Source and destination

E. /getDealerLocations

ANSWER46:

B

Notes/References46:

It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it’s important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another “/list” to the endpoint. 

Reference: RESTful API Design — Step By Step Guide
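The noun-vs-verb point can be summarized in a few lines: the resource gets one plural noun endpoint, and the HTTP verbs carry the actions. The host name below is an assumption:

```shell
# Sketch: RESTful routes for the dealer-locations resource (verbs do the work).
base="https://api.example.com"
collection="${base}/dealerLocations"
echo "GET  ${collection}        # list all dealer locations (no '/list' needed)"
echo "POST ${collection}        # create a new dealer location"
echo "GET  ${collection}/{id}   # fetch a single dealer location"
```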

Question 47: Which of the following commands is most likely to appear in an instance shutdown script?

A. gsutil cp -r gs://${project_id}-setup ./install

B. gcloud compute instances create --zone --machine-type=n1-highmem-16 newvm

C. gcloud compute ssh ${instance_id}

D. gsutil mb -l asia gs://${project_id}-logs

E. gcloud compute instances delete ${instance_id}

F. gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/

G. gcloud compute instances create --zone --machine-type=f1-micro newvm

ANSWER47:

F

Notes/References47:

The startup and shutdown scripts run on an instance at the time when that instance is starting up or shutting down. Those situations do not generally call for any other instances to be created, deleted, or connected (sshed) to. Also, those would be a very unusual time to make a Cloud Storage bucket, since buckets are the overall and highly-scalable containers that would likely hold the data for all (or at least many) instances in a given project. That said, instance shutdown time may be a time when you’d want to copy some final logs from the instance into some project-wide bucket. (In general, though, you really want to be doing that kind of thing continuously and not just at shutdown time, in case the instance shuts down unexpectedly and not in an orderly fashion that runs your shutdown script.)

Reference:  Running startup scripts | Compute Engine Documentation | Google Cloud Running shutdown scripts | Compute Engine Documentation | Google Cloud cp – Copy files and objects | Cloud Storage | Google Cloud gcloud compute instances | Cloud SDK Documentation | Google Cloud
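Putting the winning command in context, a shutdown script might look like the sketch below. The project and instance ids are assumptions (a real script would fetch them from the metadata server), and the gsutil command is echoed so the script runs anywhere:

```shell
#!/bin/bash
# Sketch of an instance shutdown script: push this instance's final logs to the
# project-wide logs bucket before the VM disappears.
project_id="demo-project"
instance_id="web-1"
final_copy="gsutil cp -r logs/* gs://${project_id}-logs/${instance_id}/"
echo "would run: ${final_copy}"
```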

Question 48: It is Saturday morning and you have been alerted to a serious issue in production that is both reducing availability to 95% and corrupting some data. Your monitoring tools noticed the issue 5 minutes ago and it was just escalated to you because the on-call tech in line before you did not respond to the page. Your system has an RPO of 10 minutes and an RTO of 120 minutes, with an SLA of 90% uptime. What should you do first?

A. Escalate the decision to the business manager responsible for the SLA

B. Take the system offline

C. Revert the system to the state it was in on Friday morning

D. Investigate the cause of the issue

ANSWER48:

B

Notes/References48:

The data corruption is your primary concern, as your Recovery Point Objective allows only 10 minutes of data loss and you may already have lost 5. (The data corruption means that you may well need to roll back the data to before that started happening.) It might seem crazy, but you should as quickly as possible stop the system so that you do not lose any more data. It would almost certainly take more time than you have left in your RPO to properly investigate and address the issue, but you should then do that next, during the disaster response clock set by your Recovery Time Objective. Escalating the issue to a business manager doesn’t make any sense. And neither does it make sense to knee-jerk revert the system to an earlier state unless you have some good indication that doing so will address the issue. Plus, we’d better assume that “revert the system” refers only to the deployment and not the data, because rolling the data back that far would definitely violate the RPO. 

Reference: Disaster recovery – Wikipedia

Question 49: Which of the following are not processes or practices that you would associate with DevOps?

A. Raven-test the candidate

B. Obfuscate the code

C. Only one of the other options is made up

D. Run the code in your cardinal environment

E. Do a canary deploy

ANSWER49:

A and D

Notes/References49:

Testing your understanding of development and operations in DevOps. In particular, you need to know that a canary deploy is a real thing and it can be very useful to identify problems with a new change you’re making before it is fully rolled out to and therefore impacts everyone. You should also understand that “obfuscating” code is a real part of a release process that seeks to protect an organization’s source code from theft (by making it unreadable by humans) and usually happens in combination with “minification” (which improves the speed of downloading and interpreting/running the code). On the other hand, “raven-testing” isn’t a thing, and neither is a “cardinal environment”. Those bird references are just homages to canary deployments.

Reference: Intro to deployment strategies: blue-green, canary, and more – DEV Community

Question 50: Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal?

A. Object lifecycle management

B. BigQuery Slots

C. Committed use discounts

D. Sustained use discounts

E. Managed instance group autoscaling

F. Pub/Sub topic centralization

ANSWER50:

B and C

Notes/References50:

Pub/Sub usage is based on how much data you send through it, not any sort of “topic centralization” (which isn’t really a thing). Sustained use discounts can reduce costs, but that’s not really something you structure your system around. Now, most organizations prefer to turn Capital Expenditures into Operational Expenses, but since this question instead asks you to prioritize CapEx, we need to consider the remaining options from the perspective of “spending” (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: You may still have some trouble classifying some cloud expenses as “capital” expenditures.) With that in mind, GCE’s Committed Use Discounts do fit: you “buy” (reserve/prepay) some instances ahead of time and then don’t have to pay (again) as you use them (or don’t; you’ve already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity and your queries use that instead of the on-demand capacity. That means you won’t pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about capex.

Reference: CapEx vs OpEx: Capital Expenses and Operating Expenses Explained – BMC Blogs Sustained use discounts | Compute Engine Documentation | Google Cloud Committed use discounts | Compute Engine Documentation | Google Cloud Slots | BigQuery | Google Cloud Autoscaling groups of instances | Compute Engine Documentation Object Lifecycle Management | Cloud Storage | Google Cloud

Question 51: In your last retrospective, there was significant disagreement voiced by the members of your team about what part of your system should be built next. Your scrum master is currently away, but how should you proceed when she returns, on Monday?

A. The scrum master is the one who decides

B. The lead architect should get the final say

C. The product owner should get the final say

D. You should put it to a vote of key stakeholders

E. You should put it to a vote of all stakeholders

ANSWER51:

C

Notes/References51:

In Scrum, it is the Product Owner’s role to define and prioritize (i.e. set order for) the product backlog items that the dev team will work on. If you haven’t ever read it, the Scrum Guide is not too long and quite valuable to have read at least once, for context. 

Reference: Scrum Guide | Scrum Guides

Question 52: Your development team needs to evaluate the behavior of a new version of your application for approximately two hours before committing to making it available to all users. Which of the following strategies will you suggest?

A. Split testing

B. Red-Black

C. A/B

D. Canary

E. Rolling

F. Blue-Green

G. Flex downtime

ANSWER52:

D and E

Notes/References52:

A Blue-Green deployment, also known as a Red-Black deployment, entails having two complete systems set up and cutting over from one of them to the other with the ability to cut back to the known-good old one if there’s any problem with the experimental new one. A canary deployment is where a new version of an app is deployed to only one (or a very small number) of the servers, to see whether it experiences or causes trouble before that version is rolled out to the rest of the servers. When the canary looks good, a Rolling deployment can be used to update the rest of the servers, in-place, one after another to keep the overall system running. “Flex downtime” is something I just made up, but it sounds bad, right? A/B testing–also known as Split testing–is not generally used for deployments but rather to evaluate two different application behaviours by showing both of them to different sets of users. Its purpose is to gather higher-level information about how users interact with the application. 

Reference: BlueGreenDeployment design patterns – What’s the difference between Red/Black deployment and Blue/Green Deployment? – Stack Overflow What is rolling deployment? – Definition from WhatIs.com A/B testing – Wikipedia
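
The canary-then-rolling pattern described above can be sketched in a few lines. This is a minimal illustration, not a real deployment tool; the `deploy` and `health_check` callables and the server names are hypothetical placeholders.

```python
# Sketch: canary one server first, then roll the new version out to the
# rest of the fleet one at a time, stopping at the first unhealthy server.

def canary_then_rolling(servers, new_version, deploy, health_check):
    """Deploy to one canary server; if it stays healthy, roll out to the rest."""
    canary, rest = servers[0], servers[1:]
    deploy(canary, new_version)
    if not health_check(canary):
        return False  # stop: the canary caught the problem early
    for server in rest:  # rolling update: in place, one server at a time
        deploy(server, new_version)
        if not health_check(server):
            return False
    return True

# Usage example with stub functions standing in for real infrastructure:
deployed = {}
ok = canary_then_rolling(
    ["web-1", "web-2", "web-3"], "v2",
    deploy=lambda s, v: deployed.__setitem__(s, v),
    health_check=lambda s: True,
)
```

A Blue-Green deployment would instead stand up a complete second fleet and flip traffic over all at once, keeping the old fleet around for fast rollback.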

Question 53: You are mentoring a Junior Cloud Architect on software projects. Which of the following “words of wisdom” will you pass along?

A. Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on

B. Hiring and retaining 10X developers is critical to project success

C. A key goal of a proper post-mortem is to identify what processes need to be changed

D. Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project

E. A key goal of a proper post-mortem is to determine who needs additional training

ANSWER53:

A and C

Notes/References53:

There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front–yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future. 

Reference: Google – Site Reliability Engineering The Cone of Uncertainty

Question 54: Your team runs a service with an SLA to achieve p99 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. The next month’s SLO will be reduced.

C. Your client(s) will have to pay you extra.

D. You will have to pay your client(s).

E. There is no impact on payments.

F. There is not enough information to make a determination.

ANSWER54:

D

Notes/References54:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90–so meeting the tougher measure would surpass a required SLA, but meeting a weaker measure would not give enough information to say. 

Reference: What’s the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube Percentile rank – Wikipedia
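
The percentile ordering mentioned in the notes (p99 is a tougher bar than p95, which is tougher than p90) can be seen directly by computing nearest-rank percentiles over a latency sample. This is an illustrative sketch; the sample values are made up.

```python
# Sketch: computing p90/p95/p99 latency from a sample of request times (ms),
# using the nearest-rank method.

def percentile(samples, p):
    """Nearest-rank percentile: smallest value >= p% of the samples."""
    ordered = sorted(samples)
    rank = max(1, -(-p * len(ordered) // 100))  # ceil(p/100 * n)
    return ordered[int(rank) - 1]

latencies = [120, 130, 140, 150, 160, 170, 180, 190, 210, 260]
p90 = percentile(latencies, 90)  # 90% of requests were at or below this
p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
# p90 <= p95 <= p99: tightening the percentile can only raise the number,
# which is why a p99 target is a stricter bar than a p95 target.
```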

Question 55: Your team runs a service with an SLO to achieve p90 latency of 200ms. This month, your service achieved p95 latency of 250ms. What will happen now?

A. The next month’s SLA will be increased.

B. There is no impact on payments.

C. There is not enough information to make a determination.

D. Your client(s) will have to pay you extra.

E. The next month’s SLO will be reduced.

F. You will have to pay your client(s).

ANSWER55:

B

Notes/References55:

It would be highly unusual for clients to have to pay extra, even if the service performs better than agreed by the SLA. SLAs generally set out penalties (i.e. you pay the client) for below-standard performance. While SLAs are external-facing, SLOs are internal-facing and do not generally relate to performance penalties. Neither SLAs nor SLOs are adaptively changed just because of one month’s performance; such changes would have to happen through rather different processes. A p99 metric is a tougher measure than p95, and p95 is tougher than p90–so meeting the tougher measure would surpass a required SLA, but meeting a weaker measure would not give enough information to say. 

Reference: What’s the Difference Between DevOps and SRE? (class SRE implements DevOps) – YouTube Percentile rank – Wikipedia

Question 56: For this question, refer to the Company C case study. How would you recommend Company C address their capacity and utilization concerns?

A. Configure the autoscaling thresholds to follow changing load

B. Provision enough servers to handle trough load and offload to Cloud Functions for higher demand

C. Run cron jobs on their application servers to scale down at night and up in the morning

D. Use Cloud Load Balancing to balance the traffic highs and lows

E. Run automated jobs in Cloud Scheduler to scale down at night and up in the morning

F. Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace

ANSWER56:

A

Notes/References56:

The case study notes, “Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.” Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn’t a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE–in practice, you’d just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group’s autoscaling to follow demand curves–both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over. 

Reference: Load Balancing | Google Cloud Google Cloud Platform Marketplace Solutions Cloud Functions | Google Cloud Cloud Scheduler | Google Cloud
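
The idea of autoscaling "following the demand curve" is essentially target-utilization scaling: size the fleet so that per-instance load sits near a chosen target. The sketch below mirrors the concept only; real GCE autoscaling is configured on the managed instance group, not hand-rolled like this, and the numbers are illustrative.

```python
# Sketch of the target-utilization idea behind managed instance group
# autoscaling: scale the fleet so measured utilization approaches a target.

import math

def desired_replicas(current_replicas, observed_utilization, target=0.6):
    """Return the fleet size that brings per-instance utilization near target."""
    needed = current_replicas * observed_utilization / target
    return max(1, math.ceil(needed))  # never scale to zero instances

# Morning peak: 10 instances running hot at 90% utilization -> scale out.
peak = desired_replicas(10, 0.90)    # 15 instances
# Overnight trough: 10 instances at 12% -> scale in; capacity follows demand
# instead of 80% of it sitting idle.
trough = desired_replicas(10, 0.12)  # 2 instances
```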

Google Cloud Latest News, Questions and Answers online:

Cloud Run vs App Engine: In a nutshell, you give Google’s Cloud Run a Docker container containing a webserver. Google will run this container and create an HTTP endpoint. All the scaling is automatically done for you by Google. Cloud Run depends on the fact that your application should be stateless. This is because Google will spin up multiple instances of your app to scale it dynamically. If you want to host a traditional web application this means that you should divide it up into a stateless API and a frontend app.

With Google’s App Engine you tell Google how your app should be run. App Engine will create and run a container from these instructions. Deploying with App Engine is super easy. You simply fill out an app.yaml file and Google handles everything for you.

With Cloud Run, you have more control. You can go crazy and build a ridiculous custom Docker image, no problem! Cloud Run is made for DevOps engineers, App Engine is made for developers. Read more here…
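
To make the statelessness requirement concrete, here is a sketch of the kind of minimal, stateless HTTP server you would package into a Docker container for Cloud Run. The only hard contract assumed here is that the container listens on the port given by the `PORT` environment variable; everything else (handler body, response text) is illustrative.

```python
# Sketch: a minimal stateless web server suitable for containerizing.
# Cloud Run injects the PORT environment variable and may start or stop
# instances at any time, so no per-instance state is kept.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a stateless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    port = int(os.environ.get("PORT", "8080"))  # provided by the platform
    HTTPServer(("", port), Handler).serve_forever()
```

Because every instance answers any request identically, the platform is free to spin instances up and down to match traffic.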

Cloud Run VS Cloud Functions: What to consider?

The best choice depends on what you want to optimize, your use-cases and your specific needs.

If your objective is the lowest latency, choose Cloud Run.

Indeed, Cloud Run always uses at least 1 vCPU (at least 2.4 GHz), and you can choose the memory size from 128 MB to 2 GB.

With Cloud Functions, if you want the best processing performance (2.4 GHz of CPU), you have to pay for 2 GB of memory. If your memory footprint is low, a Cloud Function with 2 GB of memory is overkill and needlessly expensive.

Cutting cost is not always the best strategy for customer satisfaction, but business reality may require it. In any case, it highly depends on your use case.

Both Cloud Run and Cloud Functions round billed time up to the nearest 100ms. As you can see by playing with the GSheet, Cloud Functions are cheaper when the processing time of one request is below the first 100ms. Indeed, you can slow down the Cloud Functions vCPU, which increases the processing duration, but if you tune it well it stays under 100ms. Thus fewer GHz-seconds are used, and you thereby pay less.
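
The 100ms rounding effect is easy to see numerically. This is an illustrative sketch of the rounding rule only; actual per-100ms prices depend on the configured CPU and memory and are not modeled here.

```python
# Sketch: billed time is the request duration rounded UP to the nearest
# 100ms increment, so a 30ms request and a 95ms request bill identically.

import math

def billed_ms(duration_ms, increment_ms=100):
    """Round a request's duration up to the billing increment."""
    return math.ceil(duration_ms / increment_ms) * increment_ms

assert billed_ms(30) == 100   # well under 100ms still bills a full 100ms
assert billed_ms(95) == 100   # tuning 95ms down to 30ms saves nothing
assert billed_ms(101) == 200  # just crossing the boundary doubles billed time
```

This is why slowing the vCPU can be a win: if a request still finishes inside the same 100ms increment, the billed duration is unchanged while the per-second rate drops.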

The cost comparison between Cloud Functions and Cloud Run goes further than simply comparing a pricing list. Moreover, on your projects you will often have to use the two solutions together, to take advantage of their respective strengths and capabilities.

My first choice for development is Cloud Run. Its portability, its testability, and its openness regarding libraries, languages and binaries give it too many advantages: at worst the pricing is similar, and often there is a real advantage in cost and also in performance, in particular for concurrent requests. Even if you need the same level of isolation as Cloud Functions (one instance per request), simply set the concurrency parameter to 1!

In addition, the GA of Cloud Run applies to all containers, whatever the languages and binaries used. Read more here…

What does the launch of Google’s App Maker mean for professional app developers?

Should I go with AWS Elastic Beanstalk or Google App Engine (Managed VMs) for deploying my Parse-Server backend?

Why can a company such as Google sell me a cloud gaming service where I can “rent” GPU power over miles of internet, but when I seek out information on how to create a version of this everyone says that it is not possible or has too much latency?

AWS wins hearts of developers while Azure those of C-levels. Google is a black horse with special expertise like K8s and ML. The cloud world is evolving. Who is the winner in the next 5 years?

What is GCP (Google Cloud Platform) and how does it work?

What is the maximum amount of storage that you could have in your Google drive?

How do I deploy Spring Boot application (Web MVC) on Google App Engine(GAE) or HEROKU using Eclipse IDE?

What are some downsides of building softwares on top of Google App Engine?

Why is Google losing the cloud computing race?

How did new products like Google Drive, Microsoft SkyDrive, Yandex.Disk and other cloud storage solutions affect Dropbox’s growth and revenues?

What is the capacity of Google servers?

What is the Hybrid Cloud platform?

What is the difference between Docker and Google App engines?

How do I get to cloud storage?

How does Google App Engine compare to Heroku?

What is equivalent of Google Cloud BigTable in Microsoft Azure?

How big is the storage capacity of Google organization and who comes second?

It seems strange that Google Cloud Platform offer “everything” except cloud search/inverted index?

Where are the files on Google Drive stored?

Is Google app engine similar to lambda?

Was Diane Greene a failure as the CEO of Google Cloud considering her replacement’s strategy and philosophy is the polar opposite?

How is Google Cloud for heavy real-time traffic? Is there any optimization needed for handling more than 100k RT?

When it comes to iCloud, why does Apple rely on Google Cloud instead of using their own data centers?

Google Cloud Storage: What bucket class for the best performance? Multiregional buckets perform significantly better for cross-ocean fetches; however, the details are a bit more nuanced than that. Performance is dominated by the latency of the physical distance between the client and the cloud storage bucket.

  • If caching is on, and your access volume is high enough to take advantage of caching, there’s not a huge difference between the two offerings (that I can see with the tests). This shows off the power of Google’s Awesome CDN environment.
  • If caching is off, or the access volume is low enough that you can’t take advantage of caching, then the performance overhead is dominated directly by physics. You should be trying to get the assets as close to the clients as possible, while also considering cost, and the types of redundancy and consistency you’ll need for your data needs.

Top high-paying certifications:

  1. Google Certified Professional Cloud Architect – $139,529
  2. PMP® – Project Management Professional – $135,798
  3. Certified ScrumMaster® – $135,441
  4. AWS Certified Solutions Architect – Associate – $132,840
  5. AWS Certified Developer – Associate – $130,369
  6. Microsoft Certified Solutions Expert (MCSE): Server Infrastructure – $121,288
  7. ITIL® Foundation – $120,566
  8. CISM – Certified Information Security Manager – $118,412
  9. CRISC – Certified in Risk and Information Systems Control – $117,395
  10. CISSP – Certified Information Systems Security Professional – $116,900
  11. CEH – Certified Ethical Hacker – $116,306
  12. Citrix Certified Associate – Virtualization (CCA-V) – $113,442
  13. CompTIA Security+ – $110,321
  14. CompTIA Network+ – $107,143
  15. Cisco Certified Networking Professional (CCNP) Routing and Switching – $106,957

According to the 2020 Global Knowledge report, the top-paying cloud certifications for the year are (drumroll, please):

1- Google Certified Professional Cloud Architect — $175,761

2- AWS Certified Solutions Architect – Associate — $149,446

3- AWS Certified Cloud Practitioner — $131,465

4- Microsoft Certified: Azure Fundamentals — $126,653

5- Microsoft Certified: Azure Administrator Associate — $125,993

Sources:

1- Google Cloud

2- Linux Academy

3- WhizLabs

4- GCP Space on Quora

5- Udemy

6- Acloud Guru

7- Questions and Answers sent to us by good people all over the world.

What are some common reasons why a blog doesn’t rank on Google?


Any public-facing content that doesn’t rank on Google or Bing is destined to be obscure and get no visibility. Writing a blog post or article is not enough to rank on Google or Bing, the top two search engines in the world.

In this blog, we describe some common reasons why a blog doesn’t rank on Google, Bing, or Yahoo.

  1. Poor content: little or no content value
  2. Slow-loading site
  3. No tags
  4. Insecure site (no SSL certificate)
  5. Poor formatting
  6. Articles in a very competitive space
  7. Disconnect between blog title and content
  8. Lack of keywords or misplaced keywords: ideally, the primary keyword appears early in both your domain name and blog title.
  9. Malformed URLs
  10. Site not mobile-friendly
  11. No inbound links
  12. No meta tags
  13. No alt tags

What to do Next?

If you resolve all the issues above, register your site with Google Search Console, then submit a sitemap URL to Google or Bing, and check your site performance and index status regularly to make sure your site is being indexed properly.
