AWS, Azure, and Google Cloud Certifications: Testimonials and Dumps

Register for AI-Driven Cloud Cert Prep Dumps

Do you want to become a professional DevOps engineer, a cloud solutions architect, a cloud engineer, a modern developer or IT professional, a versatile product manager, or a hip project manager? If so, cloud skills and certifications may be just what you need to move into the cloud or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

2022 AWS Cloud Practitioner Exam Preparation

In this blog, we share AWS, Azure, and GCP cloud certification testimonials, along with frequently asked questions and answers (dumps).

#djamgatech #aws #azure #gcp #ccp #az900 #saac02 #saac03 #az104 #azai #dasc01 #mlsc01 #scsc01 #azurefundamentals #awscloudpractitioner #solutionsarchitect #datascience #machinelearning #azuredevops #awsdevops #az305 #ai900

  • Why IT leaders choose Google Cloud certification for their teams
    by (Training & Certifications) on May 27, 2022 at 4:00 pm

    As organizations worldwide move to the cloud, it’s become increasingly crucial to give teams the confidence and the right skills to get the most out of cloud technology. With demand for cloud expertise exceeding the supply of talent, many businesses are looking for new, cost-effective ways to keep up. When ongoing skills gaps stifle productivity, it costs you money. In Global Knowledge’s 2021 report, 42% of IT decision-makers reported having “difficulty meeting quality objectives” as a result of skills gaps, and in an IDC survey cited in the same Global Knowledge report, roughly 60% of organizations described a lack of skills as a cause of lost revenue. In today’s fast-paced environment, businesses with cloud knowledge are in a stronger position to achieve more. So what more could you be doing to develop and showcase cloud expertise in your organization?

    Google Cloud certification helps validate your teams’ technical capabilities while demonstrating your organization’s commitment to the fast pace of the cloud. “What certification offers that experience doesn’t is peace of mind. I’m not only talking about self-confidence, but also for our customers. Having us certified, working on their projects, really gives them peace of mind that they’re working with a partner who knows what they’re doing.” – Niels Buekers, managing director at Fourcast BVBA

    Why get your team Google Cloud certified? When you invest in cloud, you also want to invest in your people. Google Cloud certification equips your teams with the skills they need to fulfill your growing business needs.

    Speed up technology implementation: Organizations want to speed up transformation and make the most of their cloud investment. Nearly 70% of partner organizations recognize that certifications speed up technology implementation and lead to greater staff productivity, according to a May 2021 IDC Software Partner Survey. The same report also found that 85% of partner IT consultants agree that “certification represents validation of extensive product and process knowledge.”

    Improve client satisfaction and success: Getting your teams certified can be the first step to improving client satisfaction and success. Research covering more than 600 IT consultants and resellers in a September 2021 IDC study found that “fully certified teams met 95% of their clients’ objectives, compared to a 36% lower average net promoter score for partially certified teams.”

    Motivate your team and retain talent: In today’s age of the ongoing Great Resignation, IT leaders are rightly concerned about employee attrition, which can result in stalled projects, unmet business objectives, and new or overextended team members needing time to ramp up. In other words, attrition hurts. But when IT leaders invest in skills development for their teams, talent tends to stick around. According to a business value paper from IDC, comprehensive training leads to 133% greater employee retention compared to untrained teams. When organizations help people develop skills, people stay longer, morale improves, and productivity increases. Organizations wind up with a classic win-win situation as business value accelerates.

    Finish your projects ahead of schedule: With your employees feeling supported and well equipped to handle workloads, they can stay engaged and innovate faster with Google Cloud certifications. “Fully certified teams are 35% more likely than partially certified teams to finish projects ahead of schedule, typically reaching their targets more than two weeks early,” according to research in an IDC InfoBrief.

    Certify your teams: Google Cloud certification is more than a seal of approval. It can be your framework to increase staff tenure, improve productivity, satisfy your customers, and obtain other key advantages to launch your organization into the future. Once you get your teams certified, they’ll join a trusted network of IT professionals in the Google Cloud certified community, with access to resources and continuous learning opportunities. To discover more about the value of certification for your team, download the IDC paper and invite your teams to join our upcoming webinar to get started on their certification journey.

    Related article: How to become a certified cloud professional.

  • Getting this error deploying a function, can anyone tell me what to do?
    by /u/CutEnvironmental3615 (Google Cloud Platform Certification) on May 27, 2022 at 12:06 pm


  • New courses and updates from AWS Training and Certification in May 2022
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 24, 2022 at 4:25 pm

    Check out news and updates from AWS Training and Certification for cloud learners, AWS customers, and AWS Partners for May 2022. New digital courses focus on cloud essentials, networking basics, compute, container management, and audit activities. Classroom training is also available for learning about securing workloads on the AWS Cloud and building a data warehousing solution, and there are certification updates for Advanced Networking – Specialty, Solutions Architect – Professional, and SAP on AWS – Specialty . . .

  • Public preview: Azure Communication Services APIs in US Government cloud
    by Azure service updates on May 24, 2022 at 4:00 pm

    Use Azure Communication Services APIs for voice, video, and messaging in US Government cloud.

  • Microsoft AZ-800 (Administering Windows Server Hybrid Core Infrastructure)
    by /u/No-Energy2718 (Microsoft Azure Certifications) on May 24, 2022 at 11:41 am


  • SC-100 Study Cram
    by /u/JohnSavill (Microsoft Azure Certifications) on May 24, 2022 at 11:03 am


  • AZ-104 Passed!
    by /u/Walkipedia00 (Microsoft Azure Certifications) on May 23, 2022 at 9:15 pm

    Passed the test last night, barely got there but a pass is a pass! I used Savill Videos and SkillCertPro tests mainly. Also did a bit on Udemy with Scott Duffy. Test was a lot like SkillCertPro, but all of Savill's videos really give you a good understanding of the material. I probably only studied 20-25 hours total so I would recommend more than that before taking the exam. Good luck to everyone who hasn't taken it yet!!

  • Az-700 Objectives notion template
    by /u/benj_crew (Microsoft Azure Certifications) on May 23, 2022 at 8:10 pm

    Hi all, Whenever I sit an exam, I like to create a little checklist of all the exam objectives. It works really well to tick them off after the bulk of my study is done. That way I can focus on what I don't know. Anyway, people seemed to like the one I made for Az-104: https://www.reddit.com/r/AzureCertification/comments/twqicc/az104_objectives_notion_template/ So here is my new checklist for Az-700: https://fork-psychology-8fe.notion.site/3daf733b15954ba6a298f8ead828fc2b?v=697c9e170c8747f9906da8b90aaa712f Just click 'duplicate' in the top-right corner to copy it to your own Notion. Best of luck to you all.

  • Passed Az-104
    by /u/InsaneMethod (Microsoft Azure Certifications) on May 23, 2022 at 5:30 pm

    Whew! That was a tough one. Used Savill videos, Microsoft Learn, MeasureUp, and my hands-on experience with Azure at the MSP I've worked at for the past 6 months. About 50 hours of studying in total. Will probably go for the AZ-700 next, as networking is my strong suit. I wish everyone luck on their exam attempts!

  • Azure certification path for someone with no experience coding?
    by /u/FriendlyBrownMan (Microsoft Azure Certifications) on May 23, 2022 at 4:35 pm

    I work in helpdesk currently. My job is considering moving from on-prem to the cloud (Azure). I have already decided to start with the Azure Fundamentals cert (my company is offering to pay for it if I pass, as they would with any certification). After I pass this exam, what should be my next certification? I am looking to move out of help desk and into a role where I can help manage our cloud infrastructure. Last year I passed the AWS SAA and tried applying to jobs for 6 months with no luck because of my lack of experience. In this situation I'll get real on-the-job experience with Azure, so I'm going for it. Please help me find the right path. Thanks!

  • Passed AZ-900 yesterday
    by /u/DontDoIt2121 (Microsoft Azure Certifications) on May 23, 2022 at 3:24 pm

    Thinking of doing AZ-500 next since I watched the lectures last week during MS security week and have access to the labs. Any reason why I should study AZ-104 first, or should I just go for AZ-500 next weekend? I will be on to AZ-104 in the next month regardless, and hopefully AZ-305 the following month.

  • Renewal dates
    by /u/mrM1975 (Microsoft Azure Certifications) on May 23, 2022 at 7:20 am

    I received an Azure cert renewal notice from Microsoft today. The cert expires in Nov 2022. If I took the renewal exam and passed today (May 2022), when would my next renewal be due: May 2024 or Nov 2024? I can't find anything about it here: https://docs.microsoft.com/en-us/learn/certifications/renew-your-microsoft-certification TIA

  • New to Azure - Mentor or guide needed
    by /u/RefrigeratorWorried3 (Microsoft Azure Certifications) on May 23, 2022 at 5:40 am

    I am totally new to Azure and need some tips on where to start and how to proceed with certification. I have moderate programming experience.

  • AZ900 free practice tests?
    by /u/greyskull57 (Microsoft Azure Certifications) on May 22, 2022 at 6:38 pm

    Hi guys, is there any way to get paid AZ-900 practice tests for free? I don't feel like putting money into AZ-900; I just want to try them and pay for AZ-104 instead.

  • I have passed the MS-900. Where do I go from here for the SOC route?
    by /u/Moynzy (Microsoft Azure Certifications) on May 22, 2022 at 4:39 pm

    Hello all, my workplace uses M365 and I was asked to pass the MS-900. Where do I go from here? There are internal positions such as Level 2 Azure Engineer and SOC Analyst roles. I want to follow the SOC route, so do I study for the SC-900 or the AZ-900? Thanks for the support. Edit: Thanks, all. Booked the SC-900 for June 11th and the AZ-900 for July 30th. Wish me luck!

  • Recommended Training for the MD-100
    by /u/creatureshock (Microsoft Azure Certifications) on May 22, 2022 at 3:43 pm

    Hello all. Not sure this is the right place, but I'm taking a shot in the dark. I have to take the MD-100 for a new job and I'm having a hard time finding decent training for it. I've used the free training from Microsoft and the John Christopher training on Udemy, but I'm failing the MeasureUp practice tests that Microsoft/Pearson VUE sold along with the test. The practice tests keep asking questions that aren't covered in any of the training. Does anyone have any recommendations?

  • PEARSON VUE RUINED MY EXAM
    by /u/CarefulArtichoke7768 (Microsoft Azure Certifications) on May 22, 2022 at 11:50 am

    I'd been revising for 2 months for the SC-200 exam and gave up weekends to study. Exam day came and my exam didn't load properly: the questions were there but there was nowhere to fill in my answers. I spoke to 4 different customer service people who tried resetting my exam, and it didn't work. I've now been told they need to review the case and it can take a week for them to get back to me. This has really thrown off all the prep I have done, and I feel massively deflated now. The customer service team I was speaking to were a joke; it took forever to get a single response back from them.

  • Free CISSP, Security+, SC-900, AZ-900
    by /u/rj4511 (Microsoft Azure Certifications) on May 22, 2022 at 10:47 am


  • AZ-204 PowerShell Commands
    by /u/IS2020 (Microsoft Azure Certifications) on May 22, 2022 at 3:31 am

    Preparing for the AZ-204, and I noticed that the Microsoft learning paths use Azure CLI commands for the hands-on portions. Is it enough to be fluent in the CLI, or do you need to know the PowerShell equivalents for the exam? Thanks!
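    For readers wondering how the two surfaces compare, here is a minimal sketch of the same task done with the Azure CLI and, in comments, rough Az PowerShell equivalents. The resource names are illustrative placeholders, not values from any exam or learning path.

    ```bash
    # Azure CLI: create a resource group, an App Service plan, and a web app.
    # Names below (rg-az204-demo, plan-az204, app-az204-demo) are placeholders;
    # the web app name must be globally unique in practice.
    az group create --name rg-az204-demo --location eastus
    az appservice plan create --name plan-az204 --resource-group rg-az204-demo --sku B1
    az webapp create --name app-az204-demo --resource-group rg-az204-demo --plan plan-az204

    # Rough Az PowerShell equivalents of the first and last steps:
    # New-AzResourceGroup -Name rg-az204-demo -Location eastus
    # New-AzWebApp -ResourceGroupName rg-az204-demo -Name app-az204-demo -Location eastus -AppServicePlan plan-az204
    ```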

  • How long did it take you from starting a course to getting your cert?
    by /u/undertheinfluenceof (Microsoft Azure Certifications) on May 20, 2022 at 3:27 pm

    Just out of total curiosity, how long did it take any of you to go from starting to study for, say, AZ-900 to passing the exam? (It doesn't have to be AZ-900; any of your certs.) I am curious about time frames and what a realistic expectation looks like. I also know timelines will vary depending on experience.

  • How to create labs for AZ-104?
    by /u/Real_Lemon8789 (Microsoft Azure Certifications) on May 20, 2022 at 3:25 pm

    I have an Azure developer tenant that includes Microsoft licensing but does not include any subscription credits. I had opened a free Azure trial last year with a different tenant and never used it, so the $200 subscription credit expired and was wasted in the first 30 days. Is there another way to get a 30-day trial of subscription credits and apply it to the developer tenant I have now, or else create a separate tenant with a $200 credit to use for AZ-104 test prep? I was also planning to use the GitHub labs related to AZ-104, but I don't understand how to use them. I went to the GitHub page and just see links to a bunch of files, with no explanation of what to do with them.
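    As a rough illustration of how little is needed for self-built AZ-104 practice once a subscription with some credit is available, here is a minimal Azure CLI sketch that stands up a small lab VM and tears it down afterwards. The resource names, image alias, and VM size are placeholder assumptions, not part of any official lab.

    ```bash
    # Minimal AZ-104-style lab: one resource group, one small VM, then tear down.
    # Names and the image alias are illustrative; costs apply outside a free trial.
    az group create --name rg-az104-lab --location eastus

    az vm create \
      --resource-group rg-az104-lab \
      --name vm-lab-01 \
      --image Ubuntu2204 \
      --size Standard_B1s \
      --admin-username azureuser \
      --generate-ssh-keys

    # Delete everything when the exercise is done to stop charges.
    az group delete --name rg-az104-lab --yes --no-wait
    ```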

  • Cert suggestions?
    by /u/SENDMEYOURROBOTDICKS (Microsoft Azure Certifications) on May 20, 2022 at 8:22 am

    Hey guys, I was wondering if anyone could suggest which certs I should try to tackle after getting AZ-900. A bit about myself: I have about 8 years of experience as a support engineer, mostly on-premises with Windows Server and Microsoft Exchange. I've had some entry-level experience with networking, and I haven't done much automation yet. I'm planning to fill that gap, start using PowerShell and Python, and move toward a role that resembles an SRE. Is it worth getting the AZ-104 certification, or should I move directly toward AZ-204 and AZ-400?

  • 05.2022 Free Voucher or Discounts
    by /u/TrySmile (Microsoft Azure Certifications) on May 20, 2022 at 7:01 am

    Free voucher:
    https://www.microsoft.com/en-us/cloudskillschallenge/build/registration/2022
    https://www.microsoft.com/en-rs/trainingdays?activetab=ms-training-days:primaryr4
    Discounts:
    https://developer.microsoft.com/en-us/offers/30-days-to-learn-

  • Passed AZ-900 at a Pearson Vue center
    by /u/gotopune (Microsoft Azure Certifications) on May 20, 2022 at 3:57 am

    Today was my first attempt at AZ-900 and I passed (scored 805). I took the test at a Pearson VUE center because it was under a mile from my place, so I figured why not 😉 A few things to note: I prepared using the material on the MS website. I also had access to the ESI portal (from work), and I took the practice test there. I did not use any YouTube material. I don't have a lot of hands-on experience, but in my prep test I noticed one question had a screenshot of the Azure portal and asked us to identify what to select to perform an action. So I went through the Azure portal and studied the names of each of the icons; sure enough, I got 2 such questions in the test, so that prep helped. I read here on this sub that the exam changed in May. I didn't know what changed and I can't tell you if I noticed anything, but there was an option to take a break. The condition specified that after taking a break, we're not allowed to go back to any of the questions we've already answered. Hope this helps people looking to take the test soon. Definitely go through the Microsoft learning portal and get familiar with some of the options in the Azure portal.

  • AZ-104 Skillcertpro Question Confusion.
    by /u/Visual_Classic_7459 (Microsoft Azure Certifications) on May 20, 2022 at 12:59 am

    Hey guys, so I am currently studying for AZ-104 and I have come across what I think are conflicting practice questions. I think one of them is part of a much larger question, as it pertains to SLA availability: in one screenshot the question only deals with a scale set, whereas in the other there are two questions, one about a scale set and the other about an availability set. Please tell me whether the given answers are actually correct; I have done some research but can't find a concrete answer. The reason I say one of them may be part of a larger question is that I have now taken the AZ-104 exam twice, and questions like this tend to be a two-in-one deal with dropdown boxes for the possible answers. Please tell me if I am wrong about any of this. Thanks. https://preview.redd.it/4u7av0c65j091.jpg?width=1186&format=pjpg&auto=webp&s=2ebab588a854e4ef01963a7ed64d503a3d7708d3 https://preview.redd.it/lex4s9c65j091.jpg?width=899&format=pjpg&auto=webp&s=4e7910d75423fed558730fe57bab102e836f2ebd
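    Without speaking to the specific SkillCertPro answers, a quick way to see the difference between the two constructs is to create one of each with the Azure CLI. This is a hedged sketch with placeholder names; check the current Azure SLA documentation for the exact availability figures the exam expects.

    ```bash
    # Availability set: VMs spread across fault and update domains within a region.
    az vm availability-set create \
      --resource-group rg-demo \
      --name avset-demo \
      --platform-fault-domain-count 2 \
      --platform-update-domain-count 5

    # Scale set spread across availability zones in the region.
    az vmss create \
      --resource-group rg-demo \
      --name vmss-demo \
      --image Ubuntu2204 \
      --instance-count 3 \
      --zones 1 2 3 \
      --admin-username azureuser \
      --generate-ssh-keys
    ```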

  • Azure Terrafy – Azure’s best Terraform buddy
    by /u/ormamag (Microsoft Azure Certifications) on May 19, 2022 at 8:41 pm


  • Thoughts on A Cloud Guru for Azure?
    by /u/Drewskiii727 (Microsoft Azure Certifications) on May 19, 2022 at 5:27 pm

    They are having a 40% off sale for the year, would you recommend I get this? Trying to learn Azure and coming from a non-tech background. Hoping to make a career switch to tech in the future.

  • New Research shows Google Cloud Skill Badges build in-demand expertise
    by (Training & Certifications) on May 19, 2022 at 4:00 pm

    We live in a digital world, and the future of work is in the cloud. In fact, 61% of HR professionals believe hiring developers will be their biggest challenge in the years ahead.[1] During your personal cloud journey, it’s critical to build and validate your skills in order to evolve with the rapidly changing technology and business landscape. That is why we created skill badges: a micro-credential issued by Google Cloud to demonstrate your cloud competencies and your commitment to staying on top of the latest Google Cloud solutions and products. To better understand the value of skill badges to holders’ career goals, we commissioned a third-party research firm, Gallup, to conduct a global study on the impact of Google Cloud skill badges.

    Skill badge earners overwhelmingly gain value from and are satisfied with Google Cloud skill badges. Holders state that they feel well equipped with the variety of skills gained through skill badge attainment, that they are more confident in their cloud skills, are excited to promote their skills to their professional network, and are able to leverage skill badges to achieve future learning goals, including a Google Cloud certification:

    - 87% agree skill badges provided real-world, hands-on cloud experience[2]
    - 86% agree skill badges helped build their cloud competencies[2]
    - 82% agree skill badges helped showcase growing cloud skills[2]
    - 90% agree that skill badges helped them in their Google Cloud certification journey[2]
    - 74% plan to complete a Google Cloud certification in the next six months[2]

    Join thousands of other learners and take your career to the next level with Google Cloud skill badges. To learn more, download the Google Cloud Skills Badge Impact Report at no cost.

    1. McKinsey Digital, "Tech Talent Technotics: Ten new realities for finding, keeping, and developing talent," 2022
    2. Gallup study, sponsored by Google Cloud Learning: "Google Cloud Skill Badge Impact Report," May 2022

    Related article: How to prepare for — and ace — Google’s Associate Cloud Engineer exam. The Cloud Engineer Learning Path is an effective way to prepare for the Associate Cloud Engineer certification.

  • AZ900!!!!
    by /u/Mildew69 (Microsoft Azure Certifications) on May 19, 2022 at 3:09 pm

    Give me your best advice, experience, and study suggestions for my AZ-900 test next Wednesday afternoon. I have 10+ years in IT and am familiar with the cloud, but not proficient. Thanks, y'all. This sub has been super helpful so far.

  • Study group for AZ-104
    by /u/Dom_thedestroyer (Microsoft Azure Certifications) on May 19, 2022 at 11:56 am

    Looking for anyone who wants to study or is already studying for the AZ-104 exam. I'm planning on taking it in June, so I'm looking for someone to bounce ideas off of. Let me know!

  • Top five reasons AWS Partners should take AWS Training
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 16, 2022 at 4:27 pm

    Are you new to an Amazon Web Services (AWS) Partner business and the cloud? Not sure where to start your cloud learning journey? It may feel daunting but AWS offers Partner-exclusive courses to make it easier to understand cloud fundamentals. In fewer than 30 minutes, you can begin boosting your confidence and credibility with both customers and your organization . . .

  • When Artificial Intelligence becomes more than a passion
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on May 5, 2022 at 6:01 pm

    Learn how AWS Certifications can help you validate your knowledge and enhance your credibility. Dipayan Das updated his artificial intelligence (AI) skills with AWS Training and Certification. He shares the resources he used and the impact of his training, including his ability to add value to his organization and clients. . .

  • If you are looking for a job relating to Azure, try r/AzureJobs
    by /u/whooyeah (Microsoft Azure Certifications) on May 5, 2022 at 10:41 am


  • GCP Certification missing certificates
    by /u/ProtossforAiur (Google Cloud Platform Certification) on May 2, 2022 at 8:31 am

    These certifications are a scam. They provide you with a link to the certificate, and they can remove that link whenever they want. If you get certified, make sure you download the PDF; Google doesn't keep a backup of certificates. Yes, you heard that right. We asked for a copy of a certification because the link was not working, and they replied that they couldn't provide one.

  • How we’re keeping up with the increasing demand for the Google Workspace Administrator role
    by (Training & Certifications) on April 29, 2022 at 4:00 pm

    We’ve rebranded the Professional Collaboration Engineer Certification to the Professional Google Workspace Administrator Certification and updated the learning path. To mark the moment, we sat down with Erik Geerdink from SADA to talk about how the Google Workspace Administrator role and the demand for this skill set have changed over the years. Erik is a Deployment Engineer and Pod Lead. He holds a Professional Google Workspace Administrator Certification and has worked with Google Workspace for more than six years.

    What was it like starting out as a Google Workspace Administrator? When I first started, I was doing Google Workspace support as a Level 2 administrator. At that time, there were fewer admin controls for Google Workspace. There were calendar issues, some mail routing issues, maybe a little bit of data loss prevention (DLP), but that was about it. About five years ago, I transferred into Google deployment and really got to see all that went on with deploying Google Workspace and troubleshooting advanced issues. Since then, what you can accomplish in the admin console has really taken off. There are still Gmail and Calendar configurations, but Google has really upped its game on the security posture it offers now. The extent of DLP isn’t just Gmail and Drive anymore; it extends into Chat. And we’re doing a lot of Context-Aware Access to make sure users only have as much access as IT compliance allows in our deployments. Calendar Interop, which allows users in different systems to see availability, has been a big area of focus as well.

    How has the Google Workspace Administrator role changed over the last few years? It used to be that you were a systems admin who also took care of the Google portion. But with Google Workspace often being the entry point to Google Cloud, we’ve had to become more knowledgeable about the platform as a whole. Now, we not only do training with Google Workspace admins for our projects, we also talk to their Google Cloud counterparts as well. Google Workspace is changing all the time, and the weekly updates that Google sends out are great. As an engineering team, every Wednesday we review each Google Workspace update that’s come out to understand how it affects us, our clients, and our upcoming projects. There’s a lot to it. It’s not just a little admin role anymore. It’s a strategic technology role.

    What motivated you to get Google Cloud Certified? I spent the first 15 years of my career in cold server room roles, and I knew I had to get cloudy. I wanted to work with Google, and it was a no-brainer given the organization’s reputation for innovation. I knew this certification exam was the one to get me in the door. The Professional Google Workspace Administrator certification was required to level up as an administrator and to make sure our business kept getting the most out of Google Workspace.

    How has the demand for certified Google Workspace admins changed recently? Demand has absolutely gone up. We are growing so much, and we need more professionals with this certification. It’s required for all of our new hires. When I see a candidate who already has the certification, they go to the top of the list. I’ll skip all the other resumes to find someone who has this experience. We’re searching globally, not just in North America, to find the right people to fill this strategic role.

    Explore the new learning path: In order to keep up with the changing demands of this role, we’ve rebranded the Professional Collaboration Engineer Certification to the Professional Google Workspace Administrator Certification and updated the learning path. The learning path now aligns with the improved admin console, and we’ve replaced the readings with videos for a better learning experience: in total, we added 17 new videos across 5 courses to match new features and functionality. Earn the Professional Google Workspace Administrator Certification to distinguish yourself among your peers and showcase your skills.

    Related article: Unlock collaboration with Google Workspace Essentials. Introducing Google Workspace Essentials Starter, a no-cost offering to bring modern collaboration to work.

  • How one learner earned four AWS Certifications in four months
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 28, 2022 at 4:16 pm

    Ever wonder what it takes to earn an AWS Certification? Imagine earning four in four months. Rola Dali, a senior software developer at Local Logic, shares her experience and insights about challenging herself to do just that. She breaks down the resources she found most helpful and her overall motivation to invest in her cloud learning journey . . .

  • Build your cloud skills with no-cost access to Google Cloud training on Coursera
    by (Training & Certifications) on April 28, 2022 at 4:00 pm

    Attracting talented individuals with cloud skills is critical to success as organizations continue to adopt and optimize cloud technology. The lack of cloud expertise and experience is a top and growing challenge for businesses as they expand their cloud footprint and search for skilled talent. To help meet this need, we are now offering access to over 500 Google Cloud self-paced labs on Coursera. A selected collection of the most popular self-paced labs, known on Coursera as projects, is available at no cost for one month, from April 28 to May 29, 2022. Learners can choose their preferred format and claim one month of free access to a top Google Cloud project, course, Specialization, or Professional Certificate.

    What is a lab? A lab is a learning experience in which you complete a scenario-based use case by following a set of instructions, in a specified amount of time, in an interactive hands-on environment. Labs are completed in the real Google Cloud console and other Google Cloud products using temporary credentials, as opposed to a simulation or demo environment, and take 30 to 90 minutes to complete depending on difficulty level. Our goal is to enable you to apply your new skills and be effective immediately in real-world cloud technology settings. Many of these labs include a variety of tasks and activities for you to choose from to best fit your needs; combine bite-size individual labs to create a personalized set of learning and upskilling with clear application in a sandbox environment. Labs are available for all skill levels and cover a wide range of topics:

    - Cloud essentials
    - Cloud engineering and architecture
    - Machine learning
    - Data analytics and engineering
    - DevOps

    Here is a roundup of some popular and trending labs right now:

    - Getting Started with Cloud Shell and gcloud
    - Kubernetes Engine: Qwik Start
    - Introduction to SQL for BigQuery and Cloud SQL
    - Migrating a Monolithic Website to Microservices on Google Kubernetes Engine

    Get a feel for the lab experience: Creating a Virtual Machine is one of our most popular labs, taking place directly in the Google Cloud console. In this beginner-level project, you will learn how to create a Google Compute Engine virtual machine and understand zones, regions, and machine types. It takes 40 minutes to complete, and you’ll earn a shareable certificate. As an example of more advanced content, Predict Baby Weight with TensorFlow on AI Platform requires experience to train, evaluate, and deploy a machine learning model to predict a baby’s weight. The lab activities are completed in a real cloud environment, not in a simulation or demo environment. It takes 90 minutes to complete, and you will earn a shareable certificate.

    Kick off your no-cost learning journey today: For direct access to self-paced labs, we recommend starting with Coursera’s collection page, where you can browse labs/projects by our most popular topics, or exploring the full catalog to find the cloud projects that are right for your career goals by browsing Google Cloud “projects” on Coursera. The month of free Google Cloud learning on Coursera is available from April 28 to May 29, 2022, so join us to evolve your skill set and cloud knowledge. Ready to start learning Google Cloud at no cost for 30 days? Sign up here.

    Related article: Training more than 40 million new people on Google Cloud skills. To help more than 40 million people build cloud skills, Google Cloud is offering limited-time no-cost access to all training content.
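    For readers who want to try the equivalent steps outside the lab environment, the following is a minimal gcloud sketch that roughly mirrors the "Creating a Virtual Machine" lab. The project ID, zone, machine type, and image are illustrative assumptions, not the lab's prescribed values.

    ```bash
    # Roughly what the "Creating a Virtual Machine" lab walks through, via gcloud.
    # Replace my-project-id with your own project; zone and machine type are examples.
    gcloud config set project my-project-id
    gcloud compute instances create lab-vm-1 \
      --zone=us-central1-a \
      --machine-type=e2-medium \
      --image-family=debian-12 \
      --image-project=debian-cloud

    # Clean up when finished to avoid charges.
    gcloud compute instances delete lab-vm-1 --zone=us-central1-a --quiet
    ```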

  • Learn to build batch analytics solutions with new AWS classroom course
    by Kumar Kumaraguruparan (AWS Training and Certification Blog) on April 27, 2022 at 4:00 pm

    Learn more about our new AWS intermediate-level course, Building Batch Data Analytics Solutions on AWS. If you are a data engineer or data architect who builds data analytics pipelines with open-source analytics frameworks, such as Apache Hadoop or Apache Spark, this one-day, virtual classroom course will help you develop these skills. You’ll learn to build a modern data architecture using Amazon EMR, an enterprise-grade Apache Spark, and Apache Hadoop managed service . . .
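    To give a rough sense of what building a batch analytics pipeline on Amazon EMR looks like in practice, here is a hedged AWS CLI sketch that launches a small transient EMR cluster with Spark and Hadoop installed. The release label, instance settings, and S3 log bucket are placeholders, not values taken from the course.

    ```bash
    # Minimal sketch: launch a small transient EMR cluster with Spark and Hadoop.
    # The release label, instance type/count, and log bucket are placeholders.
    aws emr create-cluster \
      --name "batch-analytics-lab" \
      --release-label emr-6.10.0 \
      --applications Name=Spark Name=Hadoop \
      --instance-type m5.xlarge \
      --instance-count 3 \
      --use-default-roles \
      --log-uri s3://my-emr-logs-bucket/ \
      --auto-terminate
    ```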

  • New courses and updates from AWS Training and Certification in April
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 26, 2022 at 4:20 pm

    New Amazon Web Services (AWS) Training and Certification courses and offerings for cloud learners, AWS customers, and AWS Partners for April 2022. New digital offerings include fundamental and intermediate courses that focus on SAP, managing game workloads, designing blockchain solutions, Amazon Connect, AWS storage and databases, and evaluating migration scenarios. And if you’re interested in building a batch data analytics solution on AWS, there’s a new intermediate-level classroom course . . .

  • 3 tier application gcp terraform code
    by /u/savetheQ (Google Cloud Platform Certification) on April 25, 2022 at 7:48 pm

    Hi folks, does anyone have a sample Git repo with Terraform code for a three-tier application on GCP?

  • Professional Cloud Architect - materials recommendations needed.
    by /u/theGrEaTmPm (Google Cloud Platform Certification) on April 24, 2022 at 10:56 am

    Hi, what materials did you use when preparing for the Professional Cloud Architect exam? Do you have any proven materials? How much time did you spend getting ready for the exam? Thanks in advance for your help.

  • Bouncing back: shifting from hospitality to cloud
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 22, 2022 at 4:31 pm

    Hear directly from AWS re/Start graduate, Antonio O'Donnell, about his experience with the reskilling program. AWS re/Start is a full-time, classroom-based skills training program that prepares professionals for cloud-based careers.

  • How to prepare for — and ace — Google’s Associate Cloud Engineer exam
    by (Training & Certifications) on April 22, 2022 at 4:00 pm

    Do you want to get out of the server room and into the cloud? Now’s the time to sign up for our Cloud Engineer Learning Path, now with the newly refreshed Preparing for the Associate Cloud Engineer certification course, and start working toward your Associate Cloud Engineer certification. Earning your Associate Cloud Engineer certification sends a strong signal to potential employers about what you can accomplish in Google Cloud. Associate Cloud Engineers can deploy and secure applications and infrastructure, maintain enterprise solutions to ensure they meet performance metrics, and monitor the operations of multiple projects in the cloud. They have also demonstrated that they can use the Google Cloud console and the command-line interface to maintain and scale deployed cloud solutions that leverage Google-managed or self-managed services on Google Cloud.

    Many Associate Cloud Engineers come from the on-premises world of racking and stacking servers and are ready to upgrade their skills to the cloud era. Achieving the Associate Cloud Engineer certification is a great step toward growing a career in IT, opening you up to roles such as cloud developer or architect, cloud security engineer, cloud systems engineer, or network engineer, among others.

    The Associate Cloud Engineer learning path: Before attempting the Associate Cloud Engineer exam, we recommend that you have 6+ months of hands-on experience with Google Cloud products and solutions. While you’re gaining that experience, a good way to enhance your preparation is to follow the Cloud Engineer Learning Path, which consists of on-demand courses, hands-on labs, and the opportunity to earn skill badges. Here are our recommended steps:

    1. Understand what’s on the exam: Review the exam guide to determine whether your skills align with the topics on the exam.
    2. Create your study plan with the Preparing for Your Associate Cloud Engineer Journey course: This course helps you structure your preparation for the exam. You will learn about the Google Cloud domains covered by the exam and how to create a study plan to improve your domain knowledge.
    3. Start preparing: Follow the Cloud Engineer learning path, where you’ll dive into Google Cloud services such as Compute Engine, Google Kubernetes Engine, App Engine, Cloud Storage, Cloud SQL, and BigQuery.
    4. Earn skill badges: Demonstrate your growing Google Cloud skills by sharing your earned skill badges along the way. Skill badges that will help you prepare for the Associate Cloud Engineer certification include Perform Foundational Infrastructure Tasks in Google Cloud; Automating Infrastructure on Google Cloud with Terraform; Create and Manage Cloud Resources; and Set Up and Configure a Cloud Environment in Google Cloud.
    5. Review additional resources: Test your knowledge with the sample exam questions.
    6. Certify: Finally, register for the exam and choose whether to take it remotely or at a nearby testing center.

    Start your prep to become an Associate Cloud Engineer: Take the next step toward becoming a cloud engineer and develop the recommended hands-on experience by earning the recommended skill badges. Register and get 30 days of free access to the Cloud Engineer learning path on Google Cloud Skills Boost!

    Related article: This year, resolve to become a certified Professional Cloud Developer. Follow this Google Cloud Skills Boost learning path to help you earn your Google Cloud Professional Cloud Developer certification.
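    For hands-on practice alongside the learning path, a few gcloud warm-up commands touching the services named above can help. This is a minimal sketch using placeholder names; the project ID, cluster, and bucket are assumptions, not part of the official learning path.

    ```bash
    # A few gcloud warm-ups for Associate Cloud Engineer prep.
    # my-project-id and all resource names are placeholders.
    gcloud auth login
    gcloud config set project my-project-id

    # Compute Engine and GKE, two services called out in the learning path:
    gcloud compute instances list
    gcloud container clusters create-auto ace-demo-cluster --region=us-central1

    # Cloud Storage via the gcloud storage surface:
    gcloud storage buckets create gs://my-ace-demo-bucket --location=us-central1
    ```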

  • New to GCP and looking for a study group!
    by /u/sulliv16 (Google Cloud Platform Certification) on April 19, 2022 at 4:15 pm

    As the title states, I am starting my venture into GCP and would love to get connected with a few people to help with accountability and share insights as we learn! I have around 3 years of experience working with AWS and hold the Solutions Architect Professional and Security Specialty certifications. I know next to nothing about GCP, but I am very familiar with cloud concepts, and cloud has been my work focus for the past 2 years. Let me know if you would be interested in linking up and learning together! Thanks all.

  • GCP Professional Cloud Architect Certification Blog.
    by /u/HamanSharma (Google Cloud Platform Certification) on April 17, 2022 at 12:24 am

    Check out the preparation guide for GCP Cloud Architect Certification with tips and resources - https://blog.reviewnprep.com/gcp-cloud-architect. Hope this helps everyone preparing for this certification.

  • Introducing the Professional Cloud Database Engineer certification
    by (Training & Certifications) on April 12, 2022 at 3:00 pm

    Today, we’re pleased to announce the new Professional Cloud Database Engineer certification, in beta, to help database engineers translate business and technical requirements into scalable and cost-effective database solutions. By participating in the beta, you will directly influence and enhance the learning and career path for other cloud database engineers, and upon passing the exam, you will become one of the first Google Cloud Certified Cloud Database Engineers in the industry. The cloud database space is evolving rapidly, with the worldwide cloud database market projected to reach $68.5 billion by 2026. As more databases move to fully managed cloud database services, the traditional database engineer is now being tasked with more nuanced and advanced functions. In fact, there is a massive need for database engineers to lead strategic decision-making and distinguish themselves with a more developed and advanced skill set than what the industry previously called for.

    Why the certification is important: Cloud database engineers are critical to the success of your organization, and that’s why this new certification from Google Cloud is so important. These engineers are uniquely skilled at designing, planning, testing, implementing, and monitoring databases, including migration processes. Additionally, they provide the right guidance about which databases are best for a company’s specific use cases, and they’re able to guide developers when making decisions about which databases to use when building applications. These engineers lead migration efforts while ensuring customers are getting the most out of their database investment. The new certification will validate an engineer’s ability to:

    - Design scalable cloud database solutions
    - Manage a solution that can span multiple databases
    - Plan and execute database migrations
    - Deploy highly scalable databases in Google Cloud

    Before your exam, be sure to check out the exam guide to familiarize yourself with the topics covered, and round out your skills by following the Database Engineer learning path, which includes online training, in-person classes, hands-on labs, and additional resources to help you prepare. I am excited to welcome you to the program. Sign up now and save 40% on the cost of the certification.

    Related article: Google Cloud’s key investment areas to accelerate your database transformation. This blog focuses on the six key database investment areas that help you accelerate your digital transformation journey.
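    To give a flavor of the kind of task this certification covers, here is a hedged gcloud sketch that deploys a managed Cloud SQL for PostgreSQL instance and adds a database and user. The database version, machine tier, region, and names are illustrative assumptions, not exam content.

    ```bash
    # Minimal sketch: deploy a managed Cloud SQL for PostgreSQL instance.
    # Database version, machine tier, region, and names are illustrative choices.
    gcloud sql instances create demo-pg-instance \
      --database-version=POSTGRES_14 \
      --tier=db-custom-2-7680 \
      --region=us-central1

    # Add an application database and a user on the new instance.
    gcloud sql databases create appdb --instance=demo-pg-instance
    gcloud sql users create appuser --instance=demo-pg-instance --password=change-me
    ```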

  • Now accepting applications for the AWS AI & ML Scholarship program
    by Anastacia Padilla (AWS Training and Certification Blog) on April 11, 2022 at 5:09 pm

    Calling all high school and college students who are at least 16 years old and underserved or underrepresented in tech globally – we invite you to apply for the Amazon Web Services (AWS) Artificial Intelligence (AI) & Machine Learning (ML) Scholarship Program. The AWS AI & ML Scholarship Program, in collaboration with Intel and Udacity, will launch this summer seeking to inspire, motivate, and educate students about AI and ML to nurture a diverse workforce of the future . . .

  • Using a scientific thought process to improve customer experiences
    by Marwan Al Shawi (AWS Training and Certification Blog) on April 8, 2022 at 4:01 pm

    Learn approaches and tips to better understand the needs of the end customer by asking the right questions. This blog shares two techniques of critical thinking to help you break down a complex scenario into smaller parts, allowing you to better analyze the situation and take appropriate action . . .

  • Announcing new certification: AWS Certified: SAP on AWS – Specialty
    by Training and Certification Blog Editor (AWS Training and Certification Blog) on April 7, 2022 at 8:41 pm

    We're introducing our latest AWS Certification: the AWS Certified: SAP on AWS – Specialty certification. Showcase your expertise in designing, implementing, migrating, and operating SAP workloads on AWS. Register today!

  • Train your organization on Google Cloud Skills Boost
    by (Training & Certifications) on April 7, 2022 at 1:00 pm

    Enterprises are moving to cloud computing at an accelerated pace, with an estimated 85% of enterprises adopting a cloud-first principle by 2025 (Gartner®, "Gartner says Cloud will be the Centerpiece of the New Digital Experience," Laurence Goasduff, November 10, 2021). There are countless reasons why enterprises are moving to the cloud, from reduced IT costs and increased scalability to improved security and efficiency. However, this rapid change has presented a challenge: how will organizations build the skills they need to accelerate cloud adoption? The answer is comprehensive training. In March 2022 we commissioned IDC, an independent market intelligence firm, to write a white paper that studied the impact of comprehensive training and certification on cloud adoption. When organizations are trained, they see:

    - Significantly greater improvement in top business priorities: 133% greater improvement in employee retention and 56% greater improvement in customer experience scores
    - Accelerated cloud adoption, reduced time to value, and greater ROI: trained organizations are 10X more likely to implement cloud in 2 years
    - Greater performance improvements in areas like leveraging data analytics, protecting data, and jumpstarting innovation

    (IDC White Paper, sponsored by Google Cloud Learning: "To Maximize Your Cloud Benefits, Maximize Training," Doc #US48867222, March 2022.) To learn more, download the white paper.

    Build team skills in Google Cloud Skills Boost: Coupling the research above with our commitment to equip more than 40 million people with cloud skills, we are excited to provide business organizations with a comprehensive platform to address their teams’ cloud skilling needs. Google Cloud Skills Boost combines award-winning learning experiences with the ability to earn credentials that validate learning, managed and delivered directly by Google Cloud with enterprise-level features. These features allow organization leaders to manage access and user permissions for their team and to drive effective business outcomes using learning analytics. In addition, administrators can grant their team members access to the Google Cloud content catalog, which includes hundreds of courses, labs, and credentials authored by Google Cloud experts.

    Ready to get started? Organizations can trial these features today through an exclusive no-cost trial (based on eligibility). Contact your account team to learn more about your eligibility for the trial and how to set up your organization on Google Cloud Skills Boost. New to Google Cloud? Visit our team training page and complete the learning assessment to understand your team’s training needs and get connected with an account team. Click here to learn more about how comprehensive training impacts cloud adoption.

    GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

    Related article: Women Techmakers journey to Google Cloud certification. Google Cloud is creating more opportunities in the credentialing space with a certification journey for Ambassadors of the Women Techmakers community.

  • Looking for Good Practice Exams
    by /u/zeeplereddit (Google Cloud Platform Certification) on April 3, 2022 at 10:15 pm

    I have done some googling on practice exams for the Google Cloud Digital Leader exam and I have only come across the Udemy offering. I have done Udemy courses before but I have no idea what their practice exams are like. Is there anyone here with any advice or suggestions in this regard?

  • General availability: Azure Database for PostgreSQL - Hyperscale (Citus) now FedRAMP High compliant
    by Azure service updates on March 30, 2022 at 4:01 pm

    Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure, is now compliant with FedRAMP High.

  • Best Podcasts for Cert Seekers?
    by /u/zeeplereddit (Google Cloud Platform Certification) on March 24, 2022 at 10:07 pm

    Hi folks, I am greatly looking forward to embarking on my new adventure of getting several Google certs. To that end, I am wondering: what are the best podcasts to listen to during my commute to and from work? The types of podcasts I'm hoping for include those that discuss the exams, go over sample questions in detail, interview people who have taken the tests, and discuss the concepts I will be wrapping my head around while I go after the certs. Thanks in advance!

  • Accelerating Government Compliance with Google Cloud’s Professional Service Organization
    by (Training & Certifications) on March 21, 2022 at 5:00 pm

    Did you know that by 2025, enterprise IT spending on public cloud computing will overtake traditional IT spending? In fact, 51% of IT spend on application software, infrastructure software, business process services, and system infrastructure will transition to the public cloud, compared to 41% in 2022.[1] As enterprises continue to rapidly shift to the cloud, government agencies must prioritize and accelerate security and compliance implementation. In May 2021, the White House issued an Executive Order requiring US Federal agencies to accelerate cloud adoption, embrace security best practices, develop plans to implement Zero Trust architectures, and map implementation frameworks to FedRAMP. The Administration’s focus on secure cloud adoption marks a critical shift to prioritizing cybersecurity at scale. Google Cloud’s Public Sector Professional Services Organization (PSO) has committed to helping customers meet security and compliance requirements in the cloud through specialized consulting engagements.

    Accelerating Authority to Operate (ATO): The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011 as a government-wide program that promotes the adoption of secure cloud services across the federal government. FedRAMP provides a standardized approach to security and risk assessment for cloud technologies and federal agencies. US Federal agencies are required to utilize and implement FedRAMP cloud service offerings as part of the “Cloud First” federal cloud computing strategy. While Google Cloud provides a FedRAMP-authorized cloud services platform and a robust catalog of FedRAMP-approved products and services (92 services and counting), customers are still tasked with achieving Agency ATO for the products and services they use, and Google Cloud provides many resources to assist customers with this journey. Google Cloud’s FedRAMP package can be accessed by completing the FedRAMP Package Access Request Form and submitting it to info@fedramp.gov. Additionally, customers can use Google’s NIST 800-53 ATO Accelerator as a starting point for documenting control implementation. Finally, Google Cloud’s Public Sector PSO offers the following strategic consulting engagements to help customers streamline the Agency ATO process.

    Cloud Discover: FedRAMP is a six-week interactive workshop for customers that are just getting started with the ATO process on Google Cloud. Customers are educated on FedRAMP fundamentals, Google’s security and compliance posture, and how to approach ATO on Google Cloud. Through deep-dive interviews and design sessions, PSO helps customers craft an actionable ATO plan, assess FedRAMP readiness, and develop a conceptual ATO boundary. This engagement helps organizations establish a clear understanding and roadmap for FedRAMP ATO on Google Cloud.

    FedRAMP Security Review is a ten- to twelve-week engagement that aids customers in FedRAMP operational readiness. PSO consultants perform detailed FedRAMP architecture reviews to identify potential gaps in NIST 800-53 security control implementation and Google Cloud secure architecture best practices. Findings from the security reviews are shared with the customer along with configuration guidance and recommendations. This engagement helps organizations prepare for the third-party or independent security assessment that is required for FedRAMP ATO.

    Cloud Deploy: FedRAMP is a multi-month engagement designed to help customers document the details of their FedRAMP System Security Plan (SSP) and corresponding NIST 800-53 security controls in preparation for Agency ATO on Google Cloud at FedRAMP Low, Moderate, or High. PSO collaborates with customers to develop a detailed technical infrastructure design document and a security control matrix capturing evidence of the FedRAMP system architecture, security control implementation, data flows, and system components. PSO can also partner with a third-party assessment organization (3PAO) or an independent assessor (IA) to support customer efforts for the FedRAMP security assessment. This engagement helps customer system owners prepare for Agency ATO assessment and package submission.

    Developing a Zero Trust strategy: In addition to providing FedRAMP enablement, Public Sector PSO has partnered with the Google Cloud Chief Information Security Officer (CISO) team to assist organizations with developing a zero trust architecture and strategy. Zero Trust Foundations is a seven-week engagement co-delivered by Google Cloud’s CISO and PSO teams. CISO and PSO educate customers on zero trust fundamentals, Google’s journey to zero trust through BeyondCorp, and defense-in-depth best practices. The CISO team walks customers through a Zero Trust Assessment (ZTA) to understand the organization’s current security posture and maturity. Insights from the ZTA enable the CISO team to work with the customer to identify an ideal first-mover workload for zero trust adoption. Following the CISO ZTA, PSO facilitates a deep-dive Zero Trust Workshop (ZTW), collaborating with key customer stakeholders to develop a NIST 800-207-aligned, cloud-agnostic zero trust architecture for the identified first-mover workload. The zero trust architecture is part of a comprehensive zero trust strategy deliverable based on focus areas called out in the Office of Management and Budget (OMB) Federal Zero Trust Strategy released in January 2022.

    Scaling secure cloud adoption with PSO: Public Sector PSO enables customer success by sharing our technical expertise and providing cloud strategy, implementation guidance, training, and enablement using our proven methodology. As enterprise IT, operations, and organizational models continue to evolve, our goal is to help government agencies accelerate their security and compliance journeys in the cloud. To learn more about the work we are doing with the federal government, visit cloud.google.com/solutions/federal-government.

    1. Gartner, "Gartner Says More Than Half of Enterprise IT Spending in Key Market Segments Will Shift to the Cloud by 2025."

  • GCP - PCNE (Thoughts on ACG/A cloud guru) training material
    by /u/friday963 (Google Cloud Platform Certification) on March 20, 2022 at 1:21 am

    Has anyone here taken the PCNE exam and used A Cloud Guru as their primary study resource? If so, what are your thoughts on the quality of the study material? Is it enough to pass the cert, or were many more external resources needed? So far I've done Qwiklabs and ACG for the PCNE exam; I think Qwiklabs has a better lab environment but ACG has a better video series. Either way, I've not taken the exam yet but have scheduled it for later this month and am trying to gauge the level of difficulty.
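    For anyone gauging the hands-on side of PCNE prep, here is a minimal gcloud sketch of the sort of VPC work the material covers: a custom-mode network, a subnet, and a basic firewall rule. The names, region, and CIDR ranges are placeholder assumptions.

    ```bash
    # Custom-mode VPC with one subnet and a basic firewall rule.
    # Network name, subnet range, and region are placeholders.
    gcloud compute networks create pcne-vpc --subnet-mode=custom

    gcloud compute networks subnets create pcne-subnet-us \
      --network=pcne-vpc \
      --region=us-central1 \
      --range=10.10.0.0/24

    # Allow SSH only from the IAP TCP forwarding range.
    gcloud compute firewall-rules create pcne-allow-ssh \
      --network=pcne-vpc \
      --allow=tcp:22 \
      --source-ranges=35.235.240.0/20
    ```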

  • GCP Professional Cloud Architect exam
    by /u/meokey (Google Cloud Platform Certification) on March 11, 2022 at 9:43 pm

    I'm working through the PCA courses and wondering what the exam is like. Is there a hands-on lab component in the exam? Do I have to memorize all the command-line tools and their arguments to pass? Thanks.

  • Which video course?
    by /u/Bollox427 (Google Cloud Platform Certification) on March 8, 2022 at 8:40 pm

    I would like to learn the fundamentals of GCP and then move on to Security and ML. I know Coursera offers courses, but is there anyone else of note? How do other course providers compare to Coursera? Is Coursera seen as an official education partner for Google Cloud? submitted by /u/Bollox427 [link] [comments]

  • Women Techmakers journey to Google Cloud certification
    by (Training & Certifications) on March 8, 2022 at 5:00 pm

    In many places across the globe, March is celebrated as Women’s History Month, and March 8th, specifically, marks the day known around the world as International Women’s Day. Here at Google, we’re excited to celebrate women from all backgrounds and are committed to increasing the number of women in the technology industry. Google’s Women Techmakers community provides visibility, community, and resources for women in technology to drive participation and innovation in the field. This is achieved by hosting events, launching resources, and piloting new initiatives with communities and partners globally. By joining Women Techmakers, you'll receive regular emails with access to resources, tools and opportunities from Google and Women Techmakers partnerships to support you in your career.Google Cloud, in partnership with Women Techmakers, has created an opportunity to bridge the gaps in the credentialing space by offering a certification journey for Ambassadors of the Women Techmakers community. Participants will have the opportunity to take part in a free-of-charge, 6-week cohort learning journey, including: weekly 90-minute exam guide review sessions led by a technical mentor, peer-to-peer support in the form of an Online Community, and 12 months access to Google Cloud's on-demand learning platform, Google Cloud Skills Boost. Upon completion of the coursework required in the learning journey, participants will receive a voucher for the Associate Cloud Engineer certification exam. This program, and other similar offerings such as Cloud Career Jumpstart, and the learning journey for members transitioning out of the military, are just a few examples of the investment Google Cloud is making into the future of the technology workforce. Are you interested in staying in the loop with future opportunities with Google Cloud? Join our community here.Related ArticleCloud Career Jump Start: our virtual certification readiness programCloud Career Jump Start is Google Cloud’s first virtual Certification Journey Learning program for underrepresented communities.Read Article

  • Study path for GCP Professional Cloud Architect
    by /u/Prime367 (Google Cloud Platform Certification) on March 7, 2022 at 4:50 pm

    Hi folks, thanks for your time. I have been working as an AWS architect for 4-5 years and have several AWS certifications, including the Solutions Architect Professional. I have been supporting a GCP implementation for the past year or so, and now want to go for the GCP Cloud Architect certification. I need some help with two things: which courses are best for the GCP Cloud Architect exam, and which practice tests should I do? I know it's difficult to clear certifications without doing any practice tests. Thanks in advance. submitted by /u/Prime367 [link] [comments]

  • Which certification should I do?
    by /u/ParticularFactor353 (Google Cloud Platform Certification) on March 7, 2022 at 4:34 pm

    Background: I am a fresher who just joined a company and was placed in the ETL domain. I have been working on BigQuery scripts, Composer, and Dataflow for the past 6 months. Now I want to do some GCP certification, so where should I begin? submitted by /u/ParticularFactor353 [link] [comments]

  • AWS & Azure Certified, how to start on GCP ACE? (Advice requested)
    by /u/skelldog (Google Cloud Platform Certification) on March 6, 2022 at 5:34 am

    Sorry, I know some of this has been discussed, but as things change regularly, I would appreciate any suggestions people are willing to share. I currently hold the three Associate certs from AWS and the Azure Administrator Associate, and I have been in IT for longer than I care to admit. I was thinking of bypassing Cloud Digital Leader and going directly to the ACE. Between work and other options, I have access to most of the popular training programs (ITPro, A Cloud Guru, Lynda, Qwiklabs, Whizlabs, Udemy). I see the most recommendations for the Udemy course by Dan Sullivan; is this my best choice? My time is always limited, and I would like to pick the course that gives the most bang for the buck (or time, in this case). I already purchased the Tutorials Dojo practice test the last time they had a sale (Jon Bonso does some great work!). I would appreciate any other suggestions anyone is willing to offer. Thanks for reading this! submitted by /u/skelldog [link] [comments]

  • Cloud Digital Leader exam vouchers
    by /u/pillairohit (Google Cloud Platform Certification) on March 3, 2022 at 5:39 pm

    Hi all. Does GCP have online webinars/trainings that give attendees exam vouchers, similar to the Microsoft Azure online webinars for AZ-900? I'm asking about the Cloud Digital Leader certification exam. Thank you for your help and time. submitted by /u/pillairohit [link] [comments]

  • General availability: Asset certification in Azure Purview data catalog
    by Azure service updates on February 28, 2022 at 5:00 pm

    Data stewards can now certify assets that meet their organization's quality standards in the Azure Purview data catalog.

  • GCP Associate Cloud Engineer Study Guide
    by /u/ravikirans (Google Cloud Platform Certification) on February 21, 2022 at 12:08 pm

    https://ravikirans.com/gcp-associate-cloud-engineer-exam-study-guide/ To view all the other GCP study guides, check here: https://ravikirans.com/category/gcp/ submitted by /u/ravikirans [link] [comments]

  • Sentinel Installation
    by /u/ribcap (Google Cloud Platform Certification) on February 20, 2022 at 7:30 pm

    Hey everyone! I'm in the process of scheduling an exam and have created my biometric profile, but I can't seem to install Sentinel. Has anyone else had this issue? I've tried Chrome, Firefox, and even Safari. I click on the install link and literally nothing happens; nothing is downloaded at all. Any ideas? Edit: I have not actually scheduled the exam yet; I'm just trying to get everything else in place first. Should I schedule the exam prior to installing Sentinel? Rib submitted by /u/ribcap [link] [comments]

  • GCP exam fee reimbursement
    by /u/Aamirmir111 (Google Cloud Platform Certification) on February 17, 2022 at 2:15 pm

    If one clears a GCP certification exam, is there any policy for fee reimbursement? submitted by /u/Aamirmir111 [link] [comments]

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 16, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Google Cloud Fundamentals Full Course For Beginners Only 2022 | GCP Certified
    by /u/ClayDesk (Google Cloud Platform Certification) on February 14, 2022 at 12:30 pm

    submitted by /u/ClayDesk [link] [comments]

  • Google Cloud Platform Service Comparison
    by /u/lervz_ (Google Cloud Platform Certification) on February 12, 2022 at 3:35 pm

    To anyone who has an AWS/Azure background and is new to Google Cloud Platform: you will find this service comparison made by Google very helpful: AWS, Azure, GCP Service Comparison. And for those who are preparing for the Google Associate Cloud Engineer certification exam, check these resources from Tutorials Dojo: Google Certified Associate Cloud Engineer Practice Exams, Google Certified Associate Cloud Engineer Study Guide, and Google Cloud Platform Cheat Sheets. submitted by /u/lervz_ [link] [comments]

  • Unified data and ML: 5 ways to use BigQuery and Vertex AI together
    by (Training & Certifications) on February 9, 2022 at 4:00 pm

    Are you storing your data in BigQuery and interested in using that data to train and deploy models? Or maybe you’re already building ML workflows in Vertex AI, but looking to do more complex analysis of your model’s predictions? In this post, we’ll show you five integrations between Vertex AI and BigQuery, so you can store and ingest your data; build, train and deploy your ML models; and manage models at scale with built-in MLOps, all within one platform. Let’s get started!

April 2022 update: You can now register and manage BigQuery ML models with Vertex AI Model Registry, a central repository to manage and govern the lifecycle of your ML models. This enables you to easily deploy your BigQuery ML models to Vertex AI for real-time predictions. Learn more in this video about “ML Ops in BigQuery using Vertex AI.”

Import BigQuery data into Vertex AI

If you’re using Google Cloud, chances are you have some data stored in BigQuery. When you’re ready to use this data to train a machine learning model, you can upload your BigQuery data directly into Vertex AI with a few steps in the console. You can also do this with the Vertex AI SDK:

```python
from google.cloud import aiplatform

dataset = aiplatform.TabularDataset.create(
    display_name="my-tabular-dataset",
    bq_source="bq://project.dataset.table_name",
)
```

Notice that you didn’t need to export your BigQuery data and re-import it into Vertex AI. Thanks to this integration, you can seamlessly connect your BigQuery data to Vertex AI without moving your data from the cloud.

Access BigQuery public datasets

This dataset integration between Vertex AI and BigQuery means that in addition to connecting your company’s own BigQuery datasets to Vertex AI, you can also utilize the 200+ publicly available datasets in BigQuery to train your own ML models. BigQuery’s public datasets cover a range of topics, including geographic, census, weather, sports, programming, healthcare, news, and more. You can use this data on its own to experiment with training models in Vertex AI, or to augment your existing data. For example, maybe you’re building a demand forecasting model and find that weather impacts demand for your product; you can join BigQuery’s public weather dataset with your organization’s sales data to train your forecasting model in Vertex AI. The original post walks through an example of importing the public weather data from last year to train a weather forecasting model.

Accessing BigQuery data from Vertex AI Workbench notebooks

Data scientists often work in a notebook environment to do exploratory data analysis, create visualizations, and perform feature engineering. Within a managed Workbench notebook instance in Vertex AI, you can directly access your BigQuery data with a SQL query, or download it as a Pandas DataFrame for analysis in Python. For example, you can run a SQL query on a public London bikeshare dataset, then download the results of that query as a Pandas DataFrame to use in your notebook.
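The original post showed this Workbench pattern as a notebook screenshot; here is a minimal sketch of the same idea, assuming the google-cloud-bigquery client library with pandas support is installed (e.g. `pip install "google-cloud-bigquery[pandas]"`) and application default credentials are available. The table and column names come from BigQuery's public London bikeshare dataset.

```python
from google.cloud import bigquery

# Create a client using application default credentials
# (available automatically inside a Vertex AI Workbench notebook).
client = bigquery.Client()

# Query the public London bikeshare dataset and pull the result set
# into a Pandas DataFrame for local analysis in the notebook.
query = """
    SELECT start_station_name, COUNT(*) AS num_trips
    FROM `bigquery-public-data.london_bicycles.cycle_hire`
    GROUP BY start_station_name
    ORDER BY num_trips DESC
    LIMIT 10
"""
df = client.query(query).to_dataframe()
print(df.head())
```

The same to_dataframe() call works for your own datasets; just swap your project's table into the FROM clause.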
Analyze test prediction data in BigQuery

That covers how to use BigQuery data for training models in Vertex AI. Next, we’ll look at integrations between Vertex AI and BigQuery for exporting model predictions. When you train a model in Vertex AI using AutoML, Vertex AI will split your data into training, test, and validation sets, and evaluate how your model performs on the test data. You also have the option to export your model’s test predictions to BigQuery so you can analyze them in more detail. Then, when training completes, you can examine your test data and run queries on test predictions. This can help determine areas where your model didn’t perform as well, so you can take steps to improve your data next time you train your model.

Export Vertex AI batch prediction results

When you have a trained model that you’re ready to use in production, there are a few options for getting predictions on that model with Vertex AI:
- Deploy your model to an endpoint for online prediction
- Export your model assets for on-device prediction
- Run a batch prediction job on your model

For cases in which you have a large number of examples you’d like to send to your model for prediction, and in which latency is less of a concern, batch prediction is a great choice. When creating a batch prediction in Vertex AI, you can specify a BigQuery table as the source and destination for your prediction job: this means you’ll have one BigQuery table with the input data you want to get predictions on, and Vertex AI will write the results of your predictions to a separate BigQuery table. A minimal SDK sketch of this flow appears after the resource list at the end of this post.

With these integrations, you can access BigQuery data, and build and train models. From there, Vertex AI helps you:
- Take these models into production
- Automate the repeatability of your model with managed pipelines
- Manage your models’ performance and reliability over time
- Track lineage and artifacts of your models for easy-to-manage governance
- Apply explainability to evaluate feature attributions

What’s Next?

Ready to start using your BigQuery data for model training and prediction in Vertex AI? Check out these resources:
- Codelab: Training an AutoML model in Vertex AI
- Codelab: Intro to Vertex AI Workbench
- Documentation: Vertex AI batch predictions
- Video Series: AI Simplified: Vertex AI
- GitHub: Example Notebooks
- Training: Vertex AI: Qwik Start

Are there other BigQuery and Vertex AI integrations you’d like to see? Let Sara know on Twitter at @SRobTweets.

Related Article: What is Vertex AI? Developer advocates share more. Developer Advocates Priyanka Vergadia and Sara Robinson explain how Vertex AI supports your entire ML workflow—from data management all t...Read Article
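As noted above, here is a minimal sketch of a BigQuery-to-BigQuery batch prediction job using the Vertex AI SDK. The project, region, model ID, and table names below are hypothetical placeholders, and the exact parameters should be checked against the Vertex AI batch predictions documentation listed in the resources.

```python
from google.cloud import aiplatform

# Hypothetical project, region, model ID, and table names for illustration only.
aiplatform.init(project="my-project", location="us-central1")

# Load a previously trained model by its resource name (or numeric model ID).
model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)

# Read input rows from one BigQuery table and write predictions to a new table
# that Vertex AI creates under the given destination dataset.
batch_job = model.batch_predict(
    job_display_name="demand-forecast-batch",
    instances_format="bigquery",
    predictions_format="bigquery",
    bigquery_source="bq://my-project.sales_data.prediction_input",
    bigquery_destination_prefix="bq://my-project.sales_data",
    sync=True,  # block until the job finishes
)

print(batch_job.state)  # e.g. JobState.JOB_STATE_SUCCEEDED
```

Because both the source and destination are BigQuery tables, the prediction results can be queried and joined against the input data without any export step.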

  • Course, videos, or link to earn the GCP Cloud Engineer Associate
    by /u/ahelord (Google Cloud Platform Certification) on February 5, 2022 at 3:26 am

    Hi, I'd like to ask what the best course, videos, or site is for learning GCP and passing the Associate certification. submitted by /u/ahelord [link] [comments]

  • Access role-based Google Cloud training free of charge
    by (Training & Certifications) on February 3, 2022 at 5:00 pm

    Google Cloud is now offering 30 days no-cost access to Google Cloud Skills Boost, the definitive destination for skills development, to complete role-based training. Choose from the following eight learning paths, which include interactive labs and opportunities to earn skill badges to demonstrate your cloud knowledge: Getting Started with Google Cloud, Cloud Architect, Cloud Engineer, Data Analyst, Data Engineer, DevOps Engineer, Machine Learning Engineer and Cloud Developer learning path. Read below to find out more about each learning path. Getting Started with Google CloudIn this path, you’ll learn about Google Cloud fundamentals such as core infrastructure, big data and machine learning (ML). You’ll also find out how to write gcloud commands, use Cloud Shell, deploy virtual machines, and run containerized applications on Google Kubernetes Engine (GKE).Cloud ArchitectIf you’re looking to learn how to design, develop, and manage cloud solutions, this is the path for you. You’ll learn how to perform infrastructure tasks like using Cloud Monitoring, Cloud Identity and Access Management (Cloud IAM), and more. The path will end with how to architect with Google Compute Engine and GKE. For a guided walkthrough of how to get started with Cloud IAM and Monitoring, register here to join me on February 10. You’ll also have a chance to get your questions answered live by Google Cloud experts via chat. Cloud EngineerTo learn how to plan, configure, set up, and deploy cloud solutions, take this learning path. You’ll learn how to get started with Google Compute Engine, Terraform in a cloud environment, GKE, and more. Data AnalystThis learning path will teach you how to gather and analyze data to identify trends and develop valuable insights to help solve problems. You’ll be introduced to BigQuery, Looker, LookML, BigQuery ML, and Data Catalog. Data EngineerInterested in designing and building systems that collect the data used for business decisions? Select this path. You’ll learn how to modernize data lakes and data warehouses with Google Cloud. Afterwards, you will also discover how to use Dataflow for serverless data processing and more. DevOps EngineerA DevOps Engineer is responsible for defining and implementing best practices for efficient and reliable software delivery and infrastructure management. This learning path will show you how to build an SRE culture, use Google Cloud Operations Suite for DevOps, and more. Machine Learning EngineerChoose this path for courses and labs on how to design, build, productionize, optimize, operate, and maintain ML systems. You’ll discover how to use TensorFlow, MLOps tools, VertexAI, and more. Cloud DeveloperA Cloud Developer designs, builds, analyzes, and maintains cloud-native applications. This path will teach you how to use Cloud Run and Firebase for serverless app development. You’ll also learn how to deploy to Kubernetes in Google Cloud. To learn more about the basics of Google Cloud infrastructure before getting started with a learning path, register here. Ready for your role-based training? Sign up here.Related Article2022 Resolution: Learn Google Cloud, free of chargeTechnical practitioners and developers can start 2022 with free introductory training on how to use Google Cloud.Read Article

  • General availability: Azure Database for PostgreSQL – Hyperscale (Citus) new certifications
    by Azure service updates on February 2, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Does anyone have GCP exam vouchers? Or does anyone know where we can get them?
    by /u/Aamirmir111 (Google Cloud Platform Certification) on February 1, 2022 at 11:36 am

    submitted by /u/Aamirmir111 [link] [comments]

  • Let’s have a chat about using dumps
    by /u/whooyeah (Microsoft Azure Certifications) on January 31, 2022 at 9:49 pm

    This keeps coming up recently, so it's important we have a sticky chat about it that everyone can see. Dumps are essentially cheating. They go against what the exams were designed to do: teach you Azure skills. For this reason, they are also against Microsoft's terms of service for taking the exam. It's annoying as a professional, because you will be in a job interview and hear the hiring manager say things like "MCP exams are worthless because everyone just uses dumps", which is heartbreaking when you have spent so much time studying the subject knowledge and validating your skills with the exam. As a hiring manager, it is annoying because I've interviewed candidates in the past with an MCSD, and it was clear they had no usable knowledge because they cheated with dumps. You will notice rule 1 in the sidebar. Breaking it will result in a ban. submitted by /u/whooyeah [link] [comments]

  • This year, resolve to become a certified Professional Cloud Developer – here’s how
    by (Training & Certifications) on January 28, 2022 at 5:00 pm

    Do you have a New Year’s resolution to improve your career prospects? Sign up here for 30 days no-cost access to Google Cloud Skills Boost to help you on your way to becoming a certified Professional Cloud Developer. According to third-party IT training firm Global Knowledge, two Google Cloud Certified Professional certifications topped its list of the highest-paid IT certifications in 2021. Once you register, you’ll have an opportunity to take the Cloud Developer learning path, which consists of on-demand labs and courses, coveringGoogle Cloud infrastructure fundamentals, application development in the cloud, security, monitoring and troubleshooting, Kubernetes, Cloud Run, Firebase and more. Along the way, you’ll have an opportunity to earn skill badges to demonstrate your cloud knowledge and access resources to help you prepare for the Professional Cloud Developer certification.Click to enlargeFor example, once you’ve completed the Google Cloud Fundamentals, Core Infrastructure course, in person or on-demand, you can take the Getting Started With Application Development course, where you’ll learn how to design and develop cloud-native applications that integrate managed services from Google Cloud, including Cloud Client Libraries, the Cloud SDK, and Firebase SDKs, an overview of your storage options, and best practices for using Datastore and Cloud Storage.We’re also thrilled to announce that one of the most popular trainings in the Cloud Developer path, Application Development with Cloud Run, is now available on-demand, in addition to via live instruction. This is a great chance to get up to speed on this fully-managed, serverless compute platform at your own pace. Cloud Run marries the goodness of serverless and containers, and is fast becoming one of the most powerful ways to build and run a true cloud-native application. Moving down the proposed learning path, you can show off your Google Cloud chops with skill badges that you can display as part of your Google Developer Profile alongside your membership in the Google Cloud Innovators program, on social media, and on your resumé. There are a wide variety of interesting skills badge for cloud developers like the Serverless Cloud Run Development Quest, or Deploy to Kubernetes in Google Cloud, and many of them take just a couple of hours to complete.With these classes under your belt and Skills Badges on your profile, you’ll be in a good place to start preparing for the Professional Cloud Developer certification exam, using the proposed exam guide and sample questions to show the way. Here’s to earning your certification in 2022, and to a great future!Related Article2022 Resolution: Learn Google Cloud, free of chargeTechnical practitioners and developers can start 2022 with free introductory training on how to use Google Cloud.Read Article

  • Has anybody taken the Network Professional certification after Jan 5, 2022?
    by /u/yasarfa (Google Cloud Platform Certification) on January 20, 2022 at 11:15 pm

    submitted by /u/yasarfa [link] [comments]

  • GCDL Other practice materials/exams?
    by /u/zoochadookdook (Google Cloud Platform Certification) on January 19, 2022 at 11:41 pm

    Hey all - I'm taking my Cloud Digital Leader exam this Saturday. After watching Google's woefully lacking YouTube series, I've gone through the ExamPro course and will be doing its practice exams tomorrow - but I'm wondering if there's any other decent material out there someone could recommend? It seems like the Cloud Digital Leader doesn't get nearly the exposure most of the other certs do, but I'm hoping someone has a prep guide that has helped them. Thanks a ton! submitted by /u/zoochadookdook [link] [comments]

  • Generally available: Azure Database for PostgreSQL – Hyperscale (Citus): New certifications
    by Azure service updates on January 19, 2022 at 5:00 pm

    New compliance certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • GCP ACE last minute tips
    by /u/Pyro1934 (Google Cloud Platform Certification) on January 19, 2022 at 4:27 am

    submitted by /u/Pyro1934 [link] [comments]

  • Technical Training Made Easy and Accessible, the Google Cloud way
    by (Training & Certifications) on January 14, 2022 at 12:40 pm

    Cloud engineers face a constant barrage of new cloud services, products, and innovations. By late 2021, Google Cloud alone had released thousands of new features across hundreds of services. Couple this with other technologies and service releases, and it quickly becomes a herculean task for engineers to navigate, consume, and stay current on the ever changing technology landscape. We have heard from engineers this often leads to anxiety and frustration as engineers struggle to keep up. They are faced with a plethora of training options but often lack the time and funding. Google Cloud has reinvigorated technical training to make it more informative and applicable to public sector customers and partners. We aim to maximize your training experience so you can get targeted training when you need it. The Google Cloud Public Sector Technical Learning Series addresses customer feedback and provides fun and practical training. Sessions are currently running every two weeks. “Short and sweet” technical topics geared to subjects you care aboutGeneric training doesn't always resonate with public sector technologists. Our new curriculum targets specific public sector use cases, is delivered by customer engineers, and can be accomplished in less than two hours.  This means participants can apply the learnings directly to real-life challenges quickly. Easy to find, easy to enroll Training opportunities should always be at your fingertips. Our automated training platform will ensure that you only need to enroll once. The system will automatically notify you of upcoming sessions so you can plan in advance and at your convenience. Sessions will be offered on a recurring basis to meet the needs of your organization.Fun and engagingTypical training sessions often include a sea of glazed eyes, unresponsive to basic prompts, falling asleep at our desks, we have all been there. But it doesn't have to be this way. Our goal is to infuse Google culture into our training through interactive exchanges and tangible rewards to keep participants inspired and engaged.Traditional technology training doesn’t always help you navigate the nuts and bolts of how to effectively introduce a product into an organization. But we know that technology doesn’t operate in isolation; it supports and becomes part of a living organism, managed by humans and confined by other components of an organization’s structure (e.g. existing systems or decentralized business units). Part of a larger community of like-minded engineersLearning with - and from - a community of peers is one way to overcome the challenges and complexities of applying new technology within a complex organization. We created the Public Sector Connect community for this very reason. It is one example of how we surface best practices for public sector innovators. During weekly “Coffee Hours” and working sessions, our community members share their journey and lessons learned with each other. We know that innovation evolves through iteration and diverse perspectives, and Public Sector Connect is committed to helping surface critical challenges and solutions, and connecting those who are solving similar problems. Join the community today.

  • 2022 Resolution: Learn Google Cloud, free of charge
    by (Training & Certifications) on January 12, 2022 at 5:00 pm

    Start your 2022 New Year’s resolutions by learning at no cost how to use Google Cloud with the following training opportunities:30 day access to Google Cloud Skills Boost Register by January 31, 2022 and claim 30 days free access to Google Cloud Skills Boost to complete the Getting Started with Google Cloud learning path. Google Cloud Skills Boost is the definitive destination for skills development where you can personalize learning paths, track progress, and validate your newly-earned expertise with skill badges. The Getting Started with Google Cloud learning path will give you the opportunity to earn three skill badges after you complete hands-on labs and courses designed for aspiring cloud engineers and architects. It covers the fundamentals of Google Cloud including core infrastructure, big data and ML, writing gcloud commands, using Cloud Shell, deploying virtual machines, and running containerized applications on GKE.Cloud OnBoard: half day training on getting started with Google Cloud fundamentalsAttend the Getting Started Cloud OnBoard on January 20 for a comprehensive Google Cloud orientation. Google Cloud experts will show you how to execute your compute, available storage options, how to secure your data, and available Google Cloud managed services. Cloud Study Jam: expert-guided hands-on labGoogle Cloud experts will walk you through a hands-on lab included in Google Cloud Skill Boost’s Getting Started with Google Cloud learning path when you join our Cloud Study Jam on January 27. Google Cloud experts will also answer questions live via chat during this event.Related ArticleBuild your data analytics skills with the latest no cost BigQuery trainingsTo help you make the most of BigQuery, we’re offering no cost, on-demand training opportunitiesRead Article

  • Google Cloud doubles-down on ecosystem in 2022 to meet customer demand
    by (Training & Certifications) on January 11, 2022 at 3:00 pm

    Google Cloud has been a partner-focused business from day one. As we reflect on 2021 and look forward to what’s ahead, I want to say “thank you” to our ecosystem for all of the amazing innovations and services you provided our mutual customers over the last year. In 2021, we faced unprecedented demand from businesses as they turned to the cloud to digitally transform their organizations. This surge in cloud deployments meant we increasingly turned to our ecosystem to help customers create customized implementations with our systems integrators (SIs), build packaged solutions with our independent software vendors (ISVs), or coach employees how to best use new cloud technologies with our consulting and training firms.To continue meeting growing customer demand in 2022 and beyond, I am pleased to share that we are bringing together our ecosystem and channel sales teams into a single partner organization to bring a more streamlined go-to-market approach for our partners and customers. In support of this change, we plan to more than double our spend in support of our partner ecosystem over the next few years, including rolling out increased co-innovation resources for partners, more incentives and co-marketing funds, and a larger commitment to training and enablement—all with a goal of continuing our joint momentum in the market.Providing leads and new go-to-market programs for consulting partnersThe need for highly-skilled partners to accelerate digital transformation for customers has never been greater, and our ecosystem of services partners continues to gain tremendous opportunities to deliver high-value implementation and professional services, industry solutions, and digital transformation expertise. In 2022, we are investing in our SIs by:Moving to a partner-led, partner-delivered approach for professional services needed by our customers, particularly through expanded work with partners. This will include new programs for lead generation and lead sharing with our SI partners.Increasing our investment with SIs in deploying go-to-market programs for industry-specific SI solutions, as well as creating more pre-integrated industry ISV and Google Cloud AI solutions together with our SI partners.Accelerating critical training, specialization, and certification programs in support of our goal of training 40 million new people on Google Cloud. This includes new programs for experienced practitioners, and a hybrid learning modality that combines online and in-person learning supported by Google mentors. Accelerating growth for ISV partners with more resourcesIn 2021, our ISV partners helped build unique integrations with Google Cloud capabilities in AI, ML, data, analytics, and security for our mutual customers. In fact, our marketplace third-party transaction value was up more than 500% YoY from 2020 (Q1-Q3). In 2022, we are deepening our commitment to our ISV partners’ success by:Making significant investments in new Google Cloud Marketplace functionality, including adding new technical resources that will help accelerate how ISVs distribute their apps and solutions. Coupled with this, we’re also lowering the Marketplace rate to 3% for eligible solutions, helping drive more adoption with customers. 
Expanding our regional sales and technical teams who are dedicated to supporting ISVs, and at the same time increasing market development funds (MDF) to drive further sales growth for our ISVs.Dedicating additional technical resources to help ISVs move to more modern SaaS delivery models, as well as to optimize and supercharge their apps for their customers by leveraging Google Cloud technologies.Creating new monetization models for ISVs using Google Distributed Cloud to deliver products across hybrid environments, multiple clouds, and at the network edge. ISVs will be able to build industry-specific 5G and edge solutions leveraging our ecosystem of telecommunication providers and 140+ Google network edge locations.Increasing funds for ISVs to accelerate customer cloud migrations by offsetting infrastructure costs during migration (ISV Cloud Acceleration Program).Launching new program incentives to drive a thriving channelSince the launch of our Partner Advantage program, we have increased funds for our channel partners tenfold. In 2021, to extend this momentum, we expanded our incentive portfolio for resellers to support their long-term growth and profitability. In 2022, we are increasing our investment in partner programs even further, including:Significantly expanding incentives to reward partners who source and grow customer engagements, and for those who deliver exceptional customer experiences and critical implementation services.Evolving to industry-standard compensation plans for our direct sellers, and rewarding our channel partners for implementation (vs. reselling) for larger enterprise customers.Significantly increasing co-marketing funding for our channel partners to accelerate demand generation and time-to-close.Growing our learning resources, including launching more than 10 new Expertises and Specializations, and expanding our certification programs for partners to deliver the highest levels of Google Cloud expertise to customers.Launching a new program for resellers to support customers via offerings on the Google Cloud Marketplace.Sharing a toolkit to bring the best of Google’s diversity, equity, and inclusion (DEI) resources to our ecosystem of partners, including programs to develop inclusive marketing strategies and deploy DEI training within their own organizations.As we kick off 2022, it’s clear that the trend of digital transformation will only continue to drive customer demand for the cloud and, more importantly, a need for services, support, and solutions from our partners. We believe that by centralizing our partner groups into a single organization and by more than doubling our spend in support of our partner ecosystem over the next few years, we will help accelerate our joint momentum in the market around the world. For more information on these new programs and resources, please reach out to your Partner Account Manager or login to your Partner Advantage portal at partneradvantage.goog.

  • Are you a multicloud engineer yet? The case for building skills on more than one cloud
    by (Training & Certifications) on January 7, 2022 at 5:00 pm

    Over the past few months, I made the choice to move from the AWS ecosystem to Google Cloud — both great clouds! — and I think it’s made me a stronger, more well-rounded technologist.But I’m just one data point in a big trend. Multicloud is an inevitability in medium-to-large organizations at this point, as I and others have been saying for awhile now. As IT footprints get more complex, you should expect to see a broader range of cloud provider requirements showing up where you work and interview. Ready or not, multicloud is happening.In fact, Hashicorp’s recent State of Cloud Strategy Survey found 76% of employers are already using multiple clouds in some fashion, with more than 50% flagging lack of skills among their employees as a top challenge to survival in the cloud.That spells opportunity for you as an engineer. But with limited time and bandwidth, where do you place your bets to ensure that you’re staying competitive in this ever-cloudier world?You could pick one cloud to get good at and stick with it; that’s a perfectly valid career bet. (And if you do bet your career on one cloud, you should totally pick Google Cloud! I have reasons!) But in this post I’m arguing that expanding your scope of professional fluency to at least two of the three major US cloud providers (Google Cloud, AWS, Microsoft Azure) opens up some unique, future-optimized career options.What do I mean by ‘multicloud fluency’? For the sake of this discussion, I’m defining “multicloud fluency” as a level of familiarity with each cloud that would enable you to, say, pass the flagship professional-level certification offered by that cloud provider–for example, Google Cloud’s Professional Cloud Architect certification or AWS’s Certified Solutions Architect Professional. Notably, I am not saying that multicloud fluency implies experience maintaining production workloads on more than one cloud, and I’ll clarify why in a minute.How does multicloud fluency make you a better cloud engineer?I asked the cloud community on Twitter to give me some examples of how knowledge of multiple clouds has helped their careers, and dozens of engineers responded with a great discussion.Turns out that even if you never incorporate services from multiple clouds in the same project — and many people don’t! — there’s still value in understanding how the other cloud lives.Learning the lingua franca of cloudI like this framing of the different cloud providers as “Romance languages” — as with human languages in the same family tree, clouds share many of the same conceptual building blocks. Adults learn primarily by analogy to things we’ve already encountered. Just as learning one programming language makes it easier to learn more, learning one cloud reduces your ramp-up time on others.More than just helping you absorb new information faster, understanding the strengths and tradeoffs of different cloud providers can help you make the best choice of services and architectures for new projects. I actually remember struggling with this at times when I worked for a consulting shop that focused exclusively on AWS. A client would ask “What if we did this on Azure?” and I really didn’t have the context to be sure. But if you have a solid foundational understanding of the landscape across the major providers, you can feel confident — and inspire confidence! — in your technical choices.Becoming a unicornTo be clear, this level of awareness isn’t common among engineering talent. 
That’s why people with multicloud chops are often considered “unicorns'' in the hiring market. Want to stand out in 2022? Show that you’re conversant in more than just one cloud. At the very least, it expands the market for your skills to include companies that focus on each of the clouds you know.Taking that idea to its extreme, some of the biggest advocates for the value of a multicloud resumé are consultants, which makes sense given that they often work on different clouds depending on the client project of the week. Lynn Langit, an independent consultant and one of the cloud technologists I most respect, estimates that she spends about 40% of her consulting time on Google Cloud, 40% on AWS, and 20% on Azure. Fluency across providers lets her select the engagements that are most interesting to her and allows her to recommend the technology that provides the greatest value.But don’t get me wrong: multicloud skills can also be great for your career progression if you work on an in-house engineering team. As companies’ cloud posture becomes more complex, they need technical leaders and decision-makers who comprehend their full cloud footprint. Want to become a principal engineer or engineering manager at a mid-to-large-sized enterprise or growing startup? Those roles require an organization-wide understanding of your technology landscape, and that’s probably going to include services from more than one cloud. How to multicloud-ify your careerWe’ve established that some familiarity with multiple clouds expands your career options. But learning one cloud can seem daunting enough, especially if it’s not part of your current day job. How do you chart a multicloud career path that doesn’t end with you spreading yourself too thin to be effective at anything?Get good at the core conceptsYes, all the clouds are different. But they share many of the same basic approaches to IAM, virtual networking, high availability, and more. These are portable fundamentals that you can move between clouds as needed. If you’re new to cloud, an associate-level solutions architect certification will help you cover the basics. Make sure to do hands-on labs to help make the concepts real, though — we learn much more by doing than by reading.Go deep on your primary cloudFundamentals aside, it’s really important that you have a native level of fluency in one cloud provider. You may have the opportunity to pick up multicloud skills on the job, but to get a cloud engineering role you’re almost certainly going to need to show significant expertise on a specific cloud.Note: If you’re brand new to cloud and not sure which provider to start with, my biased (but informed) recommendation is to give Google Cloud a try. It has a free tier that won’t bill you until you give permission, and the nifty project structure makes it really easy to spin up and tear down different test environments.It’s worth noting that engineering teams specialize, too; everybody has loose ends, but they’ll often try to standardize on one cloud provider as much as they can. If you work on such a team, take advantage of the opportunity to get as much hands-on experience with their preferred cloud as possible.Go broad on your secondary cloudYou may have heard of the concept of T-shaped skills. A well-rounded developer is broadly familiar with a range of relevant technologies (the horizontal part of the “T”), and an expert in a deep, specific niche. You can think of your skills on your primary cloud provider as the deep part of your “T”. 
(Actually, let’s be real — even a single cloud has too many services for any one person to hold in their heads at an expert level. Your niche is likely to be a subset of your primary cloud’s services: say, security or data.)We could put this a different way: build on your primary cloud, get certified on your secondary. This gives you hirable expertise on your “native” cloud and situational awareness of the rest of the market. As opportunities come up to build on that secondary cloud, you’ll be ready.I should add that several people have emphasized to me that they sense diminishing returns when keeping up with more than one secondary cloud. At some point the cognitive switching gets overwhelming and the additional learning doesn’t add much value. Perhaps the sweet spot looks like this: 1< 2 > 3.Bet on cloud-native services and multicloud toolingThe whole point of building on the cloud is to take advantage of what the cloud does best — and usually that means leveraging powerful, native managed services like Spanner and Vertex AI. On the other hand, the cloud ecosystem has now matured to the point where fantastic, open-source multicloud management tooling for wrangling those provider-specific services is readily available. (Doing containers on cloud? Probably using Kubernetes! Looking for a DevOps role? The team is probably looking for Terraform expertise no matter what cloud they major on.) By investing learning time in some of these cross-cloud tools, you open even more doors to build interesting things with the team of your choice.Multicloud and youWhen I moved into the Google Cloud world after years of being an AWS Hero, I made sure to follow a new set of Google Cloud voices like Stephanie Wong and Richard Seroter. But I didn’t ghost my AWS-using friends, either! I’m a better technologist (and a better community member) when I keep up with both ecosystems. “But I can hardly keep up with the firehose of features and updates coming from Cloud A. How will I be able to add in Cloud B?” Accept that you can’t know everything. Nobody does. Use your broad knowledge of cloud fundamentals as an index, read the docs frequently for services that you use a lot, and keep your awareness of your secondary cloud fresh:Follow a few trusted voices who can help you filter the signal from the noiseAttend a virtual event once a quarter or so; it’s never been easier to access live learningBuild a weekend side project that puts your skills into practiceUltimately, you (not your team or their technology choices!) are responsible for the trajectory of your career. If this post has raised career questions that I can help answer, please feel free to hit me up on Twitter. Let’s continue the conversation.Related ArticleFive do’s and don’ts of multicloud, according to the expertsWe talked with experts about why to do multicloud, and how to do it right. Here is what we learned.Read Article

  • How to become a certified cloud professional
    by (Training & Certifications) on December 15, 2021 at 6:00 pm

    Achieving a certification is seen as a stamp of approval validating one's skills and expertise to perform a given job role. Google Cloud Certification program brings a framework to help equip organizations develop talent for the future. These certifications are not just about Google Cloud technologies. Just like the real-world, examinees are expected to know the vast array of technologies they may encounter in their day-to-day jobs. The question you might be asking yourself is: How do I become a certified cloud professional? First, let us share some tips with you on gaining hands-on experience with Google Cloud by introducing skill badges. Watch this video to learn more:The more skill badges you achieve, the stronger your readiness becomes.The next question you may be asking yourself is: should I go for the associate or the professional level exam?The associate level certification is focused on the fundamental skills of deploying, monitoring, and maintaining projects on Google Cloud. This certification is a good starting point for those new to cloud and can be used as a path to professional level certifications. Watch this video to learn about the Associate Cloud Engineer exam by Google Cloud.Professional certifications span key technical job functions and assess advanced skills in design, implementation, and management. These certifications are recommended for individuals with industry experience and familiarity with Google Cloud products and solutions.We’d recommend you start with reviewing the certification exam website and look for the descriptions of the role you think is most appropriate for you. The exam guide in particular is a helpful resource because it outlines the domains covered by the exam. As an example, check out the exam guide and the introduction video for the Professional Cloud Developer certification.Setting a goal of achieving a certification is a personal and professional milestone! As much as we wish all of you interested in Google Cloud certification best of luck in earning them, we have one final reminder: please study to learn, not just to pass. The learning mindset is what keeps the technology exploration journey interesting. Happy learning and send your questions our way on LinkedIn to Magda Jary and Priyanka Vergadia.

  • Azure Database for PostgreSQL – Hyperscale (Citus): New toolkit certifications generally available
    by Azure service updates on December 15, 2021 at 5:00 pm

    New Toolkit certifications are now available on Azure Database for PostgreSQL – Hyperscale (Citus), a managed service running the open-source Postgres database on Azure.

  • Machine learning, Google Kubernetes Engine, and more: 10 free training offers to take advantage of before 2022
    by (Training & Certifications) on December 13, 2021 at 5:00 pm

    We’re continuing to offer learning opportunities at no charge to help you grow your Google Cloud skills. Here are ten training offers you can take advantage of before the end of this year to keep building your knowledge of machine learning, Google Kubernetes Engine (GKE), and more. Register here by January 10, 2022 to receive 30 days no cost access to Google Cloud Skills Boost*. As the definitive destination for skills development, Google Cloud Skills Boost has 700 hands-on labs, role-based courses, skill badges, and certification resources. You’ll also be able to personalize learning paths, track progress, and validate your newly-earned expertise. For additional learning opportunities, check out our on-demand trainings below. Getting started with Google CloudTo learn about Google Cloud fundamentals,sign up for our introductory Cloud OnBoard. This comprehensive half-day training will take you through the ins and outs of some of Google Cloud's most impactful tools, how to maximize your VM instances, and the best ways to approach your container strategy.If you’ve just decided to add Google Cloud to your existing cloud skills from other providers like AWS or Azure, start your Google Cloud journey with this training. The training will show you to adapt your knowledge from other cloud providers and have a seamless transition. Machine learning and data analytics Discover how to use Vertex AI, Google Cloud’s new unified machine learning (ML) platform through two training opportunities.Sign up for the “Data Science on Google Cloud” training to learn how to analyze datasets, experiment with different modeling techniques, deploy trained models into production, and manage ML operations through the model lifecycle.Register here for an end-to-end demo on how to train and serve a custom TensorFlow model on Vertex AI. To find out how to use BigQuery data warehouse and Looker for data analytics, sign up here. You’ll be taught how to model, analyze, and visualize your data in less than 30 minutes. Kubernetes and serverless  New to Kubernetes or need a refresher?Take our "Getting Started with Kubernetes" training. You’ll have an opportunity to hear from Google Cloud experts like Kelsey Hightower, Bobby Allen, Kaslin Fields, and Maria Cruz as well access hands-on tutorials. Get hands-on experience with GKE by registering here. You’ll be taught how to manage workloads and clusters at scale so that you can optimize time and cost in this training. Register here to learn through demos and talks from Google Cloud executives and experts, how to use Autopilot, GKE’s new mode of operation, and see what’s in store for GKE in the future. Sign up for the "Power of Serverless" training to find how to run fast, error-free apps with Serverless App Acceleration. You’ll discover how to run internal apps, do real-time enterprise app data processing, and more on serverless. *To unlock your free 30 day access to Google Cloud Skills Boost, you have to first complete a lab. If you’re new to Google Cloud Skills Boost, you will need to create an account and then complete a lab to obtain your free access.Related ArticleBuild your data analytics skills with the latest no cost BigQuery trainingsTo help you make the most of BigQuery, we’re offering no cost, on-demand training opportunitiesRead Article

  • Join Cloud Learn to build your Google Cloud skills at no cost, regardless of experience level
    by (Training & Certifications) on December 1, 2021 at 5:00 pm

    We recently announced a new goal of equipping more than 40 million people with Google Cloud skills. To help achieve this goal, we’re hosting Cloud Learn from Dec. 8-9 (for those in Europe, the Middle East, or Africa, the event will be from Dec. 9-10 and for those in Japan, you can access the event here), a no-cost digital training event for developers, IT professionals, and data practitioners at all career levels. The interactive event will have live technical demos, Q&As, career development workshops, and more covering everything from Google Cloud fundamentals to certification prep. Here’s a more in-depth look at what to expect from Cloud Learn:Hear from Google Cloud executives and customers Thomas Kurian, Google Cloud’s CEO, and I will kick off the first day by discussing how you can uplevel your career. The second day will begin with technical leaders from Twitter, Lloyds Banking Group, and Ingka Group Digital speaking with John Jester, our vice president of customer experience, about the impact of Google Cloud training and certifications they’ve seen in their organizations. Afterwards, you can choose from role-based tracks and join the training sessions most relevant to you. Training for developers For developers, Kubernetes expert Kaslin Fields will be guiding you through the following trainings during the first day: Introduction to Building with Kubernetes, Create and Configure Google Kubernetes Engine (GKE) Clusters, Deploy and Scale in Kubernetes, and Securing GKE for Your Google Cloud Platform Access. Google customer engineers Murriel Perez McCabe and Jay Smith will discuss how to prepare for the Google Cloud Professional Cloud Developer and Professional Cloud DevOps Engineer certifications on the second day. Jay will also walk you through a live demo of how to build a serverless app that creates PDF files with Cloud Run. Carter Morgan, a Google Cloud developer advocate, will end the second day with a session on actionable strategies for managing imposter syndrome in tech. Learning opportunities for IT professionalsIT professionals will have the opportunity on day one to learn from Jasen Baker, a technical trainer, how to get started with Google Cloud. Jasen will walk you through how to execute compute, store, and secure your data as well deploy and monitor applications. On the second day, you can hear from Google Cloud Certified Fellow Konrad Clapa and Cori Peele, a Google Cloud customer engineer, about how to prepare for Google Cloud’s Associate Cloud Engineer and Professional Cloud Architect certifications.  Google Cloud experts will also take you through a live demo of how to create virtual machines that run different operating systems using the Google Cloud Console and the gcloud command line. Day two will conclude with a discussion from leadership consultant, Selena Rezvani, on how to negotiate for yourself at work, and speak up for what you want and need.Training sessions for data practitioners Lak Lakshmanan, Google Cloud’s analytics and AI solutions director, and product manager Leigha Jarett will show you how to use BigQuery, Cloud SQL, and Spark to dive into recommendation and prediction systems on the first day. They’ll also teach you how to use real time dashboards and derive insights using machine learning. Author Dan Sullivan and Google Cloud learning portfolio manager Doug Kelly will begin the second day with a discussion on how to earn Google Cloud’s Professional Data Engineer and Professional Machine Learning Engineer certifications. 
You’ll also learn how Google Cloud Video Intelligence makes videos searchable and discoverable by extracting metadata with an easy to use REST API through a live demo on day two. Cross cultural business speaker Jessica Chen will end the last day with actionable communication tips and techniques to lead in a virtual and hybrid world.Register here to save your virtual seat at Cloud Learn.Related ArticleTraining more than 40 million new people on Google Cloud skillsTo help more than 40 million people build cloud skills, Google Cloud is offering limited time no-cost access to all training contentRead Article

  • A learning journey for members transitioning out of the military
    by (Training & Certifications) on November 11, 2021 at 5:00 pm

    Each year, about 200,000 U.S. veterans transition out of military service. However, despite being well-equipped to work in the tech sector, many of these veterans are unable to identify a clear career path. In fact, a 2019 survey found that many veterans feel unprepared for the job market after their service and are unaware of how GI benefits can be used for learning and training. That's where Google Cloud skills training comes in. This August, 50 service members began a 12-week, Google Cloud-sponsored Certification Learning Journey toward achieving the Google Cloud Associate Cloud Engineer certification. Participants had access to online, on-demand learning assets, bi-weekly technical review sessions taught by Google engineers, and mentoring through Google Groups. Upon completion of the online training, participants will now attempt to pass the Google Cloud Associate Cloud Engineer certification exam. A passing grade will grant these military members a new cloud certification, which is a great way to demonstrate cloud skills to the larger IT market. Getting certified can open up internships and job opportunities and help career progression for our veterans.

    Why get Google Cloud certified? Cloud computing is one of the fastest-growing areas in IT. As cloud adoption grows rapidly, so do the ways that cloud technologies can solve key business problems in the real world. Cloud certifications are a great way to demonstrate technical skills to the broader market beyond the military. Cloud skills are also in demand. More than 90% of IT leaders say they're looking to grow their cloud environments in the next several years, yet more than 80% of those same leaders identified a lack of skills and knowledge within their employees as a barrier to this growth. Unfortunately, according to a 2020 survey report, the IT talent shortage continues to be a leading corporate concern, with 86% of respondents believing it will continue to slow down cloud projects. A shrinking pool of qualified candidates poses a top business risk for global executives as they struggle to find and retain talent to meet their strategic objectives.

    Want to learn more? As we wrap up this first Certification Learning Journey for Service Members, plans are underway to expand to new cohorts in the coming months. The classes are completely virtual, and all training is on-demand so that participants can access their coursework anytime, anywhere via the web or a mobile device. To determine whether you (or someone you know) would be a great fit for this certification journey:

      • Watch the Certification Prep: Associate Cloud Engineer webinar
      • Complete the Skill IQ Assessment via Pluralsight
      • Review the Associate Cloud Engineer Certification Exam Guide
      • Take the Associate Cloud Engineer sample questions

    Who's eligible? U.S. military members transitioning out of service and veterans with CS/CIS-related education or relevant work experience (IT, cybersecurity, networking, security, information systems) are eligible for the program. Although the ability to code is not required, familiarity with the following IT concepts is highly recommended: virtual machines, operating systems, storage and file systems, networking, databases, programming, and working with Linux at the command line.

    At Google Cloud, we are committed to creating training and certification opportunities for transitioning service members, veterans, and military spouses to help them thrive in a cloud-first world. Stay tuned for updates early next year!

    Related article: Cloud Career Jump Start, our virtual certification readiness program. Cloud Career Jump Start is Google Cloud's first virtual Certification Journey Learning program for underrepresented communities.

  • Azure VMware Solution achieves FedRAMP High Authorization
    by Azure service updates on September 15, 2021 at 11:53 pm

    With this certification, U.S. government and public sector customers can now use Azure VMware Solution as a compliant FedRAMP cloud computing environment, ensuring it meets the demanding standards for security and information protection.

  • Azure expands HITRUST certification across 51 Azure regions
    by Azure service updates on August 23, 2021 at 9:38 pm

    Azure expands offering and region coverage to Azure customers with its 2021 HITRUST validated assessment.

  • Azure Database for PostgreSQL - Hyperscale (Citus) now compliant with additional certifications
    by Azure service updates on June 9, 2021 at 4:00 pm

    New certifications are now available for Hyperscale (Citus) on Azure Database for PostgreSQL, a managed service running the open-source Postgres database on Azure.

  • Azure expands PCI DSS certification
    by Azure service updates on March 15, 2021 at 5:02 pm

    You can now leverage Azure’s Payment Card Industry Data Security Standard (PCI DSS) certification across all live Azure regions.

  • 172 Azure offerings achieve HITRUST certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure expands its depth of offerings to Azure customers with its latest independent HITRUST assessment.

  • Azure achieves its first PCI 3DS certification
    by Azure service updates on February 3, 2021 at 10:24 pm

    Azure’s PCI 3DS Attestation of Compliance, PCI 3DS Shared Responsibility Matrix, and PCI 3DS whitepaper are now available.

  • Azure Databricks Achieves FedRAMP High Authorization on Microsoft Azure Government
    by Azure service updates on November 25, 2020 at 5:00 pm

    With this certification, customers can now use Azure Databricks to process the U.S. government’s most sensitive, unclassified data in cloud computing environments, including data that involves the protection of life and financial assets.

  • New SAP HANA Certified Memory-Optimized Virtual Machines now available
    by Azure service updates on November 12, 2020 at 5:01 pm

    We are expanding our SAP HANA certifications, enabling you to run production SAP HANA workloads on the Edsv4 virtual machine sizes.

  • Azure achieves Service Organization Controls compliance for 14 additional services
    by Azure service updates on November 11, 2020 at 5:10 pm

    Azure gives you some of the industry's broadest certifications for the critical SOC 1, 2, and 3 compliance offerings, which are widely used around the world.

  • Announcing the unified Azure Certified Device program
    by Azure service updates on September 22, 2020 at 4:05 pm

    A unified and enhanced Azure Certified Device program was announced at Microsoft Ignite, expanding on previous Microsoft certification offerings that validate IoT devices meet specific capabilities and are built to run on Azure. This program offers a low-cost opportunity for device builders to increase visibility of their products while making it easy for solution builders and end customers to find the right device for their IoT solutions.

  • IoT Security updates for September 2020
    by Azure service updates on September 22, 2020 at 4:05 pm

    New Azure IoT Security product updates include improvements around monitoring, edge nesting and the availability of Azure Defender for IoT.

  • Azure Certified for Plug and Play is now available
    by Azure service updates on August 27, 2020 at 12:21 am

    IoT Plug and Play device certification is now available from Microsoft as part of the Azure Certified device program.

  • Azure France has achieved GSMA accreditation
    by Azure service updates on August 6, 2020 at 5:45 pm

    Azure has added an important compliance offering for telecommunications in France, the Global System for Mobile Communications Association (GSMA) Security Accreditation Scheme for Subscription Management (SAS-SM).

  • Azure Red Hat OpenShift is now ISO 27001 certified
    by Azure service updates on July 21, 2020 at 4:00 pm

    To help you meet your compliance obligations across regulated industries and markets worldwide, Azure Red Hat OpenShift is now ISO 27001 certified.

  • Azure Lighthouse updates—April 2020
    by Azure service updates on June 1, 2020 at 4:00 pm

    Several critical updates have been made to Azure Lighthouse, including FEDRAMP certification, delegation opt-out, and Azure Backup reports.

  • Azure NetApp Files—New certifications, increased SLA, expanded regional availability
    by Azure service updates on May 19, 2020 at 4:00 pm

    The SLA guarantee for Azure NetApp Files has increased to 99.99 percent. In addition, NetApp Files is now HIPAA and FedRAMP certified, and regional availability has been increased.

  • Kubernetes on Azure Stack Hub in GA
    by Azure service updates on February 25, 2020 at 5:00 pm

    We now support Kubernetes cluster deployment on Azure Stack Hub, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS Engine on Azure Stack Hub.

  • Azure Firewall Spring 2020 updates
    by Azure service updates on February 19, 2020 at 5:00 pm

    Azure Firewall is now ICSA Labs certified. In addition, several key Azure Firewall capabilities have recently been released into general availability (GA) and preview.

  • Azure IoT C# and Java SDKs release new long-term support (LTS) branches
    by Azure service updates on February 14, 2020 at 5:00 pm

    The Azure IoT Java and C# SDKs have each now released new long-term support (LTS) branches.

  • HPC Cache receives ISO certifications, adds stopping feature, and new region
    by Azure service updates on February 11, 2020 at 5:00 pm

    Azure HPC Cache has received new ISO 27001, 27018 and 27701 certifications, adds new features to manage storage caching in performance-driven workloads, and expands service access to Korea Central.

  • Azure Blueprint for FedRAMP High now available in new regions
    by Azure service updates on February 3, 2020 at 5:00 pm

    The Azure Blueprint for FedRAMP High is now available in both Azure Government and Azure Public regions. This is in addition to the Azure Blueprint for FedRAMP Moderate released in November, 2019.

  • Azure Databricks Is now HITRUST certified
    by Azure service updates on January 22, 2020 at 5:01 pm

    Azure Databricks is now certified for the HITRUST Common Security Framework (HITRUST CSF®), a widely adopted security accreditation for the healthcare industry. With this certification, healthcare customers can now use volumes of clinical data to drive innovation with Azure Databricks while keeping security and risk under control.

  • Microsoft plans to establish new cloud datacenter region in Qatar
    by Azure service updates on December 11, 2019 at 8:00 pm

    Microsoft recently announced plans to establish a new cloud datacenter region in Qatar to deliver its intelligent, trusted cloud services and expand the Microsoft global cloud infrastructure to 55 cloud regions in 20 countries.

  • Azure NetApp Files HANA certification and new region availability
    by Azure service updates on November 4, 2019 at 5:00 pm

    Azure NetApp Files, one of the fastest growing bare-metal Azure services, has achieved SAP HANA certification for both scale-up and scale-out deployments.

  • Azure achieves TruSight certification
    by Azure service updates on September 23, 2019 at 5:00 pm

    Azure achieved certification for TruSight, an industry-backed, best-practices third-party assessment utility.

  • IoT Plug and Play Preview is now available
    by Azure service updates on August 21, 2019 at 4:00 pm

    With IoT Plug and Play Preview, solution developers can start using Azure IoT Central to build solutions that integrate seamlessly with IoT devices enabled with IoT Plug and Play.

  • View linked GitHub activity from the Kanban board
    by Azure service updates on June 21, 2019 at 5:00 pm

    We continue to enhance the Azure Boards integration with GitHub. Now you can get information of your linked GitHub commits, pull requests and issues on your Kanban board. This information will give you a quick sense of where an item is at and allow you to directly navigate out to the GitHub commit, pull request, or issue for more details.

  • Video Indexer is now ISO, SOC, HiTRUST, FedRAMP, HIPAA, PCI certified
    by Azure service updates on April 2, 2019 at 9:08 pm

    Video Indexer has received new certifications to fit with enterprise certification requirements.

  • Azure South Africa regions are now available
    by Azure service updates on March 7, 2019 at 6:00 pm

    Azure services are available from new cloud regions in Johannesburg (South Africa North) and Cape Town (South Africa West), South Africa. The launch of these regions is a milestone for Microsoft.

  • Azure DevOps Roadmap update for 2019 Q1
    by Azure service updates on February 14, 2019 at 8:22 pm

    We updated the Features Timeline to provide visibility on our key investments for this quarter.

  • Azure Stack—FedRAMP High documentation now available
    by Azure service updates on November 1, 2018 at 7:00 pm

    FedRAMP High documentation is now available for Azure Stack customers.

  • Kubernetes on Azure Stack in preview
    by Azure service updates on November 1, 2018 at 7:00 pm

    We now support Kubernetes cluster deployment on Azure Stack, a certified Kubernetes Cloud Provider. Install Kubernetes using Azure Resource Manager templates generated by ACS-Engine on Azure Stack.

  • Azure Stack Infrastructure—compliance certification guidance
    by Azure service updates on November 1, 2018 at 7:00 pm

    We have created documentation to describe how Azure Stack infrastructure satisfies regulatory technical controls for PCI-DSS and CSA-CCM.

  • Logic Apps is ISO, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant
    by Azure service updates on July 18, 2017 at 5:05 pm

    The Logic Apps feature of Azure App Service is now ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, PCI DSS, SOC, and EU Model Clauses compliant.

  • Apache Kafka on HDInsight with Azure Managed Disks
    by Azure service updates on June 30, 2017 at 3:44 pm

    We're pleased to announce Apache Kafka with Azure Managed Disks Preview on the HDInsight platform. Users will now be able to deploy Kafka clusters with managed disks straight from the Azure portal, with no signup necessary.

  • Azure Backup for Windows Server system state
    by Azure service updates on June 14, 2017 at 10:54 pm

    Customers will now be able to perform comprehensive, secure, and reliable Windows Server recoveries. We will be extending the data backup capabilities of the Azure Backup agent so that it integrates with the Windows Server Backup feature, available natively on every Windows Server.

  • Azure Data Catalog is ISO, CSA STAR, HIPAA, EU Model Clauses compliant
    by Azure service updates on March 7, 2017 at 12:00 am

    Azure Data Catalog is ISO/IEC 27001, ISO/IEC 27018, HIPAA, CSA STAR, and EU Model Clauses compliant.

  • Azure compliance: Azure Cosmos DB certified for ISO 27001, HIPAA, and the EU Model Clauses
    by Azure service updates on March 25, 2016 at 10:00 am

    The Azure Cosmos DB team is excited to announce that Azure Cosmos DB is ISO 27001, HIPAA, and EU Model Clauses compliant.

  • Compliance updates for Azure public cloud
    by Azure service updates on March 16, 2016 at 9:24 pm

    We’re adding more certification coverage to our Azure portfolio, so regulated customers can take advantage of new services.

  • Protect and recover your production workloads in Azure
    by Azure service updates on October 2, 2014 at 5:00 pm

    With Azure Site Recovery, you can protect and recover your production workloads while saving on capital and operational expenditures.

  • ISO Certification expanded to include more Azure services
    by Azure service updates on January 17, 2014 at 1:00 am

    Azure ISO Certification expanded to include SQL Database, Active Directory, Traffic Manager, Web Sites, BizTalk Services, Media Services, Mobile Services, Service Bus, Multi-Factor Authentication, and HDInsight.

Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Azure/Microsoft Cloud Solution Architect – $141,748/year
Google Cloud Associate Cloud Engineer – $145,769/year
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year

Top 100 Data Science and Data Analytics Interview Questions and Answers

Data Science Bias Variance Trade-off

Below are the Top 100 Data Science and Data Analytics Interview Questions and Answers dumps.

What is Data Science? 

Data Science is a blend of various tools, algorithms, and machine learning principles with the goal to discover hidden patterns from the raw data. How is this different from what statisticians have been doing for years? The answer lies in the difference between explaining and predicting: statisticians work a posteriori, explaining the results and designing a plan; data scientists use historical data to make predictions.

AWS Data analytics DAS-C01 Exam Prep
AWS Data analytics DAS-C01 on iOS pro

AWS Data analytics DAS-C01 Exam Prep PRO App:
Very Similar to real exam, Countdown timer, Score card, Show/Hide Answers, Cheat Sheets, FlashCards, Detailed Answers and References
No ADS, Access All Quiz Detailed Answers, Reference and Score Card

How does data cleaning play a vital role in the analysis? 

Data cleaning can help in analysis because:

  • Cleaning data from multiple sources helps transform it into a format that data analysts or data scientists can work with.
  • Data Cleaning helps increase the accuracy of the model in machine learning.
  • It is a cumbersome process because as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data generated by these sources.
  • It might take up to 80% of the total time just to clean the data, making it a critical part of the analysis task.

What is linear regression? What do the terms p-value, coefficient, and r-squared value mean? What is the significance of each of these components?

Reference  

Imagine you want to predict the price of a house. That will depend on some factors, called independent variables, such as location, size, and year of construction. If we assume there is a linear relationship between these variables and the price (our dependent variable), then the price is predicted by the following function: Y = a + bX (or, with several predictors, Y = a + b1X1 + b2X2 + … + bnXn).
The p-value in the regression table is the minimum significance level α at which the coefficient is considered relevant. The lower the p-value, the more important the variable is in predicting the price. Usually we set a 5% level, so that we have 95% confidence that the variable is relevant.
The p-value is used as an alternative to rejection points to provide the smallest level of significance at which the null hypothesis would be rejected. A smaller p-value means that there is stronger evidence in favor of the alternative hypothesis.
The coefficient value signifies how much the mean of the dependent variable changes given a one-unit shift in the independent variable while holding other variables in the model constant. This property of holding the other variables constant is crucial because it allows you to assess the effect of each variable in isolation from the others.
R-squared (R2) is a statistical measure that represents the proportion of the variance of the dependent variable that is explained by the independent variable or variables in a regression model.
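As a minimal illustration of these quantities (assuming the statsmodels library and a small made-up house-price dataset, neither of which comes from the original answer), the summary of an ordinary least squares fit exposes the coefficients, their p-values and the R-squared value discussed above:

  # Hedged sketch: fit Y = a + bX by OLS and inspect coefficient, p-value and R-squared.
  # The data below is synthetic and purely illustrative.
  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  size = rng.uniform(50, 250, 100)                              # house size (independent variable)
  price = 50_000 + 1_200 * size + rng.normal(0, 20_000, 100)    # house price (dependent variable)

  X = sm.add_constant(size)          # adds the intercept term a in Y = a + bX
  model = sm.OLS(price, X).fit()

  print(model.params)                # estimated a (const) and b (slope)
  print(model.pvalues)               # p-value for each coefficient
  print(model.rsquared)              # proportion of variance explained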

Credit: Steve Nouri

What is sampling? How many sampling methods do you know? 

Reference

 

Data sampling is a statistical analysis technique used to select, manipulate and analyze a representative subset of data points to identify patterns and trends in the larger data set being examined. It enables data scientists, predictive modelers and other data analysts to work with a small, manageable amount of data about a statistical population to build and run analytical models more quickly, while still producing accurate findings.

Sampling can be particularly useful with data sets that are too large to efficiently analyze in full – for example, in big data analytics applications or surveys. Identifying and analyzing a representative sample is more efficient and cost-effective than surveying the entirety of the data or population.
An important consideration, though, is the size of the required data sample and the possibility of introducing a sampling error. In some cases, a small sample can reveal the most important information about a data set. In others, using a larger sample can increase the likelihood of accurately representing the data as a whole, even though the increased size of the sample may impede ease of manipulation and interpretation.
There are many different methods for drawing samples from data; the ideal one depends on the data set and situation. Sampling can be based on probability, an approach that uses random numbers that correspond to points in the data set to ensure that there is no correlation between points chosen for the sample. Further variations in probability sampling include:


  • Simple random sampling: Software is used to randomly select subjects from the whole population.
  • Stratified sampling: Subsets of the data set or population are created based on a common factor, and samples are randomly collected from each subgroup. A sample is drawn from each stratum (using a random sampling method like simple random sampling or systematic sampling). EX: In the image below, let's say you need a sample size of 6. Two members from each group (yellow, red, and blue) are selected randomly. Make sure to sample proportionally: in this simple example, 1/3 of each group (2/6 yellow, 2/6 red and 2/6 blue) has been sampled. If you have one group that's a different size, make sure to adjust your proportions. For example, if you had 9 yellow, 3 red and 3 blue, a 5-item sample would consist of 3/9 yellow (i.e. one third), 1/3 red and 1/3 blue.
  • Cluster sampling: The larger data set is divided into subsets (clusters) based on a defined factor, then a random sampling of clusters is analyzed. The sampling unit is the whole cluster; instead of sampling individuals from within each group, a researcher studies whole clusters. EX: In the image below, the clusters are natural groupings by head color (yellow, red, blue). A sample size of 6 is needed, so two of the complete clusters are selected randomly (in this example, groups 2 and 4 are chosen).


Data Science Stratified Sampling – Cluster Sampling

  • Multistage sampling: A more complicated form of cluster sampling, this method also involves dividing the larger population into a number of clusters. Second-stage clusters are then broken out based on a secondary factor, and those clusters are then sampled and analyzed. This staging could continue as multiple subsets are identified, clustered and analyzed.
  • Systematic sampling: A sample is created by setting an interval at which to extract data from the larger population – for example, selecting every 10th row in a spreadsheet of 200 items to create a sample size of 20 rows to analyze.

Sampling can also be based on non-probability, an approach in which a data sample is determined and extracted based on the judgment of the analyst. As inclusion is determined by the analyst, it can be more difficult to extrapolate whether the sample accurately represents the larger population than when probability sampling is used.

Non-probability data sampling methods include:
  • Convenience sampling: Data is collected from an easily accessible and available group.
  • Consecutive sampling: Data is collected from every subject that meets the criteria until the predetermined sample size is met.
  • Purposive or judgmental sampling: The researcher selects the data to sample based on predefined criteria.
  • Quota sampling: The researcher ensures equal representation within the sample for all subgroups in the data set or population (random sampling is not used).

Quota sampling

Once generated, a sample can be used for predictive analytics. For example, a retail business might use data sampling to uncover patterns about customer behavior and predictive modeling to create more effective sales strategies.
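To make two of the probability methods above concrete, here is a minimal, hedged sketch (assuming pandas and a made-up DataFrame whose "color" column plays the role of the strata):

  # Sketch: simple random sampling vs. stratified sampling with pandas (synthetic data).
  import pandas as pd

  df = pd.DataFrame({
      "color": ["yellow"] * 9 + ["red"] * 3 + ["blue"] * 3,
      "value": range(15),
  })

  # Simple random sampling: every row has the same chance of being selected.
  simple = df.sample(n=5, random_state=42)

  # Stratified sampling: draw the same fraction from each subgroup (stratum).
  stratified = df.groupby("color").sample(frac=1/3, random_state=42)

  print(simple)
  print(stratified)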

Credit: Steve Nouri

What are the assumptions required for linear regression?

There are four major assumptions:

  • There is a linear relationship between the dependent variable and the regressors, meaning the model you are creating actually fits the data.
  • The errors or residuals of the data are normally distributed and independent from each other.
  • There is minimal multicollinearity between explanatory variables.
  • Homoscedasticity: the variance around the regression line is the same for all values of the predictor variable.

What is a statistical interaction?

Reference: Statistical Interaction

Basically, an interaction is when the effect of one factor (input variable) on the dependent variable (output variable) differs among levels of another factor. When two or more independent variables are involved in a research design, there is more to consider than simply the "main effect" of each of the independent variables (also termed "factors"). That is, the effect of one independent variable on the dependent variable of interest may not be the same at all levels of the other independent variable. Another way to put this is that the effect of one independent variable may depend on the level of the other independent variable. In order to find an interaction, you must have a factorial design, in which the two (or more) independent variables are "crossed" with one another so that there are observations at every combination of levels of the two independent variables. EX: stress level and practice when memorizing words: highly stressed participants may benefit less from extra practice, so the combination can produce lower performance than either main effect alone would suggest.

What is selection bias? 

Reference

Selection (or ‘sampling’) bias occurs when the sample data that is gathered and prepared for modeling has characteristics that are not representative of the true, future population of cases the model will see.
That is, active selection bias occurs when a subset of the data is systematically (i.e., non-randomly) excluded from analysis.

Selection bias is a kind of error that occurs when the researcher decides what has to be studied. It is associated with research where the selection of participants is not random. Therefore, some conclusions of the study may not be accurate.

The types of selection bias include:
  • Sampling bias: a systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample.
  • Time interval: a trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.
  • Data: when specific subsets of data are chosen to support a conclusion, or bad data are rejected on arbitrary grounds instead of according to previously stated or generally agreed criteria.
  • Attrition: attrition bias is a kind of selection bias caused by attrition (loss of participants), i.e. discounting trial subjects/tests that did not run to completion.

What is an example of a data set with a non-Gaussian distribution?

Reference


The Gaussian distribution is part of the Exponential family of distributions, but there are a lot more of them, with the same sort of ease of use, in many cases, and if the person doing the machine learning has a solid grounding in statistics, they can be utilized where appropriate.

  • Binomial, Bin(n, p): e.g. multiple tosses of a coin. The binomial distribution consists of the probabilities of each of the possible numbers of successes on n trials for independent events that each have a probability p of occurring.
  • Bernoulli, Be(p) = Bin(1, p): a single such trial.
  • Poisson, Pois(λ): counts of events occurring in a fixed interval.
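As a small illustrative sketch (assuming NumPy; the parameters are arbitrary), samples from these non-Gaussian distributions can be drawn directly:

  # Sketch: drawing samples from some non-Gaussian distributions with NumPy.
  import numpy as np

  rng = np.random.default_rng(0)
  binomial  = rng.binomial(n=10, p=0.3, size=1000)   # Bin(10, 0.3): successes in 10 tosses
  bernoulli = rng.binomial(n=1, p=0.3, size=1000)    # Be(0.3) = Bin(1, 0.3): a single toss
  poisson   = rng.poisson(lam=4, size=1000)          # Pois(4): event counts per interval

  print(binomial.mean(), bernoulli.mean(), poisson.mean())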

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is error introduced in the model by an overly complex algorithm; the model performs very well on the training set but poorly on the test set. It can lead to high sensitivity to the training data and overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.
3. The decision tree has low bias and high variance, you can decrease the depth of the tree or use fewer attributes.
4. The linear regression has low variance and high bias, you can increase the number of features or use another regression that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.
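A minimal sketch of this trade-off, assuming scikit-learn and a synthetic sine-wave dataset (both illustrative choices, not part of the original answer): a degree-1 polynomial underfits (high bias), a very high degree overfits (high variance), and comparing training and test error reveals the difference.

  # Sketch: bias-variance trade-off via polynomial degree on synthetic data.
  import numpy as np
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import PolynomialFeatures
  from sklearn.linear_model import LinearRegression
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import mean_squared_error

  rng = np.random.default_rng(0)
  X = np.sort(rng.uniform(0, 1, 80)).reshape(-1, 1)
  y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 80)

  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

  for degree in (1, 4, 15):   # high bias, balanced, high variance
      model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
      model.fit(X_train, y_train)
      print(degree,
            mean_squared_error(y_train, model.predict(X_train)),   # training error
            mean_squared_error(y_test, model.predict(X_test)))     # test error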

 

What is a confusion matrix?

The confusion matrix is a 2×2 table that contains the 4 outputs provided by a binary classifier.

A data set used for performance evaluation is called a test data set. It should contain the correct labels and the predicted labels. The predicted labels will be exactly the same as the correct labels if the performance of the binary classifier is perfect; in real-world scenarios the predicted labels usually match only part of the observed labels.
A binary classifier predicts all data instances of a test data set as either positive or negative, which produces four outcomes: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Basic measures derived from the confusion matrix include accuracy, error rate, sensitivity (recall), specificity, and precision.
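A hedged sketch of the idea, assuming scikit-learn and made-up labels; it extracts TP, FP, TN, FN and a few of the basic measures mentioned above:

  # Sketch: confusion matrix and derived measures on toy binary labels.
  from sklearn.metrics import confusion_matrix

  y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
  y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

  tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

  accuracy    = (tp + tn) / (tp + tn + fp + fn)
  sensitivity = tp / (tp + fn)        # recall / true positive rate
  specificity = tn / (tn + fp)
  precision   = tp / (tp + fp)

  print(tp, fp, tn, fn, accuracy, sensitivity, specificity, precision)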

What is the difference between “long” and “wide” format data?

In the wide-format, a subject’s repeated responses will be in a single row, and each response is in a separate column. In the long-format, each row is a one-time point per subject. You can recognize data in wide format by the fact that columns generally represent groups (variables).

difference between “long” and “wide” format data
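A small, hedged pandas sketch of the two formats (the subject/time columns are made up for illustration):

  # Sketch: converting between wide and long format with pandas.
  import pandas as pd

  wide = pd.DataFrame({
      "subject": ["A", "B"],
      "t1": [5, 7],
      "t2": [6, 8],
  })

  # Wide -> long: one row per subject per time point.
  long = wide.melt(id_vars="subject", var_name="time", value_name="response")

  # Long -> wide: back to one row per subject.
  wide_again = long.pivot(index="subject", columns="time", values="response").reset_index()

  print(long)
  print(wide_again)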

What do you understand by the term Normal Distribution?

Data can be distributed in different ways, with a skew to the left or to the right, or it can be all jumbled up. However, there is also a chance that data is distributed around a central value without any bias to the left or right, reaching a normal distribution in the form of a bell-shaped curve.

Data Science: Normal Distribution

The random variables are distributed in the form of a symmetrical, bell-shaped curve. Properties of Normal Distribution are as follows:

1. Unimodal (Only one mode)
2. Symmetrical (left and right halves are mirror images)
3. Bell-shaped (maximum height (mode) at the mean)
4. Mean, Mode, and Median are all located in the center
5. Asymptotic

What is correlation and covariance in statistics?

Correlation is considered or described as the best technique for measuring and also for estimating the quantitative relationship between two variables. Correlation measures how strongly two variables are related. Given two random variables, it is the covariance between both divided by the product of the two standard deviations of the single variables, hence always between -1 and 1.

correlation and covariance

Covariance is a measure that indicates the extent to which two random variables change together. It explains the systematic relation between a pair of random variables, wherein a change in one variable is accompanied by a corresponding change in the other variable.

correlation and covariance in statistics
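As a minimal numeric illustration (assuming NumPy and two made-up variables), correlation is just the covariance rescaled by the two standard deviations:

  # Sketch: covariance vs. correlation with NumPy on toy data.
  import numpy as np

  x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
  y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

  cov = np.cov(x, y)[0, 1]                                 # covariance: scale-dependent
  corr = cov / (np.std(x, ddof=1) * np.std(y, ddof=1))     # correlation: always in [-1, 1]

  print(cov, corr, np.corrcoef(x, y)[0, 1])                # the last value matches corr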

What is the difference between Point Estimates and Confidence Interval? 

Point Estimation gives us a particular value as an estimate of a population parameter. Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters.

A confidence interval gives us a range of values which is likely to contain the population parameter. The confidence interval is generally preferred, as it tells us how likely this interval is to contain the population parameter. This likeliness or probability is called the Confidence Level or Confidence Coefficient and is represented by 1 − α, where α is the level of significance.

What is the goal of A/B Testing?

It is hypothesis testing for a randomized experiment with two variants, A and B.
The goal of A/B testing is to identify changes to a web page that maximize or increase an outcome of interest. A/B testing is a fantastic method for figuring out the best online promotional and marketing strategies for your business. It can be used to test everything from website copy to sales emails to search ads. An example of this could be identifying the click-through rate for a banner ad.
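A hedged sketch of how such a test is often evaluated, assuming the statsmodels library and made-up click counts (a two-proportion z-test is one common choice, not the only one):

  # Sketch: comparing click-through rates of variants A and B with a two-proportion z-test.
  from statsmodels.stats.proportion import proportions_ztest

  clicks = [320, 370]       # conversions observed for variant A and variant B
  visits = [4000, 4000]     # visitors shown each variant

  stat, p_value = proportions_ztest(count=clicks, nobs=visits)
  print(p_value)            # a small p-value suggests the two click-through rates differ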

What is p-value?

When you perform a hypothesis test in statistics, a p-value can help you determine the strength of your results. p-value is the minimum significance level at which you can reject the null hypothesis. The lower the p-value, the more likely you reject the null hypothesis.

What do you understand by statistical power of sensitivity and how do you calculate it? 

Sensitivity is commonly used to validate the accuracy of a classifier (Logistic, SVM, Random Forest etc.). Sensitivity (the true positive rate) = TP / (TP + FN).

 

Why is Re-sampling done?

A Gentle Introduction to Statistical Sampling and Resampling

  • Sampling is an active process of gathering observations with the intent of estimating a population variable.
  • Resampling is a methodology of economically using a data sample to improve the accuracy and quantify the uncertainty of a population parameter. Resampling methods, in fact, make use of a nested resampling method.

Once we have a data sample, it can be used to estimate the population parameter. The problem is that we only have a single estimate of the population parameter, with little idea of the variability or uncertainty in the estimate. One way to address this is by estimating the population parameter multiple times from our data sample. This is called resampling. Statistical resampling methods are procedures that describe how to economically use available data to estimate a population parameter. The result can be both a more accurate estimate of the parameter (such as taking the mean of the estimates) and a quantification of the uncertainty of the estimate (such as adding a confidence interval).

Resampling methods are very easy to use, requiring little mathematical knowledge. A downside of the methods is that they can be computationally very expensive, requiring tens, hundreds, or even thousands of resamples in order to develop a robust estimate of the population parameter.

The key idea is to resample from the original data — either directly or via a fitted model — to create replicate datasets, from which the variability of the quantiles of interest can be assessed without longwinded and error-prone analytical calculation. Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Each new subsample from the original data sample is used to estimate the population parameter. The sample of estimated population parameters can then be considered with statistical tools in order to quantify the expected value and variance, providing measures of the uncertainty of the estimate. Statistical sampling methods can be used in the selection of a subsample from the original sample.

A key difference is that the process must be repeated multiple times. The problem with this is that there will be some relationship between the samples, as observations will be shared across multiple subsamples. This means that the subsamples and the estimated population parameters are not strictly identical and independently distributed. This has implications for statistical tests performed on the sample of estimated population parameters downstream, i.e. paired statistical tests may be required.

Two commonly used resampling methods that you may encounter are k-fold cross-validation and the bootstrap; a short sketch of both follows after the lists below.

  • Bootstrap. Samples are drawn from the dataset with replacement (allowing the same sample to appear more than once in the sample), where those instances not drawn into the data sample may be used for the test set.
  • k-fold Cross-Validation. A dataset is partitioned into k groups, where each group is given the opportunity of being used as a held out test set leaving the remaining groups as the training set. The k-fold cross-validation method specifically lends itself to use in the evaluation of predictive models that are repeatedly trained on one subset of the data and evaluated on a second held-out subset of the data.  

Resampling is done in any of these cases:

  • Estimating the accuracy of sample statistics by using subsets of accessible data or drawing randomly with replacement from a set of data points
  • Substituting labels on data points when performing significance tests
  • Validating models by using random subsets (bootstrapping, cross-validation)
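Here is the promised sketch of the two methods, assuming scikit-learn/NumPy and a tiny made-up dataset:

  # Sketch: bootstrap resampling and k-fold cross-validation splits.
  import numpy as np
  from sklearn.utils import resample
  from sklearn.model_selection import KFold

  data = np.arange(10)

  # Bootstrap: sample with replacement and re-estimate the mean many times.
  boot_means = [resample(data, replace=True, n_samples=len(data), random_state=i).mean()
                for i in range(100)]
  print(np.mean(boot_means), np.std(boot_means))   # point estimate and its uncertainty

  # k-fold: every observation is held out exactly once across the folds.
  for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(data):
      print(train_idx, test_idx)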

What are the differences between over-fitting and under-fitting?

In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on general untrained data.

In overfitting, a statistical model describes random error or noise instead of the underlying relationship.
Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted has poor predictive performance, as it overreacts to minor fluctuations in the training data.

Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data.
Such a model too would have poor predictive performance.

 

How to combat Overfitting and Underfitting?

To combat overfitting:
1. Add noise
2. Feature selection
3. Increase training set
4. L2 (ridge) or L1 (lasso) regularization; L1 tends to drive some weights exactly to zero (performing feature selection), while L2 shrinks weights toward zero without eliminating them
5. Use cross-validation techniques, such as k folds cross-validation
6. Boosting and bagging
7. Dropout technique
8. Perform early stopping
9. Remove inner layers
To combat underfitting:
1. Add features
2. Increase time of training


What is regularization? Why is it useful?

Regularization is the process of adding a tuning parameter (penalty term) to a model to induce smoothness and prevent overfitting. This is most often done by adding to the loss a constant multiple of the norm of the weight vector: the L1 norm (Lasso, the sum of absolute weights) or the squared L2 norm (Ridge, the sum of squared weights). The model predictions should then minimize this regularized loss function calculated on the training set.
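A minimal scikit-learn sketch of the two penalties on synthetic data (the dataset and alpha values are arbitrary illustrations): Lasso zeroes out many coefficients while Ridge only shrinks them.

  # Sketch: comparing unregularized, L2 (Ridge) and L1 (Lasso) regression coefficients.
  from sklearn.datasets import make_regression
  from sklearn.linear_model import LinearRegression, Ridge, Lasso

  X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                         noise=10.0, random_state=0)

  ols   = LinearRegression().fit(X, y)
  ridge = Ridge(alpha=1.0).fit(X, y)     # shrinks all weights toward zero
  lasso = Lasso(alpha=1.0).fit(X, y)     # drives some weights exactly to zero

  # Count the coefficients that remain (effectively) non-zero under each model.
  print((abs(ols.coef_) > 1e-6).sum(),
        (abs(ridge.coef_) > 1e-6).sum(),
        (abs(lasso.coef_) > 1e-6).sum())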

What Is the Law of Large Numbers? 

It is a theorem that describes the result of performing the same experiment a large number of times. This theorem forms the basis of frequency-style thinking. It says that the sample means, the sample variance and the sample standard deviation converge to what they are trying to estimate. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.

What Are Confounding Variables?

In statistics, a confounder is a variable that influences both the dependent variable and independent variable.

If you are researching whether a lack of exercise leads to weight gain:
lack of exercise = independent variable
weight gain = dependent variable
A confounding variable here would be any other variable that affects both of these variables, such as the age of the subject.

What is Survivorship Bias?

It is the logical error of focusing on the aspects that survived some process and casually overlooking those that did not, because of their lack of prominence. This can lead to wrong conclusions in numerous ways. For example, during a recession, if you look only at the businesses that survived, you might note that they are performing poorly; yet they performed better than the rest, which failed and were therefore removed from the time series.

Explain how a ROC curve works?

The ROC curve is a graphical representation of the contrast between true positive rates and false positive rates at various thresholds. It is often used as a proxy for the trade-off between the sensitivity (true positive rate) and false positive rate.

Data Science ROC Curve
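A minimal sketch, assuming scikit-learn and made-up scores, of how the curve's points (false positive rate and true positive rate at each threshold) and the area under it are computed:

  # Sketch: ROC curve points and AUC from toy labels and predicted probabilities.
  from sklearn.metrics import roc_curve, roc_auc_score

  y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
  y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.65, 0.3]   # model-predicted probabilities

  fpr, tpr, thresholds = roc_curve(y_true, y_score)
  print(fpr, tpr)                          # one (FPR, TPR) point per threshold
  print(roc_auc_score(y_true, y_score))    # area under the ROC curve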

What is TF/IDF vectorization?

TF-IDF, short for term frequency–inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining.

Data Science TF IDF Vectorization

The TF-IDF value increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general.
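A small, hedged sketch with scikit-learn's TfidfVectorizer on a made-up three-document corpus; frequent corpus-wide words such as "the" end up with lower weights:

  # Sketch: TF-IDF weighting of a tiny corpus.
  from sklearn.feature_extraction.text import TfidfVectorizer

  corpus = [
      "the cat sat on the mat",
      "the dog sat on the log",
      "cats and dogs are pets",
  ]

  vectorizer = TfidfVectorizer()
  tfidf = vectorizer.fit_transform(corpus)     # sparse matrix: documents x vocabulary

  print(vectorizer.get_feature_names_out())
  print(tfidf.toarray().round(2))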

Python or R – Which one would you prefer for text analytics?

We would prefer Python for the following reasons:
  • Python has the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools.
  • R is better suited to statistical modeling and machine learning workflows, while Python's general-purpose libraries make it easier to build end-to-end text-processing pipelines.
  • Python generally performs fast enough for most types of text analytics.

Differentiate between univariate, bivariate and multivariate analysis. 

Univariate analyses are descriptive statistical analysis techniques that involve a single variable at a given point in time. For example, a pie chart of sales by territory involves only one variable, so the analysis can be referred to as univariate analysis.

The bivariate analysis attempts to understand the difference between two variables at a time as in a scatterplot. For example, analyzing the volume of sale and spending can be considered as an example of bivariate analysis.

Multivariate analysis deals with the study of more than two variables to understand the effect of variables on the responses.

Explain Star Schema

It is a traditional database schema with a central table. Satellite tables map IDs to physical names or descriptions and can be connected to the central fact table using the ID fields; these tables are known as lookup tables and are principally useful in real-time applications, as they save a lot of memory. Sometimes star schemas involve several layers of summarization to recover information faster.

What is Cluster Sampling?

Cluster sampling is a technique used when it becomes difficult to study the target population spread across a wide area and simple random sampling cannot be applied. Cluster Sample is a probability sample where each sampling unit is a collection or cluster of elements.

For example, a researcher wants to survey the academic performance of high school students in Japan. He can divide the entire population of Japan into different clusters (cities). Then the researcher selects a number of clusters depending on his research through simple or systematic random sampling.

What is Systematic Sampling? 

Systematic sampling is a statistical technique where elements are selected from an ordered sampling frame at a regular interval. In systematic sampling, the list is progressed in a circular manner, so once you reach the end of the list, it is progressed from the top again. The best example of systematic sampling is the equal-probability method.

What are Eigenvectors and Eigenvalues? 

Eigenvectors are used for understanding linear transformations. In data analysis, we usually calculate the eigenvectors for a correlation or covariance matrix. Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing or stretching.
Eigenvalue can be referred to as the strength of the transformation in the direction of eigenvector or the factor by which the compression occurs.
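A minimal NumPy sketch (with made-up correlated data): the eigenvectors of the covariance matrix give the directions of the transformation and the eigenvalues give their strengths.

  # Sketch: eigenvalues/eigenvectors of a covariance matrix.
  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.multivariate_normal(mean=[0, 0], cov=[[3, 1], [1, 2]], size=500)

  cov = np.cov(X, rowvar=False)                     # 2x2 sample covariance matrix
  eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh is appropriate for symmetric matrices

  print(eigenvalues)        # strength of the transformation along each direction
  print(eigenvectors)       # the directions themselves (one per column)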

Give Examples where a false positive is important than a false negative?

Let us first understand what false positives and false negatives are:

  • False Positives are the cases where you wrongly classified a non-event as an event a.k.a Type I error
  • False Negatives are the cases where you wrongly classify events as non-events, a.k.a Type II error.

Example 1: In the medical field, assume you have to give chemotherapy to patients. Assume a patient comes to the hospital and tests positive for cancer based on the lab prediction, but he actually doesn't have cancer. This is a case of a false positive. Here it is of utmost danger to start chemotherapy on this patient when he actually does not have cancer. In the absence of cancerous cells, chemotherapy will do certain damage to his normal healthy cells and might lead to severe diseases, even cancer.

Example 2: Let's say an e-commerce company decided to give a $1,000 gift voucher to the customers whom they assume will purchase at least $10,000 worth of items. They send the free voucher directly to 100 customers without any minimum purchase condition, because they assume they will make at least a 20% profit on sold items above $10,000. Now the issue is: if the model wrongly flags customers as big spenders (false positives), the company sends $1,000 vouchers to customers who never make the expected purchases and loses money on every one of them.

Give Examples where a false negative important than a false positive? And vice versa?

Example 1 FN: a jury or judge wrongly letting a guilty person go free.

Example 2 FN: fraud detection, where missing a fraudulent transaction (a false negative) is costlier than flagging a legitimate one.

Example 3 FP: evaluating a promotional voucher campaign, where sending the offer to customers who were wrongly predicted to redeem it (false positives) wastes the promotion budget.

Give Examples where both false positive and false negatives are equally important? 

In the Banking industry giving loans is the primary source of making money but at the same time if your repayment rate is not good you will not make any profit, rather you will risk huge losses.
Banks don’t want to lose good customers and at the same point in time, they don’t want to acquire bad customers. In this scenario, both the false positives and false negatives become very important to measure.

What is the Difference between a Validation Set and a Test Set?

A Training Set:
  • used to fit the parameters, i.e. the weights.

A Validation Set:
  • part of the training set
  • used for parameter selection
  • helps to avoid overfitting

A Test Set:
  • used for testing or evaluating the performance of a trained machine learning model, i.e. its predictive power and generalization.

What is cross-validation?

Reference: k-fold cross validation 

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation. It is mainly used in settings where the objective is prediction and one wants to estimate how accurately a model will perform in practice.

Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

The general procedure is as follows:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups
3. For each unique group:
a. Take the group as a hold out or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores

Data Science Cross Validation

Scikit-Learn also provides an alternative called stratified k-fold, in which the split preserves the class proportions so that each fold contains a representative sample of every class; a plain k-fold split gives no such guarantee, which can be a problem with a very unbalanced dataset.
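A minimal scikit-learn sketch of 10-fold and stratified 10-fold cross-validation (the iris dataset and logistic regression model are illustrative choices):

  # Sketch: k-fold vs. stratified k-fold cross-validation scores.
  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score, KFold, StratifiedKFold

  X, y = load_iris(return_X_y=True)
  model = LogisticRegression(max_iter=1000)

  scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
  strat  = cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))

  print(scores.mean(), strat.mean())   # average skill estimate from each scheme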

What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data. You select a model to train and then manually perform feature extraction. Used to devise complex models and algorithms that lend themselves to a prediction which in commercial use is known as predictive analytics.

What is Supervised Learning? 

Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples.

Algorithms: Support Vector Machines, Regression, Naive Bayes, Decision Trees, K-nearest Neighbor Algorithm and Neural Networks

Example: If you built a fruit classifier, the labels will be “this is an orange, this is an apple and this is a banana”, based on showing the classifier examples of apples, oranges and bananas.

What is Unsupervised learning?

Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.

Algorithms: Clustering, Anomaly Detection, Neural Networks and Latent Variable Models

Example: In the same example, a fruit clustering will categorize as “fruits with soft skin and lots of dimples”, “fruits with shiny hard skin” and “elongated yellow fruits”.

What are the various Machine Learning algorithms?

Machine Learning Algorithms

What is “Naive” in a Naive Bayes?

Reference: Naive Bayes Classifier on Wikipedia

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of conditional independence between every pair of features given the value of the class variable. For a class variable y and a dependent feature vector X1 through Xn, Bayes' theorem states that P(y | X1, …, Xn) = P(y) · P(X1, …, Xn | y) / P(X1, …, Xn).

Machine Learning Algorithms: Naive Bayes

What is PCA (Principal Component Analysis)? When do you use it?

Reference: PCA on wikipedia

Principal component analysis (PCA) is a statistical method used in Machine Learning. It consists in projecting data in a higher dimensional space into a lower dimensional space by maximizing the variance of each dimension.

The process works as follows. We define a matrix A with n rows (the single observations of the dataset – in a tabular format, each single row) and p columns, our features. For this matrix we construct a variable space with as many dimensions as there are features. Each feature represents one coordinate axis. For each feature, the length has been standardized according to a scaling criterion, normally by scaling to unit variance. It is essential to scale the features to a common scale, otherwise the features with a greater magnitude will weigh more in determining the principal components. Once all the observations are plotted and the mean of each variable computed, that mean is represented by a point in the center of the plot (the center of gravity). Then, we subtract the mean from each observation, shifting the coordinate system so that its center is at the origin. The resulting best-fitting line is the line that best accounts for the shape of the point swarm. It represents the maximum variance direction in the data. Each observation may be projected onto this line in order to get a coordinate value along the PC line. This value is known as a score. The next best-fitting line can be similarly chosen from directions perpendicular to the first.
Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components.

Machine Learning Algorithms PCA

PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.
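A minimal scikit-learn sketch (the iris dataset is an illustrative choice): features are scaled to unit variance, as described above, and then projected onto the first two principal components.

  # Sketch: PCA on standardized features.
  from sklearn.datasets import load_iris
  from sklearn.preprocessing import StandardScaler
  from sklearn.decomposition import PCA

  X, _ = load_iris(return_X_y=True)

  X_scaled = StandardScaler().fit_transform(X)   # scale each feature to unit variance first
  pca = PCA(n_components=2)
  X_2d = pca.fit_transform(X_scaled)             # scores along the first two principal components

  print(pca.explained_variance_ratio_)           # variance captured by each component
  print(X_2d[:5])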

SVM (Support Vector Machine) algorithm

Reference: SVM on wikipedia

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support-vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum-margin classifier; or equivalently, the perceptron of optimal stability. In the classic illustration with candidate hyperplanes H1, H2 and H3, the best hyperplane that divides the data is H3.

  • SVMs are helpful in text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
  • Some methods for shallow semantic parsing are based on support vector machines.
  • Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
  • Classification of satellite data like SAR data using supervised SVM.
  • Hand-written characters can be recognized using SVM.

What are the support vectors in SVM? 

Machine Learning Algorithms Support Vectors

In the diagram, we see that the sketched lines mark the distance from the classifier (the hyper plane) to the closest data points called the support vectors (darkened data points). The distance between the two thin lines is called the margin.

To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max(0, 1 − yi(w · xi − b)). This function is zero if xi lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin.

What are the different kernels in SVM?

There are four commonly used kernel types in SVM (a short sketch using each one follows this list):
1. Linear kernel
2. Polynomial kernel
3. Radial basis function (RBF) kernel
4. Sigmoid kernel
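The promised sketch, assuming scikit-learn's SVC and the iris dataset (both illustrative choices): each of the four kernels is cross-validated in turn.

  # Sketch: trying the four kernel types with SVC.
  from sklearn.datasets import load_iris
  from sklearn.model_selection import cross_val_score
  from sklearn.svm import SVC

  X, y = load_iris(return_X_y=True)

  for kernel in ("linear", "poly", "rbf", "sigmoid"):
      scores = cross_val_score(SVC(kernel=kernel, C=1.0), X, y, cv=5)
      print(kernel, scores.mean())    # cross-validated accuracy per kernel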

What are the most known ensemble algorithms? 

Reference: Ensemble Algorithms

The most popular tree-based ensemble methods are AdaBoost, Random Forest, and eXtreme Gradient Boosting (XGBoost).

AdaBoost is best used in a dataset with low noise, when computational complexity or timeliness of results is not a main concern and when there are not enough resources for broader hyperparameter tuning due to lack of time and knowledge of the user.

Random forests should not be used when dealing with time series data or any other data where look-ahead bias should be avoided, and the order and continuity of the samples need to be ensured. This algorithm can handle noise relatively well, but more knowledge from the user is required to adequately tune the algorithm compared to AdaBoost.

The main advantages of XGBoost are its speed compared to other algorithms, such as AdaBoost, and its regularization parameter, which successfully reduces variance. Beyond the regularization parameter, this algorithm also leverages a learning rate (shrinkage) and subsamples from the features like random forests, which increases its ability to generalize even further. However, XGBoost is more difficult to understand, visualize and tune compared to AdaBoost and random forests. There is a multitude of hyperparameters that can be tuned to increase performance.
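A hedged comparison sketch using scikit-learn (the breast-cancer dataset is an illustrative choice, and scikit-learn's GradientBoostingClassifier stands in here for XGBoost, which lives in a separate library):

  # Sketch: cross-validating three ensemble methods on the same data.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, GradientBoostingClassifier
  from sklearn.model_selection import cross_val_score

  X, y = load_breast_cancer(return_X_y=True)

  for model in (AdaBoostClassifier(), RandomForestClassifier(), GradientBoostingClassifier()):
      print(type(model).__name__, cross_val_score(model, X, y, cv=5).mean())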

What is Deep Learning?

Deep Learning is a paradigm of machine learning that has shown incredible promise in recent years, largely because it is loosely inspired by the way neurons in the human brain function.

Deep Learning

What is the difference between machine learning and deep learning?

Deep learning & Machine learning: what’s the difference?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorized in the following four categories.
1. Supervised machine learning,
2. Semi-supervised machine learning,
3. Unsupervised machine learning,
4. Reinforcement learning.

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

Machine Learning vs Deep Learning

• The main difference between deep learning and machine learning is the way data is presented to the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANNs (artificial neural networks).

• Machine learning algorithms are designed to "learn" to act by understanding labeled data and then use it to produce new results with more datasets. However, when the result is incorrect, there is a need to "teach them". Because machine learning algorithms require labeled data, they are not suitable for solving complex queries that involve a huge amount of data.

• Deep learning networks do not require human intervention, as multilevel layers in neural networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes. However, even they can be wrong if the data quality is not good enough.

• Data decides everything. It is the quality of the data that ultimately determines the quality of the result.

• Both of these subsets of AI are somehow connected to data, which makes it possible to represent a certain form of “intelligence.” However, you should be aware that deep learning requires much more data than a traditional machine learning algorithm. The reason for this is that deep learning networks can identify different elements in neural network layers only when more than a million data points interact. Machine learning algorithms, on the other hand, are capable of learning by pre-programmed criteria.

What is the reason for the popularity of Deep Learning in recent times? 

Now although Deep Learning has been around for many years, the major breakthroughs from these techniques came only in recent years. This is because of two main reasons:
• The increase in the amount of data generated through various sources
• The growth in hardware resources required to run these models
GPUs are many times faster than CPUs and help us build bigger and deeper deep learning models in comparatively less time than was previously required.

What is reinforcement learning?

Reinforcement Learning allows an agent to take actions that maximize its cumulative reward. It learns by trial and error through a reward/penalty system: the environment rewards the agent, so over time the agent makes better decisions.
Example: robot = agent, maze = environment. It is used for complex tasks such as self-driving cars and game AI.

RL is a series of time steps in a Markov Decision Process (a minimal Q-learning update is sketched after the list):

1. Environment: the space in which the RL agent operates
2. State: data describing the situation resulting from the agent's past actions
3. Action: the action taken by the agent
4. Reward: a number received by the agent after its last action
5. Observation: data about the environment, which can be fully visible or partially hidden
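
A minimal NumPy sketch of one tabular Q-learning update, one common way these time steps are turned into learning; the states, reward and hyperparameters are hypothetical:

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))     # the agent's value table
alpha, gamma = 0.1, 0.9                 # learning rate and discount factor

# one observed transition: (state, action, reward, next_state)
s, a, r, s_next = 0, 1, 1.0, 2
Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
print(Q[s, a])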

What are Artificial Neural Networks?

Artificial Neural Networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural networks can adapt to changing inputs, so the network generates the best possible result without needing to redesign the output criteria.

Artificial Neural Networks work on the same principle as a biological neural network. They consist of inputs which get processed with weighted sums and a bias, with the help of activation functions.

Machine Learning Artificial Neural Network

How Are Weights Initialized in a Network?

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.
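
A small NumPy sketch contrasting the two options for one dense layer (the layer sizes are arbitrary):

import numpy as np

n_in, n_out = 4, 3
w_zeros = np.zeros((n_in, n_out))               # every neuron computes the same thing
w_random = np.random.randn(n_in, n_out) * 0.01  # small random values break the symmetry
print(w_zeros.std(), w_random.std())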

What Is the Cost Function? 

Also referred to as “loss” or “error,” the cost function is a measure of how good your model’s performance is. It’s used to compute the error of the output layer during backpropagation; we push that error backwards through the neural network and use it during training.
The best-known cost function is the mean squared error.

Machine Learning Cost Function
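
A tiny NumPy sketch of the mean squared error on made-up predictions:

import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.375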

What Are Hyperparameters?

With neural networks, you’re usually working with hyperparameters once the data is formatted correctly.
A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, batches, etc.).

What Will Happen If the Learning Rate is Set inaccurately (Too Low or Too High)? 

When your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.
If the learning rate is set too high, drastic weight updates cause undesirable divergent behavior in the loss function: the model may fail to converge (it never settles on a good output) or even diverge (the updates are too chaotic for the network to train).

What Is The Difference Between Epoch, Batch, and Iteration in Deep Learning? 

Epoch – Represents one iteration over the entire dataset (everything put into the training model).
Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
Iteration – one pass over a single batch. If we have 10,000 images and a batch size of 200, then an epoch runs 50 iterations (10,000 divided by 200).

What Are the Different Layers on CNN?

Reference: Layers of CNN 

Machine Learning Layers of CNN

Convolutional neural networks are regularized versions of the multilayer perceptron (MLP). They were developed based on the working of the neurons of the animal visual cortex.

The objective of using the CNN:

The idea is that you give the computer this array of numbers and it will output numbers that describe the probability of the image being a certain class (.80 for a cat, .15 for a dog, .05 for a bird, etc.). It works similar to how our brain works. When we look at a picture of a dog, we can classify it as such if the picture has identifiable features such as paws or 4 legs. In a similar way, the computer is able to perform image classification by looking for low-level features such as edges and curves and then building up to more abstract concepts through a series of convolutional layers. The computer uses low-level features obtained at the initial levels to generate high-level features such as paws or eyes to identify the object.

There are four layers in CNN:
1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
2. Activation Layer (ReLU Layer) – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map. It follows each convolutional layer.
3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map. The stride determines how far the window slides at each step, and max pooling takes the maximum of each n x n window.
4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.
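
The four layer types above can be wired together in a few lines; here is a minimal sketch assuming TensorFlow/Keras is installed, with an arbitrary 28x28 grayscale input:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # convolution + ReLU
    layers.MaxPooling2D(pool_size=(2, 2)),                                   # pooling (down-sampling)
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                                  # fully connected classifier
])
model.summary()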

Q60: What Is Pooling on CNN, and How Does It Work?

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.

What are Recurrent Neural Networks (RNNs)? 

Reference: RNNs

RNNs are a type of artificial neural network designed to recognize patterns in sequential data such as time series, stock market data, data from government agencies, etc.

Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes in a fixed size vector as input which limits its usage in situations that involve a ‘series’ type input with no predetermined size.

Machine Learning RNN

RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask: what’s the big deal, can’t I just call a regular NN repeatedly?

Machine Learning Regular NN

Sure you can, but the ‘series’ part of the input means something. A single input item from the series is related to the others and likely has an influence on its neighbors; otherwise it’s just “many” inputs, not a “series” input.
Recurrent Neural Network remembers the past and its decisions are influenced by what it has learnt from the past. Note: Basic feed forward networks “remember” things too, but they remember things they learnt during training. For example, an image classifier learns what a “1” looks like during training and then uses that knowledge to classify things in production.
While RNNs learn similarly while training, in addition, they remember things learnt from prior input(s) while generating output(s). RNNs can take one or more input vectors and produce one or more output vectors and the output(s) are influenced not just by weights applied on inputs like a regular NN, but also by a “hidden” state vector representing the context based on prior input(s)/output(s). So, the same input could produce a different output depending on previous inputs in the series.

Machine Learning Vanilla NN

In summary, in a vanilla neural network, a fixed size input vector is transformed into a fixed size output vector. Such a network becomes “recurrent” when you repeatedly apply the transformations to a series of given input and produce a series of output vectors. There is no pre-set limitation to the size of the vector. And, in addition to generating the output which is a function of the input and hidden state, we update the hidden state itself based on the input and use it in processing the next input.
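
A bare-bones NumPy sketch of a vanilla RNN cell carrying a hidden state across a short input series (the sizes and inputs are arbitrary):

import numpy as np

hidden_size, input_size = 4, 3
Wxh = np.random.randn(hidden_size, input_size) * 0.1
Whh = np.random.randn(hidden_size, hidden_size) * 0.1
bh = np.zeros(hidden_size)

h = np.zeros(hidden_size)                    # the hidden state: the network's "memory"
series = [np.random.randn(input_size) for _ in range(5)]
for x in series:
    h = np.tanh(Wxh @ x + Whh @ h + bh)      # the same weights are reused at every time step
print(h)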

What is the role of the Activation Function?

The activation function introduces non-linearity into the neural network, helping it learn more complex functions. Without it, the neural network would only be able to learn linear functions, i.e. linear combinations of its input data. An activation function is the function in an artificial neuron that delivers an output based on the inputs.
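
Two common activation functions, sketched in NumPy for illustration:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 3.0])   # a neuron's weighted sums
print(relu(z), sigmoid(z))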

Machine Learning libraries for various purposes

Machine Learning Libraries

What is an Auto-Encoder?

Reference: Auto-Encoder

Auto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabeled input which is then encoded to reconstruct the input. 

An autoencoder is a type of artificial neural network used to learn efficient data coding in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants exist to the basic model, with the aim of forcing the learned representations of the input to assume useful properties.
Autoencoders are effectively used for solving many applied problems, from face recognition to acquiring the semantic meaning of words.

Machine Learning Auto_Encoder
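
A minimal dense autoencoder sketch, assuming TensorFlow/Keras is installed; the 784-dimensional input and 32-unit bottleneck are arbitrary choices:

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)       # encoder: the bottleneck is smaller than the input
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # decoder: reconstructs the input

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10)  # note: the input is also the target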

What is a Boltzmann Machine?

Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimize the weights and the quantity for the given problem. The learning algorithm is very slow in networks with many layers of feature detectors. The “Restricted Boltzmann Machine” algorithm has a single layer of feature detectors, which makes it faster than the rest.

Machine Learning Boltzmann Machine

What Is Dropout and Batch Normalization?

Dropout is a technique of randomly dropping out hidden and visible nodes of a network to prevent overfitting (typically dropping about 20 percent of the nodes). It roughly doubles the number of iterations needed for the network to converge, but it improves the model’s ability to generalize.

Batch normalization is a technique to improve the performance and stability of neural networks by normalizing the inputs to every layer so that they have a mean output activation of zero and a standard deviation of one.
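
Both techniques are available as single layers in most frameworks; a minimal sketch assuming TensorFlow/Keras is installed:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.BatchNormalization(),   # normalizes the layer's activations
    layers.Dropout(0.2),           # randomly drops 20% of the units during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")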

Why Is TensorFlow the Most Preferred Library in Deep Learning?

TensorFlow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and PyTorch. TensorFlow supports both CPU and GPU computing devices.

What is Tensor in TensorFlow?

A tensor is a mathematical object represented as an array of higher dimensions; think of an n-dimensional matrix. These arrays of data, with different dimensions and ranks, fed as input to the neural network are called “Tensors.”
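
For example, a rank-2 tensor can be created like this (assuming TensorFlow is installed):

import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(t.shape, t.dtype)   # (2, 2), float32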

What is the Computational Graph?

Everything in TensorFlow is based on creating a computational graph: a network of nodes where each node performs an operation. Nodes represent mathematical operations, and edges represent the tensors flowing between them. Since data flows through the graph, it is also called a “DataFlow Graph.”

What is logistic regression?

• Logistic Regression models a function of the target variable as a linear combination of the predictors, then converts this function into a fitted value in the desired range.

• Binary or Binomial Logistic Regression can be understood as the type of Logistic Regression that deals with scenarios wherein the observed outcomes for dependent variables can be only in binary, i.e., it can have only two possible types.

• Multinomial Logistic Regression works in scenarios where the outcome can have more than two possible types – type A vs type B vs type C – that are not in any particular order.

How is logistic regression done? 

Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
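
A small NumPy sketch of that idea, with made-up coefficients standing in for a fitted model:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([0.8, -0.4]), 0.1     # hypothetical fitted coefficients
x = np.array([2.0, 1.0])              # one observation's features
probability = sigmoid(w @ x + b)      # linear combination squashed into (0, 1)
print(probability)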

Explain the steps in making a decision tree. 

1. Take the entire data set as input
2. Calculate entropy of the target variable, as well as the predictor attributes
3. Calculate your information gain of all attributes (we gain information on sorting different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is finalized
For example, let’s say you want to build a decision tree to decide whether you should accept or decline a job offer. The decision tree for this case is as shown:

Machine Learning Decision Tree

It is clear from the decision tree that an offer is accepted if:
• Salary is greater than $50,000
• The commute is less than an hour
• Coffee is offered

How do you build a random forest model?

A random forest is built up of a number of decision trees. If you split the data into different packages and make a decision tree in each of the different groups of data, the random forest brings all those trees together.

Steps to build a random forest model:

1. Randomly select k features from a total of m features, where k << m
2. Among the k features, calculate the node d using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build the forest by repeating steps one to four n times to create n trees
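
In practice these steps are handled internally by library implementations; a minimal sketch assuming scikit-learn is installed:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
# max_features="sqrt" means each split considers a random subset of k << m features
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X, y)
print(forest.score(X, y))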

Differentiate between univariate, bivariate, and multivariate analysis. 

Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.

Machine Learning Univariate Data

The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.

Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.

Bivariate data

Here, it is visible from the table that temperature and sales are directly proportional to each other: the hotter the temperature, the better the sales.

Data that involves three or more variables is categorized as multivariate. It is similar to bivariate data but contains more than one dependent variable.

Example: data for house price prediction
The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion or range, minimum, maximum, etc. You can start describing the data and using it to guess what the price of the house will be.

What are the feature selection methods used to select the right variables?

There are two main methods for feature selection.
Filter Methods
This involves:
• Linear discrimination analysis
• ANOVA
• Chi-Square
The best analogy for selecting features is “bad data in, bad answer out.” When we’re limiting or selecting the features, it’s all about cleaning up the data coming in.

Wrapper Methods
This involves:
• Forward Selection: We test one feature at a time and keep adding them until we get a good fit
• Backward Selection: We test all the features and start removing them to see what works better
• Recursive Feature Elimination: Recursively looks through all the different features and how they pair together

Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.
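
A short sketch of one filter method and one wrapper method, assuming scikit-learn is installed (the iris data and k = 2 are illustrative):

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_filtered = SelectKBest(chi2, k=2).fit_transform(X, y)                          # filter method (chi-square)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2).fit(X, y)   # wrapper method
print(X_filtered.shape, rfe.support_)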

You are given a data set consisting of variables with more than 30 percent missing values. How will you deal with them? 

If the data set is large, we can simply remove the rows with missing values. It is the quickest way, and we then use the rest of the data to build the model.

For smaller data sets, we can impute missing values with the mean, median, or mode of the rest of the data using a pandas DataFrame in Python. There are different ways to do so, for example: df.mean(), df.fillna(df.mean())

Other option of imputation is using KNN for numeric or classification values (as KNN just uses k closest values to impute the missing value).
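
A small sketch of these options, assuming pandas and scikit-learn are installed; the toy DataFrame is made up:

import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({"age": [25, np.nan, 40, 31], "income": [50, 62, np.nan, 58]})
dropped = df.dropna()                                       # remove rows with missing values
mean_filled = df.fillna(df.mean())                          # impute with the column mean
knn_filled = KNNImputer(n_neighbors=2).fit_transform(df)    # impute from the k closest rows
print(mean_filled, knn_filled, sep="\n")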

How will you calculate the Euclidean distance in Python?

plot1 = [1,3]

plot2 = [2,5]

The Euclidean distance can be calculated as follows:

from math import sqrt
euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)

What are dimensionality reduction and its benefits? 

Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.

This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there’s no point in storing a value in two different units (meters and inches).
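
PCA is one common way to do this; a minimal sketch assuming scikit-learn is installed:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
X_reduced = PCA(n_components=2).fit_transform(X)   # 4 features compressed into 2 components
print(X.shape, "->", X_reduced.shape)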

How should you maintain a deployed model?

The steps to maintain a deployed model are (CREM):

1. Monitor: constant monitoring of all models is needed to determine their performance accuracy.
When you change something, you want to figure out how your changes are going to affect things.
This needs to be monitored to ensure it’s doing what it’s supposed to do.
2. Evaluate: evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
3. Compare: the new models are compared to each other to determine which model performs the best.
4. Rebuild: the best performing model is re-built on the current state of data.

How can time-series data be declared stationary?

  1. The mean of the series should not be a function of time.
Machine Learning Stationary Time Series Data: Mean
  2. The variance of the series should not be a function of time. This property is known as homoscedasticity.
Machine Learning Stationary Time Series Data: Variance
  3. The covariance of the i-th term and the (i+m)-th term should not be a function of time.
Machine Learning Stationary Time Series Data: Covariance
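
One common way to check these conditions is the Augmented Dickey-Fuller test; a sketch assuming statsmodels and NumPy are installed, run on two synthetic series:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
stationary_series = rng.normal(size=200)            # constant mean and variance over time
trending_series = np.cumsum(rng.normal(size=200))   # the mean drifts, so it is non-stationary

for name, series in [("stationary", stationary_series), ("trending", trending_series)]:
    p_value = adfuller(series)[1]
    print(name, "p-value:", round(p_value, 4))      # a small p-value suggests stationarity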

‘People who bought this also bought…’ recommendations seen on Amazon are a result of which algorithm?

The recommendation engine is accomplished with collaborative filtering. Collaborative filtering explains the behavior of other users and their purchase history in terms of ratings, selection, etc.
The engine makes predictions on what might interest a person based on the preferences of other users. In this algorithm, item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.
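
A toy item-item collaborative filtering sketch in NumPy; the purchase matrix is entirely hypothetical:

import numpy as np

# rows = users, columns = items (1 = purchased)
purchases = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
])
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)  # item-item cosine similarity
print(np.round(similarity, 2))   # items frequently bought together get high similarity scores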

What is a Generative Adversarial Network?

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine. The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.
The forger’s goal is to create wines that are indistinguishable from the authentic ones while the shop owner intends to tell if the wine is real or not accurately.

Machine Learning GAN illustration

• There is a noise vector coming into the forger who is generating fake wine.
• Here the forger acts as a Generator.
• The shop owner acts as a Discriminator.
• The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine.
The shop owner has to figure out whether it is real or fake.

So, there are two primary components of Generative Adversarial Network (GAN) named:
1. Generator
2. Discriminator

The generator is a CNN that keeps producing images that look closer and closer to the real images, while the discriminator tries to tell the difference between real and fake images. The ultimate aim is to make the discriminator learn to identify real and fake images.

You are given a dataset on cancer detection. You have built a classification model and achieved an accuracy of 96 percent. Why shouldn’t you be happy with your model performance? What can you do about it?

Cancer detection results in imbalanced data. In an imbalanced dataset, accuracy should not be used as a measure of performance. It is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient’s prognosis.

Hence, to evaluate model performance, we should use sensitivity (true positive rate), specificity (true negative rate), and the F measure to determine the class-wise performance of the classifier.
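
To make the point concrete, here is a sketch assuming scikit-learn is installed, with hypothetical labels where the classifier misses every positive case:

from sklearn.metrics import accuracy_score, f1_score, recall_score

y_true = [0] * 96 + [1] * 4      # 4% positive (cancer) cases
y_pred = [0] * 100               # a useless model that predicts "healthy" for everyone
print("accuracy:", accuracy_score(y_true, y_pred))            # 0.96, looks impressive
print("sensitivity (recall):", recall_score(y_true, y_pred))  # 0.0, exposes the failure
print("F1:", f1_score(y_true, y_pred, zero_division=0))       # 0.0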

We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?

The most appropriate algorithm for this case is logistic regression.

After studying the behavior of a population, you have identified four specific individual types that are valuable to your study. You would like to find all users who are most similar to each individual type. Which algorithm is most appropriate for this study? 

As we are looking for grouping people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering is the most appropriate algorithm for this study.
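
A minimal sketch assuming scikit-learn is installed, clustering hypothetical behavioral features into k = 4 groups:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
behavior = rng.normal(size=(200, 5))               # made-up per-user behavioral features
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(behavior)
print(kmeans.labels_[:10])                         # cluster assignment for the first ten users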

You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true? 

{grape, apple} must be a frequent itemset.

Your organization has a website where visitors randomly receive one of two coupons. It is also possible that visitors to the website will not receive a coupon. You have been asked to determine if offering a coupon to website visitors has any impact on their purchase decisions. Which analysis method should you use?

One-way ANOVA: in statistics, one-way analysis of variance is a technique used to compare the means of two or more samples. This technique can be used only for numerical response data, the “Y”, usually one variable, and numerical or categorical input data, the “X”, always one variable, hence “one-way”.
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
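
A small sketch assuming SciPy is installed, with made-up purchase amounts for the three coupon groups:

from scipy import stats

no_coupon = [20, 23, 21, 25, 22]
coupon_a = [28, 27, 30, 26, 29]
coupon_b = [24, 26, 25, 27, 23]
f_stat, p_value = stats.f_oneway(no_coupon, coupon_a, coupon_b)
print(f_stat, p_value)   # a small p-value suggests the group means differ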

What are the feature vectors?

A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that’s easy to analyze.

What is root cause analysis?

Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique used for isolating the root causes of faults or problems. A factor is called a root cause if removing it from the problem-fault sequence prevents the final undesirable event from recurring.

Do gradient descent methods always converge to similar points?

They do not, because in some cases, they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions.

 In your choice of language, write a program that prints the numbers ranging from one to 50. But for multiples of three, print “Fizz” instead of the number and for the multiples of five, print “Buzz.” For numbers which are multiples of both three and five, print “FizzBuzz.”

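One straightforward Python answer:

for n in range(1, 51):
    if n % 15 == 0:          # multiple of both three and five
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)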

What are the different Deep Learning Frameworks?

PyTorch: PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.
TensorFlow: TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. It is licensed under Apache License 2.0 and developed by the Google Brain team.
Microsoft Cognitive Toolkit: Microsoft Cognitive Toolkit describes neural networks as a series of computational steps via a directed graph.
Keras: Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. It is licensed under the MIT license.

Data Sciences and Data Mining Glossary

Credit: Dr. Matthew North
Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Archived Data: Data which have been copied out of a live production database and into a data warehouse or other permanent system where they can be accessed and analyzed, but not by primary operational business systems.

Association Rules: A data mining methodology which compares attributes in a data set across all observations to identify areas where two or more attributes are frequently found together. If their frequency of coexistence is high enough throughout the data set, the association of those attributes can be said to be a rule.

Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values.

Binomial: A data type for any set of values that is limited to one of two numeric options.

Binominal: In RapidMiner, the data type binominal is used instead of binomial, enabling both numerical and character-based sets of values that are limited to one of two options.

Business Understanding: See Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Case Sensitive: A situation where a computer program recognizes the uppercase version of a letter or word as being different from the lowercase version of the same letter or word.

Classification: One of the two main goals of conducting data mining activities, with the other being prediction. Classification creates groupings in a data set based on the similarity of the observations’ attributes. Some data mining methodologies, such as decision trees, can predict an observation’s classification.

Code: Code is the result of a computer worker’s work. It is a set of instructions, typed in a specific grammar and syntax, that a computer can understand and execute. According to Lawrence Lessig, it is one of four methods humans can use to set and control boundaries for behavior when interacting with computer systems.

Coefficient: In data mining, a coefficient is a value that is calculated based on the values in a data set that can be used as a multiplier or as an indicator of the relative strength of some attribute or component in a data mining model.

Column: See Attribute. In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Comma Separated Values (CSV): A common text-based format for data sets where the divisions between attributes (columns of data) are indicated by commas. If commas occur naturally in some of the values in the data set, a CSV file will misunderstand these to be attribute separators, leading to misalignment of attributes.

Conclusion: See Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Confidence (Alpha) Level: A value, usually 5% or 0.05, used to test for statistical significance in some data mining methods. If statistical significance is found, a data miner can say that there is a 95% likelihood that a calculated or predicted value is not a false positive.

Confidence Percent: In predictive data mining, this is the percent of calculated confidence that the model has calculated for one or more possible predicted values. It is a measure for the likelihood of false positives in predictions. Regardless of the number of possible predicted values, their collective confidence percentages will always total to 100%.

Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Correlation: A statistical measure of the strength of affinity, based on the similarity of observational values, of the attributes in a data set. These can be positive (as one attribute’s values go up or down, so too does the correlated attribute’s values); or negative (correlated attributes’ values move in opposite directions). Correlations are indicated by coefficients which fall on a scale between -1 (complete negative correlation) and 1 (complete positive correlation), with 0 indicating no correlation at all between two attributes.

CRISP-DM: An acronym for Cross-Industry Standard Process for Data Mining. This process was jointly developed by several major multi-national corporations around the turn of the new millennium in order to standardize the approach to mining data. It is comprised of six cyclical steps: Business (Organizational) Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, Deployment.

Cross-validation: A method of statistically evaluating a training data set for its likelihood of producing false positives in a predictive data mining model.

Data: Data are any arrangement and compilation of facts. Data may be structured (e.g. arranged in columns (attributes) and rows (observations)), or unstructured (e.g. paragraphs of text, computer log file).

Data Analysis: The process of examining data in a repeatable and structured way in order to extract meaning, patterns or messages from a set of data.

Data Mart: A location where data are stored for easy access by a broad range of people in an organization. Data in a data mart are generally archived data, enabling analysis in a setting that does not impact live operations.

Data Mining: A computational process of analyzing data sets, usually large in nature, using both statistical and logical methods, in order to uncover hidden, previously unknown, and interesting patterns that can inform organizational decision making.

Data Preparation: The third in the six steps of CRISP-DM. At this stage, the data miner ensures that the data to be mined are clean and ready for mining. This may include handling outliers or other inconsistent data, dealing with missing values, reducing attributes or observations, setting attribute roles for modeling, etc.

Data Set: Any compilation of data that is suitable for analysis.

Data Type: In a data set, each attribute is assigned a data type based on the kind of data stored in the attribute. There are many data types which can be generalized into one of three areas: Character (Text) based; Numeric; and Date/Time. Within these categories, RapidMiner has several data types. For example, in the Character area, RapidMiner has Polynominal, Binominal, etc.; and in the Numeric area it has Real, Integer, etc.

Data Understanding: The second in the six steps of CRISP-DM. At this stage, the data miner seeks out sources of data in the organization, and works to collect, compile, standardize, define and document the data. The data miner develops a comprehension of where the data have come from, how they were collected and what they mean.

Data Warehouse: A large-scale repository for archived data which are available for analysis. Data in a data warehouse are often stored in multiple formats (e.g. by week, month, quarter and year), facilitating large scale analyses at higher speeds. The data warehouse is populated by extracting data from operational systems so that analyses do not interfere with live business operations.

Database: A structured organization of facts that is organized such that the facts can be reliably and repeatedly accessed. The most common type of database is a relational database, in which facts (data) are arranged in tables of columns and rows. The data are then accessed using a query language, usually SQL (Structured Query Language), in order to extract meaning from the tables.

Decision Tree: A data mining methodology where leaves and nodes are generated to construct a predictive tree, whereby a data miner can see the attributes which are most predictive of each possible outcome in a target (label) attribute.

Denormalization: The process of removing relational organization from data, reintroducing redundancy into the data, but simultaneously eliminating the need for joins in a relational database, enabling faster querying.

Dependent Variable (Attribute): The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Deployment: The sixth and final of the six steps of CRISP-DM. At this stage, the data miner takes the results of data mining activities and puts them into practice in the organization. The data miner watches closely and collects data to determine if the deployment is successful and ethical. Deployment can happen in stages, such as through pilot programs before a full-scale roll out.

Descartes’ Rule of Change: An ethical framework set forth by Rene Descartes which states that if an action cannot be taken repeatedly, it cannot be ethically taken even once.

Design Perspective: The view in RapidMiner where a data miner adds operators to a data mining stream, sets those operators’ parameters, and runs the model.

Discriminant Analysis: A predictive data mining model which attempts to compare the values of all observations across all attributes and identify where natural breaks occur from one category to another, and then predict which category each observation in the data set will fall into.

Ethics: A set of moral codes or guidelines that an individual develops to guide his or her decision making in order to make fair and respectful decisions and engage in right actions. Ethical standards are higher than legally required minimums.

Evaluation: The fifth of the six steps of CRISP-DM. At this stage, the data miner reviews the results of the data mining model, interprets results and determines how useful they are. He or she may also conduct an investigation into false positives or other potentially misleading results.

False Positive: A predicted value that ends up not being correct.

Field: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Frequency Pattern: A recurrence of the same, or similar, observations numerous times in a single data set.

Fuzzy Logic: A data mining concept often associated with neural networks where predictions are made using a training data set, even though some uncertainty exists regarding the data and a model’s predictions.

Gain Ratio: One of several algorithms used to construct decision tree models.

Gini Index: An algorithm created by Corrado Gini that can be used to generate decision tree models.

Heterogeneity: In statistical analysis, this is the amount of variety found in the values of an attribute.

Inconsistent Data: These are values in an attribute in a data set that are out-of-the-ordinary among the whole set of values in that attribute. They can be statistical outliers, or other values that simply don’t make sense in the context of the ‘normal’ range of values for the attribute. They are generally replaced or removed during the Data Preparation phase of CRISP-DM.

Independent Variable (Attribute): These are attributes that act on the dependent attribute (the target, or label). They are used to help predict the label in a predictive model.

Jittering: The process of adding a small, random decimal to discrete values in a data set so that when they are plotted in a scatter plot, they are slightly apart from one another, enabling the analyst to better see clustering and density.

Join: The process of connecting two or more tables in a relational database together so that their attributes can be accessed in a single query, such as in a view.

Kant’s Categorical Imperative: An ethical framework proposed by Immanuel Kant which states that if everyone cannot ethically take some action, then no one can ethically take that action.

k-Means Clustering: A data mining methodology that uses the mean (average) values of the attributes in a data set to group each observation into a cluster of other observations whose values are most similar to the mean for that cluster.

Label: In RapidMiner, this is the role that must be set in order to use an attribute as the dependent, or target, attribute in a predictive model.

Laws: These are regulatory statutes which have associated consequences that are established and enforced by a governmental agency. According to Lawrence Lessig, these are one of the four methods for establishing boundaries to define and regulate social behavior.

Leaf: In a decision tree data mining model, this is the terminal end point of a branch, indicating the predicted outcome for observations whose values follow that branch of the tree.

Linear Regression: A predictive data mining method which uses the algebraic formula for calculating the slope of a line in order to predict where a given observation will likely fall along that line.

Logistic Regression: A predictive data mining method which uses a quadratic formula to predict one of a set of possible outcomes, along with a probability that the prediction will be the actual outcome.

Markets: A socio-economic construct in which peoples’ buying, selling, and exchanging behaviors define the boundaries of acceptable or unacceptable behavior. Lawrence Lessig offers this as one of four methods for defining the parameters of appropriate behavior.

Mean: See Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values. 

Median: With the Mean and Mode, this is one of three generally used Measures of Central Tendency. It is an arithmetic way of defining what ‘normal’ looks like in a numeric attribute. It is calculated by rank ordering the values in an attribute and finding the one in the middle. If there are an even number of observations, the two in the middle are averaged to find the median.

Meta Data: These are facts that describe the observational values in an attribute. Meta data may include who collected the data, when, why, where, how, how often; and usually include some descriptive statistics such as the range, average, standard deviation, etc.

Missing Data: These are instances in an observation where one or more attributes does not have a value. It is not the same as zero, because zero is a value. Missing data are like Null values in a database, they are either unknown or undefined. These are usually replaced or removed during the Data Preparation phase of CRISP-DM.

Mode: With Mean and Median, this is one of three common Measures of Central Tendency. It is the value in an attribute which is the most common. It can be numerical or text. If an attribute contains two or more values that appear an equal number of times and more than any other values, then all are listed as the mode, and the attribute is said to be Bimodal or Multimodal.

Model: A computer-based representation of real-life events or activities, constructed upon the basis of data which represent those events.

Name (Attribute): This is the text descriptor of each attribute in a data set. In RapidMiner, the first row of an imported data set should be designated as the attribute name, so that these are not interpreted as the first observation in the data set.

Neural Network: A predictive data mining methodology which tries to mimic human brain processes by comparing the values of all attributes in a data set to one another through the use of a hidden layer of nodes. The frequencies with which the attribute values match, or are strongly similar, create neurons which become stronger at higher frequencies of similarity.

n-Gram: In text mining, this is a combination of words or word stems that represents a phrase that may have more meaning or significance than would the single word or stem.

Node: A terminal or mid-point in decision trees and neural networks where an attribute branches or forks away from other terminals or branches because the values represented at that point have become significantly different from all other values for that attribute.

Normalization: In a relational database, this is the process of breaking data out into multiple related tables in order to reduce redundancy and eliminate multivalued dependencies.

Null: The absence of a value in a database. The value is unrecorded, unknown, or undefined. See Missing Values.

Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Online Analytical Processing (OLAP): A database concept where data are collected and organized in a way that facilitates analysis, rather than practical, daily operational work. Evaluating data in a data warehouse is an example of OLAP. The underlying structure that collects and holds the data makes analysis faster, but would slow down transactional work.

Online Transaction Processing (OLTP): A database concept where data are collected and organized in a way that facilitates fast and repeated transactions, rather than broader analytical work. Scanning items being purchased at a cash register is an example of OLTP. The underlying structure that collects and holds the data makes transactions faster, but would slow down analysis.

Operational Data: Data which are generated as a result of day-to-day work (e.g. the entry of work orders for an electrical service company).

Operator: In RapidMiner, an operator is any one of more than 100 tools that can be added to a data mining stream in order to perform some function. Functions range from adding a data set, to setting an attribute’s role, to applying a modeling algorithm. Operators are connected into a stream by way of ports connected by splines.

Organizational Data: These are data which are collected by an organization, often in aggregate or summary format, in order to address a specific question or tell a story. They may be constructed from Operational Data, or added to through other means such as surveys, questionnaires or tests.

Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Parameters: In RapidMiner, these are the settings that control values and thresholds that an operator will use to perform its job. These may be the attribute name and role in a Set Role operator, or the algorithm the data miner desires to use in a model operator.

Port: The input or output required for an operator to perform its function in RapidMiner. These are connected to one another using splines.

Prediction: The target, or label, or dependent attribute that is generated by a predictive model, usually for a scoring data set in a model.

Premise: See Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Privacy: The concept describing a person’s right to be let alone; to have information about them kept away from those who should not, or do not need to, see it. A data miner must always respect and safeguard the privacy of individuals represented in the data he or she mines.

Professional Code of Conduct: A helpful guide or documented set of parameters by which an individual in a given profession agrees to abide. These are usually written by a board or panel of experts and adopted formally by a professional organization.

Query: A method of structuring a question, usually using code, that can be submitted to, interpreted, and answered by a computer.

Record: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Relational Database: A computerized repository, comprised of entities that relate to one another through keys. The most basic and elemental entity in a relational database is the table, and tables are made up of attributes. One or more of these attributes serves as a key that can be matched (or related) to a corresponding attribute in another table, creating the relational effect which reduces data redundancy and eliminates multivalued dependencies.

Repository: In RapidMiner, this is the place where imported data sets are stored so that they are accessible for modeling.

Results Perspective: The view in RapidMiner that is seen when a model has been run. It is usually comprised of two or more tabs which show meta data, data in a spreadsheet-like view, and predictions and model outcomes (including graphical representations where applicable).

Role (Attribute): In a data mining model, each attribute must be assigned a role. The role is the part the attribute plays in the model. It is usually equated to serving as an independent variable (regular), or dependent variable (label).

Row: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Sample: A subset of an entire data set, selected randomly or in a structured way. This usually reduces a data set down, allowing models to be run faster, especially during development and proof-of-concept work on a model.

Scoring Data: A data set with the same attributes as a training data set in a predictive model, with the exception of the label. The training data set, with the label defined, is used to create a predictive model, and that model is then applied to a scoring data set possessing the same attributes in order to predict the label for each scoring observation.

Social Norms: These are the sets of behaviors and actions that are generally tolerated and found to be acceptable in a society. According to Lawrence Lessig, these are one of four methods of defining and regulating appropriate behavior.

Spline: In RapidMiner, these lines connect the ports between operators, creating the stream of a data mining model.

Standard Deviation: One of the most common statistical measures of how dispersed the values in an attribute are. This measure can help determine whether or not there are outliers (a common type of inconsistent data) in a data set.

Standard Operating Procedures: These are organizational guidelines that are documented and shared with employees which help to define the boundaries for appropriate and acceptable behavior in the business setting. They are usually created and formally adopted by a group of leaders in the organization, with input from key stakeholders in the organization.

Statistical Significance: In statistically-based data mining activities, this is the measure of whether or not the model has yielded any results that are mathematically reliable enough to be used. Any model lacking statistical significance should not be used in operational decision making.

Stemming: In text mining, this is the process of reducing like-terms down into a single, common token (e.g. country, countries, country’s, countryman, etc. → countr).

Stopwords: In text mining, these are small words that are necessary for grammatical correctness, but which carry little meaning or power in the message of the text being mined. These are often articles, prepositions or conjunctions, such as ‘a’, ‘the’, ‘and’, etc., and are usually removed in the Process Document operator’s sub-process.

Stream: This is the string of operators in a data mining model, connected through the operators’ ports via splines, that represents all actions that will be taken on a data set in order to mine it.

Structured Query Language (SQL): The set of codes, reserved keywords and syntax defined by the American National Standards Institute used to create, manage and use relational databases.

Sub-process: In RapidMiner, this is a stream of operators set up to apply a series of actions to all inputs connected to the parent operator.

Support Percent: In an association rule data mining model, this is the percent of the time that when the antecedent is found in an observation, the consequent is also found. Since this is calculated as the number of times the two are found together divided by the total number of times they could have been found together, the Support Percent is the same for reciprocal rules.

Table: In data collection, a table is a grid of columns and rows, where in general, the columns are individual attributes in the data set, and the rows are observations across those attributes. Tables are the most elemental entity in relational databases.

Target Attribute: See Label; Dependent Variable: The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Technology: Any tool or process invented by mankind to do or improve work.

Text Mining: The process of data mining unstructured text-based data such as essays, news articles, speech transcripts, etc. to discover patterns of word or phrase usage to reveal deeper or previously unrecognized meaning.

Token (Tokenize): In text mining, this is the process of turning words in the input document(s) into attributes that can be mined.

Training Data: In a predictive model, this data set already has the label, or dependent variable defined, so that it can be used to create a model which can be applied to a scoring data set in order to generate predictions for the latter.

Tuple: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Variable: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

View: A type of pseudo-table in a relational database which is actually a named, stored query. This query runs against one or more tables, retrieving a defined number of attributes that can then be referenced as if they were in a table in the database. Views can limit users’ ability to see attributes to only those that are relevant and/or approved for those users to see. They can also speed up the query process because although they may contain joins, the key columns for the joins can be indexed and cached, making the view’s query run faster than it would if it were not stored as a view. Views can be useful in data mining as data miners can be given read-only access to the view, upon which they can build data mining models, without having to have broader administrative rights on the database itself.

What is the Central Limit Theorem and why is it important?

An Introduction to the Central Limit Theorem

Answer: Suppose that we are interested in estimating the average height among all people. Collecting data for every person in the world is impractical, bordering on impossible. While we can’t obtain a height measurement from everyone in the population, we can still sample some people. The question now becomes, what can we say about the average height of the entire population given a single sample.
The Central Limit Theorem addresses this question exactly. Formally, it states that if we sample from a population using a sufficiently large sample size, the mean of the samples (the sampling distribution of the mean) will be normally distributed (assuming true random sampling), with its mean tending to the mean of the population and its variance equal to the variance of the population divided by the sample size.
What’s especially important is that this will be true regardless of the distribution of the original population.

Central Limit Theorem
Central Limit Theorem: Population Distribution

As we can see, the distribution is pretty ugly. It certainly isn’t normal, uniform, or any other commonly known distribution. In order to sample from the above distribution, we need to define a sample size, referred to as N. This is the number of observations that we will sample at a time. Suppose that we choose N to be 3. This means that we will sample in groups of 3. So for the above population, we might sample groups such as [5, 20, 41], [60, 17, 82], [8, 13, 61], and so on.
Suppose that we gather 1,000 samples of 3 from the above population. For each sample, we can compute its average. If we do that, we will have 1,000 averages. This set of 1,000 averages is called a sampling distribution, and according to the Central Limit Theorem, the sampling distribution will approach a normal distribution as the sample size N used to produce it increases. Here is what our sampling distribution looks like for N = 3.

Sample Mean Distribution with N = 3

As we can see, it certainly looks uni-modal, though not necessarily normal. If we repeat the same process with a larger sample size, we should see the sampling distribution start to become more normal. Let’s repeat the same process again with N = 10. Here is the sampling distribution for that sample size.

Sample Mean Distribution with N = 10

Credit: Steve Nouri

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is error introduced in the model by an algorithm that is too complex: it performs very well on the training set but poorly on the test set. It leads to high sensitivity to the training data and to overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k, which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data, which increases the bias but decreases the variance.
3. The decision tree has low bias and high variance; to increase the bias you can decrease the depth of the tree or use fewer attributes.
4. Linear regression has low variance and high bias; to reduce the bias you can increase the number of features or use another regression that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.
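
For reference, this trade-off is often written as the standard decomposition of expected squared prediction error (a textbook identity, added here for clarity), where $f$ is the true function, $\hat{f}$ the fitted model, and $\sigma^2$ the irreducible noise:

\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{Bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{Variance}}
+ \underbrace{\sigma^2}_{\text{irreducible error}}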

The Best Medium-Hard Data Analyst SQL Interview Questions

compiled by Google Data Analyst Zachary Thomas!

The Best Medium-Hard Data Analyst SQL Interview Questions

Self-Join Practice Problems: MoM Percent Change

Context: Oftentimes it’s useful to know how much a key metric, such as monthly active users, changes between months.
Say we have a table logins in the form:

[Image: sample rows of the logins table]

Task: Find the month-over-month percentage change for monthly active users (MAU).

Solution:
(This solution, like other solution code blocks you will see in this doc, contains comments about SQL syntax that may differ between flavors of SQL or other comments about the solutions as listed)

[Image: original MoM percent change solution query]
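
Since the original solution image isn’t reproduced above, here is one possible sketch. It assumes a logins(user_id, date) table and PostgreSQL-style date functions (DATE_TRUNC, INTERVAL); adjust for your SQL flavor.

-- Monthly active users per month, then self-join each month to the previous one.
WITH monthly_mau AS (
  SELECT DATE_TRUNC('month', date) AS month,
         COUNT(DISTINCT user_id)   AS mau
  FROM logins
  GROUP BY 1
)
SELECT cur.month,
       cur.mau,
       ROUND(100.0 * (cur.mau - prev.mau) / prev.mau, 2) AS mom_pct_change
FROM monthly_mau cur
JOIN monthly_mau prev
  ON cur.month = prev.month + INTERVAL '1 month'
ORDER BY cur.month;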

Tree Structure Labeling with SQL

Context: Say you have a table tree with a column of nodes and a column of corresponding parent nodes

Task: Write SQL such that we label each node as a “Leaf”, “Inner” or “Root” node, such that for the nodes above we get:

A solution which works for the above example will receive full credit, although you can receive extra credit for providing a solution that is generalizable to a tree of any depth (not just depth = 2, as is the case in the example above).

Solution: This solution works for the example above with tree depth = 2, but is not generalizable beyond that.

An alternate solution, that is generalizable to any tree depth:
Acknowledgement: this more generalizable solution was contributed by Fabian Hofmann
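
The contributed query itself isn’t shown above; the following is one generalizable sketch in the same spirit, assuming a tree(node, parent) table. It labels a node by whether it ever appears in the parent column:

SELECT t.node,
       CASE
         WHEN t.parent IS NULL THEN 'Root'   -- no parent: top of the tree
         WHEN p.node IS NULL   THEN 'Leaf'   -- never referenced as a parent
         ELSE 'Inner'                        -- has a parent and has children
       END AS label
FROM tree t
LEFT JOIN (SELECT DISTINCT parent AS node
           FROM tree
           WHERE parent IS NOT NULL) p
  ON p.node = t.node;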

An alternate solution, without explicit joins:
Acknowledgement: William Chargin on 5/2/20 noted that WHERE parent IS NOT NULL is needed to make this solution return Leaf instead of NULL.
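
The original query isn’t shown here either; below is a sketch consistent with the note above (no explicit joins, with the WHERE parent IS NOT NULL guard so NOT IN behaves correctly in the presence of NULLs), again assuming tree(node, parent):

SELECT node,
       CASE
         WHEN parent IS NULL THEN 'Root'
         WHEN node NOT IN (SELECT parent FROM tree WHERE parent IS NOT NULL) THEN 'Leaf'
         ELSE 'Inner'
       END AS label
FROM tree;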

Retained Users Per Month with SQL

Acknowledgement: this problem is adapted from SiSense’s “Using Self Joins to Calculate Your Retention, Churn, and Reactivation Metrics” blog post

PART 1:
Context: Say we have login data in the table logins:

Task: Write a query that gets the number of retained users per month. In this case, retention for a given month is defined as the number of users who logged in that month who also logged in the immediately previous month.

Solution:
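
The solution image isn’t included above, so here is a sketch assuming logins(user_id, date) and PostgreSQL-style dates:

-- A user is retained in a month if they logged in that month and the month before.
SELECT DATE_TRUNC('month', cur.date) AS month,
       COUNT(DISTINCT cur.user_id)   AS retained_users
FROM logins cur
JOIN logins prev
  ON prev.user_id = cur.user_id
 AND DATE_TRUNC('month', prev.date) = DATE_TRUNC('month', cur.date) - INTERVAL '1 month'
GROUP BY 1
ORDER BY 1;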

PART 2:

Task: Now we’ll take retention and turn it on its head: write a query to find how many users last month did not come back this month, i.e. the number of churned users.

Solution:
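
A sketch under the same assumed schema, using a LEFT JOIN anti-join (consistent with the note below about LEFT/RIGHT joins):

-- Users active last month with no login in the current month are churned.
SELECT DATE_TRUNC('month', prev.date) + INTERVAL '1 month' AS month,
       COUNT(DISTINCT prev.user_id)                        AS churned_users
FROM logins prev
LEFT JOIN logins cur
  ON cur.user_id = prev.user_id
 AND DATE_TRUNC('month', cur.date) = DATE_TRUNC('month', prev.date) + INTERVAL '1 month'
WHERE cur.user_id IS NULL
GROUP BY 1
ORDER BY 1;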

Note that there are solutions to this problem that can use LEFT or RIGHT joins.

PART 3:
Context: You now want to see the number of active users this month who have been reactivated — in other words, users who have churned but this month they became active again. Keep in mind a user can reactivate after churning before the previous month. An example of this could be a user active in February (appears in logins), no activity in March and April, but then active again in May (appears in logins), so they count as a reactivated user for May .

Task: Create a table that contains the number of reactivated users per month.

Solution:
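
A sketch under the same assumed schema: a user is reactivated in a month if they are active that month, were not active the previous month, but were active in some earlier month.

WITH monthly AS (
  SELECT DISTINCT user_id,
         DATE_TRUNC('month', date) AS month
  FROM logins
)
SELECT cur.month,
       COUNT(DISTINCT cur.user_id) AS reactivated_users
FROM monthly cur
-- active in some month strictly before the previous month
JOIN monthly earlier
  ON earlier.user_id = cur.user_id
 AND earlier.month < cur.month - INTERVAL '1 month'
-- but not active in the immediately previous month
LEFT JOIN monthly prev
  ON prev.user_id = cur.user_id
 AND prev.month = cur.month - INTERVAL '1 month'
WHERE prev.user_id IS NULL
GROUP BY 1
ORDER BY 1;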

Cumulative Sums with SQL

Acknowledgement: This problem was inspired by Sisense’s “Cash Flow modeling in SQL” blog post
Context: Say we have a table transactions in the form:

Where cash_flow is the revenues minus costs for each day.

Task: Write a query to get cumulative cash flow for each day such that we end up with a table in the form below:

Solution using a window function (more efficient):
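
A sketch assuming a transactions(date, cash_flow) table:

SELECT date,
       SUM(cash_flow) OVER (ORDER BY date
                            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative_cf
FROM transactions
ORDER BY date;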

Alternative Solution (less efficient):
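
And a self-join sketch under the same assumptions, which recomputes the running total for every row (hence less efficient):

SELECT t1.date,
       SUM(t2.cash_flow) AS cumulative_cf
FROM transactions t1
JOIN transactions t2
  ON t2.date <= t1.date
GROUP BY t1.date
ORDER BY t1.date;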

Rolling Averages with SQL

Acknowledgement: This problem is adapted from Sisense’s “Rolling Averages in MySQL and SQL Server” blog post
Note: there are different ways to compute rolling/moving averages. Here we’ll use a preceding average which means that the metric for the 7th day of the month would be the average of the preceding 6 days and that day itself.
Context: Say we have table signups in the form:

Task: Write a query to get 7-day rolling (preceding) average of daily sign ups

Solution1:
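
A self-join sketch assuming a signups(date, sign_ups) table with one row per day and PostgreSQL-style intervals:

SELECT s1.date,
       AVG(s2.sign_ups) AS rolling_avg_7d
FROM signups s1
JOIN signups s2
  ON s2.date BETWEEN s1.date - INTERVAL '6 days' AND s1.date
GROUP BY s1.date
ORDER BY s1.date;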

Solution2: (using windows, more efficient)
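
The window-function version under the same assumptions (ROWS works here because there is exactly one row per day):

SELECT date,
       AVG(sign_ups) OVER (ORDER BY date
                           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS rolling_avg_7d
FROM signups
ORDER BY date;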

Multiple Join Conditions in SQL

Acknowledgement: This problem was inspired by Sisense’s “Analyzing Your Email with SQL” blog post
Context: Say we have a table emails that includes emails sent to and from zach@g.com:

Task: Write a query to get the response time per email (id) sent to zach@g.com. Do not include ids that did not receive a response from zach@g.com. Assume each email thread has a unique subject. Keep in mind a thread may have multiple responses back-and-forth between zach@g.com and another email address.

Solution:
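
A sketch; since the emails table isn’t reproduced above, the column names (id, subject, sender, recipient, sent_at) are assumptions:

-- For each email sent to zach@g.com, the response time is the gap to zach's
-- first later email in the same thread (threads share a unique subject).
SELECT a.id,
       MIN(b.sent_at) - a.sent_at AS response_time
FROM emails a
JOIN emails b
  ON b.subject = a.subject
 AND b.sender  = 'zach@g.com'
 AND b.sent_at > a.sent_at
WHERE a.recipient = 'zach@g.com'
GROUP BY a.id, a.sent_at;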

SQL Window Function Practice Problems

#1: Get the ID with the highest value
Context: Say we have a table salaries with data on employee salary and department in the following format:

Task: Write a query to get the empno with the highest salary. Make sure your solution can handle ties!
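
No solution block appears above, so here is one sketch assuming salaries(depname, empno, salary); RANK() keeps all tied employees:

WITH ranked AS (
  SELECT empno,
         RANK() OVER (ORDER BY salary DESC) AS salary_rank
  FROM salaries
)
SELECT empno
FROM ranked
WHERE salary_rank = 1;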

#2: Average and rank with a window function (multi-part)

PART 1:
Context: Say we have a table salaries in the format:

Task: Write a query that returns the same table, but with a new column that has average salary per depname. We would expect a table in the form:

Solution:
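
A sketch under the same assumed salaries(depname, empno, salary) schema:

SELECT depname,
       empno,
       salary,
       AVG(salary) OVER (PARTITION BY depname) AS avg_salary
FROM salaries;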

PART 2:
Task: Write a query that adds a column with the rank of each employee based on their salary within their department, where the employee with the highest salary gets the rank of 1. We would expect a table in the form:

Solution:
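
Again a sketch under the same assumptions; RANK() gives the highest-paid employee in each department a rank of 1:

SELECT depname,
       empno,
       salary,
       RANK() OVER (PARTITION BY depname ORDER BY salary DESC) AS salary_rank
FROM salaries;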

Predictive Modelling Questions

Source:  datasciencehandbook.me

 

1-  (Given a Dataset) Analyze this dataset and give me a model that can predict this response variable. 

2-  What could be some issues if the distribution of the test data is significantly different than the distribution of the training data?

3-  What are some ways I can make my model more robust to outliers?

4-  What are some differences you would expect in a model that minimizes squared error, versus a model that minimizes absolute error? In which cases would each error metric be appropriate?

5- What error metric would you use to evaluate how good a binary classifier is? What if the classes are imbalanced? What if there are more than 2 groups?

6-  What are various ways to predict a binary response variable? Can you compare two of them and tell me when one would be more appropriate? What’s the difference between these? (SVM, Logistic Regression, Naive Bayes, Decision Tree, etc.)

7-  What is regularization and where might it be helpful? What is an example of using regularization in a model?

8-  Why might it be preferable to include fewer predictors over many?

9-  Given training data on tweets and their retweets, how would you predict the number of retweets of a given tweet after 7 days, having observed only the first 2 days of data?

10-  How could you collect and analyze data to use social media to predict the weather?

11- How would you construct a feed to show relevant content for a site that involves user interactions with items?

12- How would you design the people you may know feature on LinkedIn or Facebook?

13- How would you predict who someone may want to send a Snapchat or Gmail to?

14- How would you suggest to a franchise where to open a new store?

15- In a search engine, given partial data on what the user has typed, how would you predict the user’s eventual search query?

16- Given a database of all previous alumni donations to your university, how would you predict which recent alumni are most likely to donate?

17- You’re Uber and you want to design a heatmap to recommend to drivers where to wait for a passenger. How would you approach this?

18- How would you build a model to predict a March Madness bracket?

19- You want to run a regression to predict the probability of a flight delay, but there are flights with delays of up to 12 hours that are really messing up your model. How can you address this?

 

Data Analysis Interview Questions

Source:  datasciencehandbook.me

1- (Given a Dataset) Analyze this dataset and tell me what you can learn from it.

2- What is R2? What are some other metrics that could be better than R2 and why?

3- What is the curse of dimensionality?

4- Is more data always better?

5- What are advantages of plotting your data before performing analysis?

6- How can you make sure that you don’t analyze something that ends up meaningless?

7- What is the role of trial and error in data analysis? What is the role of making a hypothesis before diving in?

8- How can you determine which features are the most important in your model?

9- How do you deal with some of your predictors being missing?

10- You have several variables that are positively correlated with your response, and you think combining all of the variables could give you a good prediction of your response. However, you see that in the multiple linear regression, one of the weights on the predictors is negative. What could be the issue?

11- Let’s say you’re given an unfeasible amount of predictors in a predictive modeling task. What are some ways to make the prediction more feasible?

12- Now you have a feasible amount of predictors, but you’re fairly sure that you don’t need all of them. How would you perform feature selection on the dataset?

13- Your linear regression didn’t run and communicates that there are an infinite number of best estimates for the regression coefficients. What could be wrong?

14- You run your regression on different subsets of your data, and find that in each subset, the beta value for a certain variable varies wildly. What could be the issue here?

15- What is the main idea behind ensemble learning? If I had many different models that predicted the same response variable, what might I want to do to incorporate all of the models? Would you expect this to perform better than an individual model or worse?

16- Given that you have wifi data in your office, how would you determine which rooms and areas are underutilized and overutilized?

17- How could you use GPS data from a car to determine the quality of a driver?

18- Given accelerometer, altitude, and fuel usage data from a car, how would you determine the optimum acceleration pattern to drive over hills?

19- Given position data of NBA players in a season’s games, how would you evaluate a basketball player’s defensive ability?

20- How would you quantify the influence of a Twitter user?

21- Given location data of golf balls in games, how would you construct a model that can advise golfers where to aim?

22- You have 100 mathletes and 100 math problems. Each mathlete gets to choose 10 problems to solve. Given data on who got what problem correct, how would you rank the problems in terms of difficulty?

23- You have 5000 people that rank 10 sushis in terms of saltiness. How would you aggregate this data to estimate the true saltiness rank in each sushi?

24-Given data on congressional bills and which congressional representatives co-sponsored the bills, how would you determine which other representatives are most similar to yours in voting behavior? How would you evaluate who is the most liberal? Most republican? Most bipartisan?

25- How would you come up with an algorithm to detect plagiarism in online content?

26- You have data on all purchases of customers at a grocery store. Describe to me how you would program an algorithm that would cluster the customers into groups. How would you determine the appropriate number of clusters to include?

27- Let’s say you’re building the recommended music engine at Spotify to recommend people music based on past listening history. How would you approach this problem?

28- Explain how boosted tree models work in simple language.

29- What sort of data sampling techniques would you use for a low signal temporal classification problem?

30- How would you deal with categorical variables and what considerations would you keep in mind?

31- How would you identify leakage in your machine learning model?

32- How would you apply a machine learning model in a live experiment?

33- What is the difference between sensitivity, precision, and recall? When would you use these over accuracy? Name a few situations.

34- What’s the importance of train, val, test splits and how would you split or create your dataset – how would this impact your model metrics?

35- What are some simple ways to optimise your model and how would you know you’ve reached a stable and performant model?

Statistical Inference Interview Questions

Source:  datasciencehandbook.me

1- In an A/B test, how can you check if assignment to the various buckets was truly random?

2- What might be the benefits of running an A/A test, where you have two buckets who are exposed to the exact same product?

3- What would be the hazards of letting users sneak a peek at the other bucket in an A/B test?

4- What would be some issues if blogs decide to cover one of your experimental groups?

5- How would you conduct an A/B test on an opt-in feature?

6- How would you run an A/B test for many variants, say 20 or more?

7- How would you run an A/B test if the observations are extremely right-skewed?

8- I have two different experiments that both change the sign-up button to my website. I want to test them at the same time. What kinds of things should I keep in mind?

9- What is a p-value? What is the difference between type-1 and type-2 error?

10- You are AirBnB and you want to test the hypothesis that a greater number of photographs increases the chances that a buyer selects the listing. How would you test this hypothesis?

11- How would you design an experiment to determine the impact of latency on user engagement?

12- What is maximum likelihood estimation? Could there be any case where it doesn’t exist?

13- What’s the difference between a MAP, MOM, MLE estimator? In which cases would you want to use each?

14- What is a confidence interval and how do you interpret it?

15- What is unbiasedness as a property of an estimator? Is this always a desirable property when performing inference? What about in data analysis or predictive modeling?

Product Metric Interview Questions

Source:  datasciencehandbook.me

1- What would be good metrics of success for an advertising-driven consumer product? (Buzzfeed, YouTube, Google Search, etc.) A service-driven consumer product? (Uber, Flickr, Venmo, etc.)

2- What would be good metrics of success for a productivity tool? (Evernote, Asana, Google Docs, etc.) A MOOC? (edX, Coursera, Udacity, etc.)

3- What would be good metrics of success for an e-commerce product? (Etsy, Groupon, Birchbox, etc.) A subscription product? (Netflix, Birchbox, Hulu, etc.) Premium subscriptions? (OKCupid, LinkedIn, Spotify, etc.)

4- What would be good metrics of success for a consumer product that relies heavily on engagement and interaction? (Snapchat, Pinterest, Facebook, etc.) A messaging product? (GroupMe, Hangouts, Snapchat, etc.)

5- What would be good metrics of success for a product that offered in-app purchases? (Zynga, Angry Birds, other gaming apps)

6- A certain metric is violating your expectations by going down or up more than you expect. How would you try to identify the cause of the change?

7- Growth for total number of tweets sent has been slow this month. What data would you look at to determine the cause of the problem?

8- You’re a restaurant and are approached by Groupon to run a deal. What data would you ask from them in order to determine whether or not to do the deal?

9- You are tasked with improving the efficiency of a subway system. Where would you start?

10- Say you are working on Facebook News Feed. What would be some metrics that you think are important? How would you make the news each person gets more relevant?

11- How would you measure the impact that sponsored stories on Facebook News Feed have on user engagement? How would you determine the optimum balance between sponsored stories and organic content on a user’s News Feed?

12- You are on the data science team at Uber and you are asked to start thinking about surge pricing. What would be the objectives of such a product and how would you start looking into this?

13- Say that you are Netflix. How would you determine what original series you should invest in and create?

14- What kind of services would find churn (metric that tracks how many customers leave the service) helpful? How would you calculate churn?

15- Let’s say that you are scheduling content for a content provider on television. How would you determine the best times to schedule content?

Programming Questions

Source:  datasciencehandbook.me

1- Write a function to calculate all possible assignment vectors of 2n users, where n users are assigned to group 0 (control), and n users are assigned to group 1 (treatment).

2- Given a list of tweets, determine the top 10 most used hashtags.

3- Program an algorithm to find the best approximate solution to the knapsack problem in a given time.

4- Program an algorithm to find the best approximate solution to the travelling salesman problem in a given time.

5- You have a stream of data coming in of size n, but you don’t know what n is ahead of time. Write an algorithm that will take a random sample of k elements. Can you write one that takes O(k) space?

6- Write an algorithm that can calculate the square root of a number.

7- Given a list of numbers, can you return the outliers?

8- When can parallelism make your algorithms run faster? When could it make your algorithms run slower?

9- What are the different types of joins? What are the differences between them?

10- Why might a join on a subquery be slow? How might you speed it up?

11- Describe the difference between primary keys and foreign keys in a SQL database.

12- Given a COURSES table with columns course_id and course_name, a FACULTY table with columns faculty_id and faculty_name, and a COURSE_FACULTY table with columns faculty_id and course_id, how would you return a list of faculty who teach a course given the name of a course?

13- Given an IMPRESSIONS table with ad_id, click (an indicator that the ad was clicked), and date, write a SQL query that will tell me the click-through rate of each ad by month.

14- Write a query that returns the name of each department and a count of the number of employees in each:
EMPLOYEES containing: Emp_ID (Primary key) and Emp_Name
EMPLOYEE_DEPT containing: Emp_ID (Foreign key) and Dept_ID (Foreign key)
DEPTS containing: Dept_ID (Primary key) and Dept_Name

Probability Questions

1- Bobo the amoeba has a 25%, 25%, and 50% chance of producing 0, 1, or 2 offspring, respectively. Each of Bobo’s descendants also have the same probabilities. What is the probability that Bobo’s lineage dies out?

2- In any 15-minute interval, there is a 20% probability that you will see at least one shooting star. What is the probability that you see at least one shooting star in the period of an hour?

3- How can you generate a random number between 1 – 7 with only a die?

4- How can you get a fair coin toss if someone hands you a coin that is weighted to come up heads more often than tails?

5- You have an 50-50 mixture of two normal distributions with the same standard deviation. How far apart do the means need to be in order for this distribution to be bimodal?

6- Given draws from a normal distribution with known parameters, how can you simulate draws from a uniform distribution?

7- A certain couple tells you that they have two children, at least one of which is a girl. What is the probability that they have two girls?

8- You have a group of couples that decide to have children until they have their first girl, after which they stop having children. What is the expected gender ratio of the children that are born? What is the expected number of children each couple will have?

9- How many ways can you split 12 people into 3 teams of 4?

10- Your hash function assigns each object to a number between 1:10, each with equal probability. With 10 objects, what is the probability of a hash collision? What is the expected number of hash collisions? What is the expected number of hashes that are unused?

11- You call 2 UberX’s and 3 Lyfts. If the time that each takes to reach you is IID, what is the probability that all the Lyfts arrive first? What is the probability that all the UberX’s arrive first?

12- I write a program that should print out all the numbers from 1 to 300, but prints out Fizz instead if the number is divisible by 3, Buzz instead if the number is divisible by 5, and FizzBuzz if the number is divisible by 3 and 5. What is the total number of numbers that are either Fizzed, Buzzed, or FizzBuzzed?

13- On a dating site, users can select 5 out of 24 adjectives to describe themselves. A match is declared between two users if they match on at least 4 adjectives. If Alice and Bob randomly pick adjectives, what is the probability that they form a match?

14- A lazy high school senior types up applications and envelopes to n different colleges, but puts the applications randomly into the envelopes. What is the expected number of applications that went to the right college?

15- Let’s say you have a very tall father. On average, what would you expect the height of his son to be? Taller, equal, or shorter? What if you had a very short father?

16- What’s the expected number of coin flips until you get two heads in a row? What’s the expected number of coin flips until you get two tails in a row?

17- Let’s say we play a game where I keep flipping a coin until I get heads. If the first time I get heads is on the nth coin, then I pay you 2^(n-1) dollars. How much would you pay me to play this game?

18- You have two coins, one of which is fair and comes up heads with a probability 1/2, and the other which is biased and comes up heads with probability 3/4. You randomly pick a coin and flip it twice, and get heads both times. What is the probability that you picked the fair coin?

19- You have a 0.1% chance of picking up a coin with both heads, and a 99.9% chance that you pick up a fair coin. You flip your coin and it comes up heads 10 times. What’s the chance that you picked up the fair coin, given the information that you observed?

Reference: 800 Data Science Questions & Answers doc by

 

Direct download here

Reference: 164 Data Science Interview Questions and Answers by 365 Data Science

Download it here

DataWarehouse Cheat Sheet

What are Differences between Supervised and Unsupervised Learning?

  • Input data: labelled (supervised) vs. unlabeled (unsupervised)
  • Data split: training/validation/test split (supervised) vs. no split (unsupervised)
  • Purpose: prediction (supervised) vs. analysis (unsupervised)
  • Typical techniques: classification and regression (supervised) vs. clustering, dimension reduction, and density estimation (unsupervised)

Python Cheat Sheet

Download it here

Data Sciences Cheat Sheet

Download it here

Panda Cheat Sheet

Download it here

Learn SQL with Practical Exercises

SQL is definitely one of the most fundamental skills needed to be a data scientist.

This is a comprehensive handbook that can help you to learn SQL (Structured Query Language), which could be directly downloaded here

Credit: D Armstrong

Data Visualization: A comprehensive VIP Matplotlib Cheat sheet

Credit: Matplotlib

Download it here

Power BI for Intermediates

Download it here

Credit: Soheil Bakhshi and Bruce Anderson

Python Frameworks for Data Science

Natural Language Processing (NLP) is one of the top areas today.

Some of the applications are:

  • Reading printed text and correcting reading errors
  • Find and replace
  • Correction of spelling mistakes
  • Development of aids
  • Text summarization
  • Language translation
  • and many more.

NLP is a great area if you are planning to work in the area of artificial intelligence.

High Level Look of AI/ML Algorithms

Best Machine Learning Algorithms for Classification: Pros and Cons

Business Analytics in one image

Curated papers, articles, and blogs on data science & machine learning in production from companies like Google, LinkedIn, Uber, Facebook, Twitter, Airbnb, and …

  1. Data Quality
  2. Data Engineering
  3. Data Discovery
  4. Feature Stores
  5. Classification
  6. Regression
  7. Forecasting
  8. Recommendation
  9. Search & Ranking
  10. Embeddings
  11. Natural Language Processing
  12. Sequence Modelling
  13. Computer Vision
  14. Reinforcement Learning
  15. Anomaly Detection
  16. Graph
  17. Optimization
  18. Information Extraction
  19. Weak Supervision
  20. Generation
  21. Audio
  22. Validation and A/B Testing
  23. Model Management
  24. Efficiency
  25. Ethics
  26. Infra
  27. MLOps Platforms
  28. Practices
  29. Team Structure
  30. Fails

How to get a job in data science – a semi-harsh Q/A guide.

HOW DO I GET A JOB IN DATA SCIENCE?

Hey you. Yes you, person asking “how do I get a job in data science/analytics/MLE/AI whatever BS job with data in the title?”. I got news for you. There are two simple rules to getting one of these jobs.

Have experience.

Don’t have no experience.

There are approximately 1000 entry level candidates who think they’re qualified because they did a 24 week bootcamp for every entry level job. I don’t need to be a statistician to tell you your odds of landing one of these aren’t great.

HOW DO I GET EXPERIENCE?

Are you currently employed? If not, get a job. If you are, figure out a way to apply data science in your job, then put it on your resume. Mega bonus points here if you can figure out a way to attribute a dollar value to your contribution. Talk to your supervisor about career aspirations at year-end/mid-year reviews. Maybe you’ll find a way to transfer to a role internally and skip the whole resume ignoring phase. Alternatively, network. Be friends with people who are in the roles you want to be in, maybe they’ll help you find a job at their company.

WHY AM I NOT GETTING INTERVIEWS?

IDK. Maybe you don’t have the required experience. Maybe there are 500+ other people applying for the same position. Maybe your resume stinks. If you’re getting 1/20 response rate, you’re doing great. Quit whining.

IS XYZ DEGREE GOOD FOR DATA SCIENCE?

Does your degree involve some sort of non-remedial math higher than college algebra? Does your degree involve taking any sort of programming classes? If yes, congratulations, your degree will pass most base requirements for data science. Is it the best? Probably not, unless you’re CS or some really heavy math degree where half your classes are taught in Greek letters. Don’t come at me with those art history and underwater basket weaving degrees unless you have multiple years experience doing something else.

SHOULD I DO XYZ BOOTCAMP/MICROMASTERS?

Do you have experience? No? This ain’t gonna help you as much as you think it might. Are you experienced and want to learn more about how data science works? This could be helpful.

SHOULD I DO XYZ MASTER’S IN DATA SCIENCE PROGRAM?

Congratulations, doing a Master’s is usually a good idea and will help make you more competitive as a candidate. Should you shell out 100K for one when you can pay 10K for one online? Probably not. In all likelihood, you’re not gonna get $90K in marginal benefit from the more expensive program. Pick a known school (probably avoid really obscure schools, the name does count for a little) and you’ll be fine. Big bonus here if you can sucker your employer into paying for it.

WILL XYZ CERTIFICATE HELP MY RESUME?

Does your certificate say “AWS” or “AZURE” on it? If not, no.

DO I NEED TO KNOW XYZ MATH TOPIC?

Yes. Stop asking. Probably learn probability, be familiar with linear algebra, and understand what the hell a partial derivative is. Learn how to test hypotheses. Ultimately you need to know what the heck is going on math-wise in your predictions otherwise the company is going to go bankrupt and it will be all your fault.

WHAT IF I’M BAD AT MATH?

Do some studying or something. MIT opencourseware has a bunch of free recorded math classes. If you want to learn some Linear Algebra, Gilbert Strang is your guy.

WHAT PROGRAMMING LANGUAGES SHOULD I LEARN?

STOP ASKING THIS QUESTION. I CAN GOOGLE “HOW TO BE A DATA SCIENTIST” AND EVERY SINGLE GARBAGE TDS ARTICLE WILL TELL YOU SQL AND PYTHON/R. YOU’RE LUCKY YOU DON’T HAVE TO DEAL WITH THE JOY OF SEGMENTATION FAULTS TO RUN A SIMPLE LINEAR REGRESSION.

SHOULD I LEARN PYTHON OR R?

Both. Python is more widely used and tends to be more general purpose than R. R is better at statistics and data analysis, but is a bit more niche. Take your pick to start, but ultimately you’re gonna want to learn both you slacker.

SHOULD I MAKE A PORTFOLIO?

Yes. And don’t put some BS housing price regression, iris classification, or titanic survival project on it either. Next question.

WHAT SHOULD I DO AS A PROJECT?

IDK, what are you interested in? If you say twitter sentiment stock market prediction, go sit in the corner and think about what you just said. Every half brained first year student who can pip install sklearn and do model.fit() has tried unsuccessfully to predict the stock market. The efficient market hypothesis is a thing for a reason. There are literally millions of other free datasets out there, and you have one of the most powerful search engines at your fingertips to go find them. Pick something you’re interested in, find some data, and analyze it.

DO I NEED TO BE GOOD WITH PEOPLE? (courtesy of /u/bikeskata)

Yes! First, when you’re applying, no one wants to work with a weirdo. You should be able to have a basic conversation with people, and they shouldn’t come away from it thinking you’ll follow them home and wear their skin as a suit. Once you get a job, you’ll be interacting with colleagues, and you’ll need them to care about your analysis. Presumably, there are non-technical people making decisions you’ll need to bring in as well. If you can’t explain to a moderately intelligent person why they should care about the thing that took you 3 days (and cost $$$ in cloud computing costs), you probably won’t have your position for long. You don’t need to be the life of the party, but you should be pleasant to be around.

Credit: u/save_the_panda_bears

Top 75 Data Science YouTube channels

1- Alex The Analyst
2- Tina Huang
3- Abhishek Thakur
4- Michael Galarnyk
5- How to Get an Analytics Job
6- Ken Jee
7- Data Professor
8- Nicholas Renotte
9- KNN Clips
10- Ternary Data: Data Engineering Consulting
11- AI Basics with Mike
12- Matt Brattin
13- Chronic Coder
14- Intersnacktional
15- Jenny Tumay
16- Coding Professor
17- DataTalksClub
18- Ken’s Nearest Neighbors Podcast
19- Karolina Sowinska
20- Lander Analytics
21- Lights OnData
22- CodeEmporium
23- Andreas Mueller
24- Nate at StrataScratch
25- Kaggle
26- Data Interview Pro
27- Jordan Harrod
28- Leo Isikdogan
29- Jacob Amaral
30- Bukola
31- AndrewMoMoney
32- Andreas Kretz
33- Python Programmer
34- Machine Learning with Phil
35- Art of Visualization
36- Machine Learning University
 

Data Science and Data Analytics Breaking News – Top Stories

  • Using pytest to test feature logic, transformations, and feature pipelines
    by /u/jpdowlin (Data Science) on May 28, 2022 at 1:00 pm

    submitted by /u/jpdowlin [link] [comments]

  • Data science accountability / project team group
    by /u/hedgehogist (Data Science) on May 28, 2022 at 12:31 pm

    Hey guys, I'm interested in creating a Discord group for people who are interested to learn more about data science (and business analytics) and want to get into the field. We can discuss how to build up our profiles, share resources on different topics and tools, discuss ideas and bounce them off one another, talk about our experiences, and best of all – potentially find people to collaborate with for personal projects! If you're interested, feel free to join the server here: https://discord.gg/2jHFnVvF Note: Feel free to join even if you are an experienced data scientist who believes they can give some valuable advice to those of us starting out in the field. Everyone is welcome! submitted by /u/hedgehogist [link] [comments]

  • Help my mom! (In need of advice)
    by /u/SaturnPubz (Data Science) on May 28, 2022 at 11:27 am

    Hello. Recently my mom decided to study again, something related to data science. I’d like to help her but I’m not sure how since I don’t know much about it. I’d like some advice for a good book and if possible the pros and cons of that specific book. Thanks in advance:) submitted by /u/SaturnPubz [link] [comments]

  • Data Science Freelance Job Market Demand. Saturated with talents or still have lots of opportunities.
    by /u/Independent-Savings1 (Data Science) on May 28, 2022 at 10:41 am

    Hello Data Scientists, If you are doing data science freelance work or have an agency, do you think there is more than enough talent in the market or that the field is still red hot? I've been searching for a niche to jump into the area but could not find one. I am searching on google on "how to find a niche" and "how to research market demand." Initially, I am practicing data collection with scrapy and other libraries in python. And for next, thinking of jumping into creating 'data pipelines' and 'data extraction automation on clouds.' What do you think? Are these types of skills in demand in the market? Please give your thoughts below. I appreciate any help you can provide. submitted by /u/Independent-Savings1 [link] [comments]

  • Popular Airline Passenger Routes (2015)
    by /u/marklit (DataIsBeautiful) on May 28, 2022 at 10:03 am

    submitted by /u/marklit [link] [comments]

  • As a DS, how would you help a local observatory with their data?
    by /u/rainbowenough (Data Science) on May 28, 2022 at 9:22 am

    Hello, I'm not a data scientist but an aspiring data analyst that's looking for internships and I know this is not subreddit for it but hear me out.. I got approached by a small (as only 3 people working there) local space observatory after i have posted in my local country's job seeking subreddit asking me to help them out with their data. My first impression is that the founder didn't know what data analysis is and thought I know all about data in general so i decided to give him some of my insights in a meeting tomorrow, I would like to know more from data scientists on how would you deal with their data.. Their data is in the form of optical images and he wants to use this data in a way to raise awareness and make new generation enjoy space and take careers and hobbies in it. That's the main reason i want to help this observatory out. What do you think the observatory needs? I suggested him to hire a team of DS to help modeling his data. But i don't think he yet understands the scale of this program, so im requesting your help to help him out with a bit of suggestions. PS: im not asking you to solve the whole problem but rather if you were a business founder that wants to benefit your company with your data, how would you proceed? and how hiring a DS would change their data? ​ he said : " I have been looking into big data, data analysis, and machine learning, to see what are the possibilities to come up with a program that can do all this more efficiently. I realize that this would take time for the program to achieve the same accuracy as a human, however, Google did not start out with the intelligent algorithm it is today! :-)" submitted by /u/rainbowenough [link] [comments]

  • [OC] Liverpool and Real Madrid's paths through the knock out stages to the Champions League final
    by /u/sdbernard (DataIsBeautiful) on May 28, 2022 at 9:05 am

    submitted by /u/sdbernard [link] [comments]

  • Regression Models For Data Science In R - Pyoflife
    by /u/Ashwathama496 (DataIsBeautiful) on May 28, 2022 at 6:44 am

    submitted by /u/Ashwathama496 [link] [comments]

  • I need advice on building GitHub profile to build my personal brand on the job market
    by /u/jelkyanna (Data Science) on May 28, 2022 at 3:44 am

    I will graduate from college with a bachelor degree in Business Analytics this year and I decided to create a GitHub profile to showcase my school projects so that my employers can take a look. I never used GitHub before so I may sound like a rookie here, I am wondering if it is acceptable that I uploaded the raw R and Python codes to GitHub then I attached each code with a written PDF report and in some cases, a PowerPoint file? Furthermore, sometimes I built optimization models in Excel so I uploaded Excel files to GitHub too, but I make sure that for every repository I made, I will write the description of my project as well as attach screenshot of Excel and PowerPoint slides and any files that cannot be previewed directly on GitHub in the ReadMe section. I Was wondering if this is a good way to use GitHub? submitted by /u/jelkyanna [link] [comments]

  • Second programming language for Data Scientist / Data Engineer
    by /u/RP_m_13 (Data Science) on May 28, 2022 at 2:44 am

    What do you think are the most helpful second programming language to learn as Data Scientist or Data Engineer? I already know Python, SQL submitted by /u/RP_m_13 [link] [comments]

  • [OC] Monkeypox Epidemic Simulation for NCR - Philippines
    by /u/m00m1lk (DataIsBeautiful) on May 28, 2022 at 1:59 am

    submitted by /u/m00m1lk [link] [comments]

  • [OC] Articles from each Wikipedia (Source: Wikipedia, tool: Google Spreadsheets)
    by /u/Gagamer_39 (DataIsBeautiful) on May 28, 2022 at 1:48 am

    submitted by /u/Gagamer_39 [link] [comments]

  • Having trouble with CERNER data
    by /u/madmax766 (Data Science) on May 27, 2022 at 11:09 pm

    I am not sure if this is the right sub for this. I am not a data scientist, but I am currently working with one on a research project. We are having trouble using data from the CERNER EMR because it often will not include labs, vital signs, etc when we try to sort and organize large batches of data. Does anyone on here have any experience working with this data and have any tips? submitted by /u/madmax766 [link] [comments]

  • Career Swap: How do I break the glass ceiling?
    by /u/kiran_ms (Data Science) on May 27, 2022 at 11:07 pm

    I have a Bachelors in Mechanical Engineering, currently pursuing a Master's in Data-Driven Engineering Mechanics. My specialization required me to take courses like Machine Learning, Neural Networks, Big Data Analytics, Stochastic Analysis, and Exploratory Data Analysis through which I developed an intense liking for DS and have decided to make a career swap. I acquired a decent set of skills - TensorFlow, Pandas, SciPy, Scikit-Learn, Seaborn, Plotly, SQL. Did some cool projects related to Time Series Analysis, Unsupervised Learning, and CNN's, been practicing Leetcode just in case. I wanted a summer opportunity as a data analyst/scientist or biz analyst intern, but out of the 200+ places I applied, I only got rejection emails (not even an interview/coding round). I'm not exactly sure where I'm going wrong with my application process and kinda worried when it comes to securing full-time offers. Also, I have no previous DS/CS related work experience, it's all Mechanical.I'm concerned if a lack of internship experience would prove detrimental to the full-time application process. What could I do this summer (Leetcode/Kaggle/projects/study) so I could score a full-time offer by the end of this year? Also, if any of ya'll know of any internship openings for this summer, do let me know! (I'm based in Manhattan, NY) TIA! submitted by /u/kiran_ms [link] [comments]

  • Feature Types for ML - a Programmer's Perspective
    by /u/jpdowlin (Data Science) on May 27, 2022 at 9:53 pm

    submitted by /u/jpdowlin [link] [comments]

  • If I wanted to make a CNN with a float as output, how would I go about achieving that?
    by /u/jalex54202 (Data Science) on May 27, 2022 at 9:32 pm

    I'm attempting to find data about an image that results in a numerical output (for example, # of people in an image or # kg a person is given a full body image) Most convolutional neural network guides I see end up using keras and ImageDataGenerator.flow_from_directory, with the directory having multiple folders/category. Since my output is not really categorical I can't exactly do that, so I was wondering how I should set up my model in code. I'm completely new when it comes to convolutional neural networks, so I'd really appreciate it if you can give a pretty specific solution. submitted by /u/jalex54202 [link] [comments]

  • [OC] Tracking visually confirmed equipment losses and net changes of the Russian-Ukraine war [Updated!]
    by /u/Zahlii (DataIsBeautiful) on May 27, 2022 at 8:14 pm

    submitted by /u/Zahlii [link] [comments]

  • Best Data Lineage tool for enterprise
    by /u/Dquan97 (Data Science) on May 27, 2022 at 7:31 pm

    I've been using IBM Information Governance Catalog to create lineage across hundreds of assets and didn't like the experience of it. Anyone using Ataccama, Alation, or Atlan tools to create lineage? Also, I wonder if I can use 3rd party tools to interface with datastage jobs. submitted by /u/Dquan97 [link] [comments]

  • [OC] This list shows the deadliest school shootings in the USA. (Instagram: @geo.ranking) (Source: https://en.wikipedia.org/wiki/List_of_school_shootings_in_the_United_States_by_death_toll#cite_ref-:1_11-0)
    by /u/geo_ranking (DataIsBeautiful) on May 27, 2022 at 7:27 pm

    submitted by /u/geo_ranking [link] [comments]

  • Business case
    by /u/nobodycaresssss (Data Science) on May 27, 2022 at 7:08 pm

    Hi guys, I would need some help, I am struggling with a task and I can’t really find a solution. I have 2 tables table A: contains transactions from a mobile app, including a column with the URL of the app logo table B : contains the name of the mobile app and the URL of the logo in another column I should find a way to enrich the first table with the names of the mobile apps. The problem is that the URLs aren’t the same (even though the image is), so a join doesn’t lead me anywhere. Any ideas? submitted by /u/nobodycaresssss [link] [comments]

Big Data and Data Analytics 101 – Top 50 AWS Certified Data Analytics – Specialty Questions and Answers Dumps

AWS Certified Security – Specialty Questions and Answers Dumps

In this blog, we talk about big data and data analytics; we also give you the latest Top 50 AWS Certified Data Analytics – Specialty Questions and Answers Dumps.

The AWS Certified Data Analytics – Specialty (DAS-C01) examination is intended for individuals who perform in a data analytics-focused role. This exam validates an examinee’s comprehensive understanding of using AWS services to design, build, secure, and maintain analytics solutions that provide insight from data.

The AWS Certified Data Analytics – Specialty (DAS-C01) covers the following domains:

Domain 1: Collection 18%

Domain 2: Storage and Data Management 22%

Domain 3: Processing 24%

Domain 4: Analysis and Visualization 18%

Domain 5: Security 18%


Below are the Top 20 AWS Certified Data Analytics – Specialty Questions and Answers Dumps and References

Top 100 Data Science and Data Analytics Interview Questions and Answers

 

Question 1: What combination of services do you need for the following requirements: accelerate petabyte-scale data transfers, load streaming data, and the ability to create scalable, private connections. Select the correct answer order.

A) Snowball, Kinesis Firehose, Direct Connect

B) Data Migration Services, Kinesis Firehose, Direct Connect

C) Snowball, Data Migration Services, Direct Connect


D) Snowball, Direct Connection, Kinesis Firehose


ANSWER1:

A

Notes/Hint1:

AWS has many options to help get data into the cloud, including secure devices like AWS Import/Export Snowball to accelerate petabyte-scale data transfers, Amazon Kinesis Firehose to load streaming data, and scalable private connections through AWS Direct Connect.

Reference1: Big Data Analytics Options 

 

ANSWER2:

C

Notes/Hint2:

Reference2: Relationalize PySpark


 

Question 3: There is a five-day car rally race across Europe. The race coordinators are using a Kinesis stream and IoT sensors to monitor the movement of the cars. Each car has a sensor and data is getting back to the stream with the default stream settings. On the last day of the rally, data is sent to S3. When you go to interpret the data in S3, there is only data for the last day and nothing for the first 4 days. Which of the following is the most probable cause of this?

A) You did not have versioning enabled and would need to create individual buckets to prevent the data from being overwritten.

B) Data records are only accessible for a default of 24 hours from the time they are added to a stream.

C) One of the sensors failed, so there was no data to record.

D) You needed to use EMR to send the data to S3; Kinesis Streams are only compatible with DynamoDB.

ANSWER3:

B

Notes/Hint3: 

Streams support changes to the data record retention period of your stream. An Amazon Kinesis stream is an ordered sequence of data records, meant to be written to and read from in real-time. Data records are therefore stored in shards in your stream temporarily. The period from when a record is added to when it is no longer accessible is called the retention period. An Amazon Kinesis stream stores records for 24 hours by default, up to 168 hours.

Reference3: Kinesis Extended Reading

 

 

Question 4:  A publisher website captures user activity and sends clickstream data to Amazon Kinesis Data Streams. The publisher wants to design a cost-effective solution to process the data to create a timeline of user activity within a session. The solution must be able to scale depending on the number of active sessions.
Which solution meets these requirements?

A) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on Amazon EC2 instances in an EC2 Auto Scaling group.

B) Include a variable in the clickstream to maintain a counter for each user action during their session. Use the action type as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the
counter. Deploy the consumer application on AWS Lambda.

C) Include a session identifier in the clickstream data from the publisher website and use as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Deploy the consumer application on Amazon EC2 instances in an
EC2 Auto Scaling group. Use an AWS Lambda function to reshard the stream based upon Amazon CloudWatch alarms.

D) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on AWS Lambda.

ANSWER4:

C

Notes/Hint4: 

Partitioning by the session ID will allow a single processor to process all the actions for a user session in order. An AWS Lambda function can call the UpdateShardCount API action to change the number of shards in the stream. The KCL will automatically manage the number of processors to match the number of shards. Amazon EC2 Auto Scaling will assure the correct number of instances are running to meet the processing load.

Reference4: UpdateShardCount API

 

Question 5: Your company has two batch processing applications that consume financial data about the day’s stock transactions. Each transaction needs to be stored durably and guarantee that a record of each application is delivered so the audit and billing batch processing applications can process the data. However, the two applications run separately and several hours apart and need access to the same transaction information. After reviewing the transaction information for the day, the information no longer needs to be stored. What is the best way to architect this application?

A) Use SQS for storing the transaction messages; when the billing batch process performs first and consumes the message, write the code in a way that does not remove the message after consumed, so it is available for the audit application several hours later. The audit application can consume the SQS message and remove it from the queue when completed.

B)  Use Kinesis to store the transaction information. The billing application will consume data from the stream and the audit application can consume the same data several hours later.

C) Store the transaction information in a DynamoDB table. The billing application can read the rows while the audit application will read the rows then remove the data.

D) Use SQS for storing the transaction messages. When the billing batch process consumes each message, have the application create an identical message and place it in a different SQS for the audit application to use several hours later.

ANSWER5:

B

Notes/Hint5: 

Kinesis appears to be the best solution that allows multiple consumers to easily interact with the records. SQS would make this more difficult because the data does not need to persist after a full day.

Reference5: Amazon Kinesis

Get mobile friendly version of the quiz @ the App Store


Question 6: A company is currently using Amazon DynamoDB as the database for a user support application. The company is developing a new version of the application that will store a PDF file for each support case ranging in size from 1–10 MB. The file should be retrievable whenever the case is accessed in the application.
How can the company store the file in the MOST cost-effective manner?

A) Store the file in Amazon DocumentDB and the document ID as an attribute in the DynamoDB table.

B) Store the file in Amazon S3 and the object key as an attribute in the DynamoDB table.

C) Split the file into smaller parts and store the parts as multiple items in a separate DynamoDB table.

D) Store the file as an attribute in the DynamoDB table using Base64 encoding.

ANSWER6:

B

Notes/Hint6: 

Use Amazon S3 to store large attribute values that cannot fit in an Amazon DynamoDB item. Store each file as an object in Amazon S3 and then store the object path in the DynamoDB item.


Reference6: S3 Storage Cost – DynamoDB Storage Cost

 

Question 7: Your client has a web app that emits multiple events to Amazon Kinesis Streams for reporting purposes. Critical events need to be immediately captured before processing can continue, but informational events do not need to delay processing. What solution should your client use to record these types of events without unnecessarily slowing the application?

A) Log all events using the Kinesis Producer Library.

B) Log critical events using the Kinesis Producer Library, and log informational events using the PutRecords API method.

C) Log critical events using the PutRecords API method, and log informational events using the Kinesis Producer Library.

D) Log all events using the PutRecords API method.

ANSWER7:

C

Notes/Hint7: 

The PutRecords API can be used in code to be synchronous; it will wait for the API request to complete before the application continues. This means you can use it when you need to wait for the critical events to finish logging before continuing. The Kinesis Producer Library is asynchronous and can send many messages without needing to slow down your application. This makes the KPL ideal for the sending of many non-critical alerts asynchronously.

Reference7: PutRecords API

 

Question 8: You work for a start-up that tracks commercial delivery trucks via GPS. You receive coordinates that are transmitted from each delivery truck once every 6 seconds. You need to process these coordinates in near real-time from multiple sources and load them into Elasticsearch without significant technical overhead to maintain. Which tool should you use to digest the data?

A) Amazon SQS

B) Amazon EMR

C) AWS Data Pipeline

D) Amazon Kinesis Firehose

ANSWER8:

D

Notes/Hint8: 

Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards.

Reference8: Amazon Kinesis Firehose

 

Question 9: A company needs to implement a near-real-time fraud prevention feature for its ecommerce site. User and order details need to be delivered to an Amazon SageMaker endpoint to flag suspected fraud. The amount of input data needed for the inference could be as much as 1.5 MB.
Which solution meets the requirements with the LOWEST overall latency?

A) Create an Amazon Managed Streaming for Kafka cluster and ingest the data for each order into a topic. Use a Kafka consumer running on Amazon EC2 instances to read these messages and invoke the Amazon SageMaker endpoint.

B) Create an Amazon Kinesis Data Streams stream and ingest the data for each order into the stream. Create an AWS Lambda function to read these messages and invoke the Amazon SageMaker endpoint.

C) Create an Amazon Kinesis Data Firehose delivery stream and ingest the data for each order into the stream. Configure Kinesis Data Firehose to deliver the data to an Amazon S3 bucket. Trigger an AWS Lambda function with an S3 event notification to read the data and invoke the Amazon SageMaker endpoint.

D) Create an Amazon SNS topic and publish the data for each order to the topic. Subscribe the Amazon SageMaker endpoint to the SNS topic.


ANSWER9:

A

Notes/Hint9: 

An Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster can deliver the messages with very low latency, and its maximum message size is configurable, so it can handle the 1.5 MB payload. (Kinesis Data Streams, by contrast, limits each record to 1 MB, which is why option B does not fit.)

Reference9: Amazon Managed Streaming for Kafka cluster
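A minimal producer sketch using the kafka-python client (the client library, broker endpoint, and topic name are assumptions for illustration); the matching broker/topic-level max.message.bytes override is assumed to be set on the MSK side.

```python
# Minimal sketch (kafka-python): a producer sized for ~1.5 MB order payloads.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9092"],  # placeholder
    max_request_size=2 * 1024 * 1024,  # raise the client-side limit above the 1 MB default
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_order(order: dict) -> None:
    producer.send("orders", value=order)  # "orders" topic is a placeholder
    producer.flush()  # block until the record is acknowledged
```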

 

Question 10: You need to filter and transform incoming messages coming from a smart sensor you have connected with AWS. Once messages are received, you need to store them as time series data in DynamoDB. Which AWS service can you use?

A) IoT Device Shadow Service

B) Redshift

C) Kinesis

D) IoT Rules Engine

ANSWER10:

D

Notes/Hint10: 

The IoT rules engine will allow you to send sensor data over to AWS services like DynamoDB

Reference10: The IoT rules engine
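A minimal sketch (boto3) of what such a rule could look like; the topic filter, table name, and IAM role ARN are placeholders. The rules engine SQL does the filtering/transformation, and the dynamoDBv2 action writes the resulting message to DynamoDB.

```python
# Minimal sketch (boto3): an IoT rule that filters sensor messages and writes them to DynamoDB.
import boto3

iot = boto3.client("iot")

iot.create_topic_rule(
    ruleName="sensor_to_dynamodb",
    topicRulePayload={
        # Keep only the fields needed and add a server-side timestamp.
        "sql": "SELECT deviceId, temperature, timestamp() AS ts "
               "FROM 'sensors/+/telemetry' WHERE temperature > 0",
        "actions": [
            {
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-dynamodb-role",  # placeholder
                    "putItem": {"tableName": "SensorTimeSeries"},  # placeholder
                }
            }
        ],
    },
)
```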

Get mobile friendly version of the quiz @ the App Store

Question 11: A media company is migrating its on-premises legacy Hadoop cluster with its associated data processing scripts and workflow to an Amazon EMR environment running the latest Hadoop release. The developers want to reuse the Java code that was written for data processing jobs for the on-premises cluster.
Which approach meets these requirements?

A) Deploy the existing Oracle Java Archive as a custom bootstrap action and run the job on the EMR cluster.

B) Compile the Java program for the desired Hadoop version and run it using a CUSTOM_JAR step on the EMR cluster.

C) Submit the Java program as an Apache Hive or Apache Spark step for the EMR cluster.

D) Use SSH to connect the master node of the EMR cluster and submit the Java program using the AWS CLI.


ANSWER11:

B

Notes/Hint11: 

A CUSTOM_JAR step can be configured to download a JAR file from an Amazon S3 bucket and execute it. Because the on-premises and EMR Hadoop versions differ, the Java application has to be recompiled for the target Hadoop version.

Reference11:  Automating analytics workflows on EMR
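A minimal sketch (boto3) of submitting the recompiled JAR as a step; the cluster ID, S3 locations, and main class are placeholders.

```python
# Minimal sketch (boto3): run the recompiled JAR from S3 as a CUSTOM_JAR-style step.
import boto3

emr = boto3.client("emr")

emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
    Steps=[
        {
            "Name": "legacy-data-processing-job",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "s3://my-artifacts/jobs/data-processing.jar",   # placeholder
                "MainClass": "com.example.ProcessingJob",              # placeholder
                "Args": ["s3://my-input/", "s3://my-output/"],         # placeholder
            },
        }
    ],
)
```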

Question 12: You currently have databases running on-site and in another data center off-site. What service allows you to consolidate to one database in Amazon?

A) AWS Kinesis

B) AWS Database Migration Service

C) AWS Data Pipeline

D) AWS RDS Aurora

ANSWER12:

B

Notes/Hint12: 

AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database.

Reference12: DMS

 

 

Question 13:  An online retail company wants to perform analytics on data in large Amazon S3 objects using Amazon EMR. An Apache Spark job repeatedly queries the same data to populate an analytics dashboard. The analytics team wants to minimize the time to load the data and create the dashboard.
Which approaches could improve the performance? (Select TWO.)
A) Copy the source data into Amazon Redshift and rewrite the Apache Spark code to create analytical reports by querying Amazon Redshift.

B) Copy the source data from Amazon S3 into Hadoop Distributed File System (HDFS) using s3distcp.

C) Load the data into Spark DataFrames.

D) Stream the data into Amazon Kinesis and use the Kinesis Connector Library (KCL) in multiple Spark jobs to perform analytical jobs.

E) Use Amazon S3 Select to retrieve the data necessary for the dashboards from the S3 objects.

ANSWER13:

C and E

Notes/Hint13: 

One of the speed advantages of Apache Spark comes from loading data into immutable DataFrames, which can be accessed repeatedly in memory. Spark DataFrames organize distributed data into columns, which makes summaries and aggregates much quicker to calculate. Also, instead of loading an entire large Amazon S3 object, load only what is needed using Amazon S3 Select. Keeping the data in Amazon S3 avoids loading the large dataset into HDFS.

Reference13: Spark DataFrames 
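A minimal sketch (boto3) of the S3 Select part: pull only the columns and rows the dashboard needs instead of downloading the whole object. The bucket, key, and column names are placeholders.

```python
# Minimal sketch (boto3): S3 Select retrieves a filtered projection of a CSV object.
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="retail-analytics",                 # placeholder
    Key="orders/2022/orders.csv",              # placeholder
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM S3Object s WHERE s.region = 'EU'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```

The reduced result set can then be loaded into a Spark DataFrame and cached so the dashboard job queries it repeatedly in memory.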

 

Question 14: You have been hired as a consultant to provide a solution to integrate a client’s on-premises data center to AWS. The customer requires a 300 Mbps dedicated, private connection to their VPC. Which AWS tool do you need?

A) VPC peering

B) Data Pipeline

C) Direct Connect

D) EMR

ANSWER14:

C

Notes/Hint14: 

Direct Connect will provide a dedicated and private connection to an AWS VPC.

Reference14: Direct Connect

 

Question 15: Your organization has a variety of different services deployed on EC2 and needs to efficiently send application logs over to a central system for processing and analysis. They’ve determined it is best to use a managed AWS service to transfer their data from the EC2 instances into Amazon S3 and they’ve decided to use a solution that will do what?

A) Installs the AWS Direct Connect client on all EC2 instances and uses it to stream the data directly to S3.

B) Leverages the Kinesis Agent to send data to Kinesis Data Streams and output that data in S3.

C) Ingests the data directly from S3 by configuring regular Amazon Snowball transactions.

D) Leverages the Kinesis Agent to send data to Kinesis Firehose and output that data in S3.

ANSWER15:

D

Notes/Hint15: 

Kinesis Firehose is a managed solution, and log files can be sent from EC2 to Firehose to S3 using the Kinesis agent.

Reference15: Kinesis Firehose

 

Question 16: A data engineer needs to create a dashboard to display social media trends during the last hour of a large company event. The dashboard needs to display the associated metrics with a latency of less than 1 minute.
Which solution meets these requirements?

A) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Use Kinesis Data Analytics for SQL Applications to perform a sliding window analysis to compute the metrics and output the results to a Kinesis Data Streams data stream. Configure an AWS Lambda function to save the stream data to an Amazon DynamoDB table. Deploy a real-time dashboard hosted in an Amazon S3 bucket to read and display the metrics data stored in the DynamoDB table.

B) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Elasticsearch Service cluster with a buffer interval of 0 seconds. Use Kibana to perform the analysis and display the results.

C) Publish the raw social media data to an Amazon Kinesis Data Streams data stream. Configure an AWS Lambda function to compute the metrics on the stream data and save the results in an Amazon S3 bucket. Configure a dashboard in Amazon QuickSight to query the data using Amazon Athena and display the results.

D) Publish the raw social media data to an Amazon SNS topic. Subscribe an Amazon SQS queue to the topic. Configure Amazon EC2 instances as workers to poll the queue, compute the metrics, and save the results to an Amazon Aurora MySQL database. Configure a dashboard in Amazon QuickSight to query the data in Aurora and display the results.


ANSWER16:

A

Notes/Hint16: 

Amazon Kinesis Data Analytics can query data in a Kinesis Data Firehose delivery stream in near-real time using SQL. A sliding window analysis is appropriate for determining trends in the stream. Amazon S3 can host a static webpage that includes JavaScript that reads the data in Amazon DynamoDB and refreshes the dashboard.

Reference16: Amazon Kinesis Data Analytics can query data in a Kinesis Data Firehose delivery stream in near-real time using SQL
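For the Lambda piece of option A, a minimal handler sketch is shown below; the table name and the metric field names are assumptions about what the Kinesis Data Analytics application emits.

```python
# Minimal sketch: Lambda triggered by the Kinesis Data Streams output of the
# Kinesis Data Analytics application, writing each metric record to DynamoDB.
import base64
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("SocialMediaMetrics")  # placeholder

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers record payloads base64-encoded inside the Lambda event.
        metric = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "MetricName": metric["metric_name"],            # assumed field names
                "WindowEnd": metric["window_end"],
                "Value": Decimal(str(metric["value"])),         # DynamoDB prefers Decimal
            }
        )
```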

 

Question 17: A real estate company is receiving new property listing data from its agents through .csv files every day and storing these files in Amazon S3. The data analytics team created an Amazon QuickSight visualization report that uses a dataset imported from the S3 files. The data analytics team wants the visualization report to reflect the current data up to the previous day. How can a data analyst meet these requirements?

A) Schedule an AWS Lambda function to drop and re-create the dataset daily.

B) Configure the visualization to query the data in Amazon S3 directly without loading the data into SPICE.

C) Schedule the dataset to refresh daily.

D) Close and open the Amazon QuickSight visualization.

ANSWER17:

C

Notes/Hint17:

Datasets created using Amazon S3 as the data source are automatically imported into SPICE. The Amazon QuickSight console allows for the refresh of SPICE data on a schedule.

Reference17: Amazon QuickSight and SPICE
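The console schedule is the simplest path, but for completeness the same SPICE refresh can also be triggered programmatically, for example from a daily-scheduled Lambda; a minimal sketch is below, with the account ID and dataset ID as placeholders.

```python
# Minimal sketch (boto3): kick off a SPICE refresh (ingestion) for the dataset.
import time

import boto3

quicksight = boto3.client("quicksight")

quicksight.create_ingestion(
    AwsAccountId="123456789012",                       # placeholder
    DataSetId="property-listings-dataset-id",          # placeholder
    IngestionId=f"daily-refresh-{int(time.time())}",   # must be unique per run
)
```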

 

Question 18: You need to migrate data to AWS. It is estimated that the data transfer will take over a month via the current AWS Direct Connect connection your company has set up. Which AWS tool should you use?

A) Establish additional Direct Connect connections.

B) Use Data Pipeline to migrate the data in bulk to S3.

C) Use Kinesis Firehose to stream all new and existing data into S3.

D) Snowball

ANSWER18:

D

Notes/Hint18:

As a general rule, if it takes more than one week to upload your data to AWS using the spare capacity of your existing internet connection, you should consider using Snowball. For example, if you have a 100 Mbps connection that you can dedicate solely to transferring your data and you need to move 100 TB, the transfer takes more than 100 days over that connection. You can make the same transfer with multiple Snowballs in about a week.

Reference18: Snowball
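The back-of-the-envelope math behind that rule of thumb, as a tiny sketch:

```python
# How long does 100 TB take over a dedicated 100 Mbps link?
data_tb = 100
link_mbps = 100

data_bits = data_tb * 1e12 * 8            # terabytes -> bits (decimal units)
seconds = data_bits / (link_mbps * 1e6)   # bits / (bits per second)
days = seconds / 86_400

print(f"{days:.0f} days")  # ~93 days at 100% utilization; with overhead, well over 100 days
```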

 

Question 19: You currently have an on-premises Oracle database and have decided to leverage AWS and use Aurora. You need to do this as quickly as possible. How do you achieve this?

A) It is not possible to migrate an on-premises database to AWS at this time.

B) Use AWS Data Pipeline to create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switchover of your production environment to the new database once the target database is caught up with the source database.

C) Use AWS Database Migration Services and create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switch-over of your production environment to the new database once the target database is caught up with the source database.

D) Use AWS Glue to crawl the on-premises database schemas and then migrate them into AWS with Data Pipeline jobs.


ANSWER19:

C

Notes/Hint19: 

DMS can efficiently support this sort of migration using the steps outlined. While AWS Glue can help you crawl schemas and store metadata on them inside of Glue for later use, it isn’t the best tool for actually transitioning a database over to AWS itself. Similarly, while Data Pipeline is great for ETL and ELT jobs, it isn’t the best option to migrate a database over to AWS.

Reference19: DMS – https://aws.amazon.com/dms/faqs/
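A minimal sketch (boto3) of the replication task described in the note: a full load plus ongoing change data capture from the Oracle source into Aurora. All ARNs are placeholders, and the source/target endpoints and replication instance are assumed to exist already.

```python
# Minimal sketch (boto3): create a DMS task for full load + CDC.
import json

import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",       # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",       # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```

Once the target catches up with the source, production is switched over to the new Aurora database.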

 

Question 20: A financial company uses Amazon EMR for its analytics workloads. During the company’s annual security audit, the security team determined that none of the EMR clusters’ root volumes are encrypted. The security team recommends the company encrypt its EMR clusters’ root volume as soon as possible.
Which solution would meet these requirements?

A) Enable at-rest encryption for EMR File System (EMRFS) data in Amazon S3 in a security configuration. Re-create the cluster using the newly created security configuration.

B) Specify local disk encryption in a security configuration. Re-create the cluster using the newly created security configuration.

C) Detach the Amazon EBS volumes from the master node. Encrypt the EBS volume and attach it back to the master node.

D) Re-create the EMR cluster with LZO encryption enabled on all volumes.

ANSWER20:

B

Notes/Hint20: 

Local disk encryption can be enabled as part of a security configuration to encrypt root and storage volumes.

Reference20: EMR Cluster Local disk encryption
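A minimal sketch (boto3) of a security configuration with local disk (and EBS root volume) encryption; the configuration name and KMS key ARN are placeholders.

```python
# Minimal sketch (boto3): create an EMR security configuration enabling local disk encryption.
import json

import boto3

emr = boto3.client("emr")

emr.create_security_configuration(
    Name="local-disk-encryption",  # placeholder
    SecurityConfiguration=json.dumps({
        "EncryptionConfiguration": {
            "EnableInTransitEncryption": False,
            "EnableAtRestEncryption": True,
            "AtRestEncryptionConfiguration": {
                "LocalDiskEncryptionConfiguration": {
                    "EncryptionKeyProviderType": "AwsKms",
                    "AwsKmsKey": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",  # placeholder
                    "EnableEbsEncryption": True,
                }
            },
        }
    }),
)
# The cluster is then re-created with this security configuration attached.
```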

Question 21: A company has a clickstream analytics solution using Amazon Elasticsearch Service. The solution ingests 2 TB of data from Amazon Kinesis Data Firehose and stores the latest data collected within 24 hours in an Amazon ES cluster. The cluster is running on a single index that has 12 data nodes and 3 dedicated master nodes. The cluster is configured with 3,000 shards and each node has 3 TB of EBS storage attached. The Data Analyst noticed that the query performance of Elasticsearch is sluggish, and some intermittent errors are produced by the Kinesis Data Firehose when it tries to write to the index. Upon further investigation, there were occasional JVMMemoryPressure errors found in Amazon ES logs.

What should be done to improve the performance of the Amazon Elasticsearch Service cluster?

A) Improve the cluster performance by increasing the number of master nodes of Amazon Elasticsearch.
 
B) Improve the cluster performance by increasing the number of shards of the Amazon Elasticsearch index.
       
C) Improve the cluster performance by decreasing the number of data nodes of Amazon Elasticsearch.
 
D) Improve the cluster performance by decreasing the number of shards of the Amazon Elasticsearch index.
 
ANSWER21:
D
 
Notes/Hint21:
Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. With Amazon ES, you get direct access to the Elasticsearch APIs; existing code and applications work seamlessly with the service.
 
Each Elasticsearch index is split into some number of shards. You should decide the shard count before indexing your first document. The overarching goal of choosing a number of shards is to distribute an index evenly across all data nodes in the cluster. However, these shards shouldn’t be too large or too numerous.
 
A good rule of thumb is to try to keep a shard size between 10 – 50 GiB. Large shards can make it difficult for Elasticsearch to recover from failure, but because each shard uses some amount of CPU and memory, having too many small shards can cause performance issues and out of memory errors. In other words, shards should be small enough that the underlying Amazon ES instance can handle them, but not so small that they place needless strain on the hardware. Therefore the correct answer is: Improve the cluster performance by decreasing the number of shards of the Amazon Elasticsearch index.
 
Reference21: Elasticsearch
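To make the sizing rule of thumb concrete, here is a small back-of-the-envelope sketch; the ~2 TB of data comes from the question, while the 30 GiB target shard size and single replica are assumptions for illustration.

```python
# Rough shard-count estimate from the 10-50 GiB rule of thumb.
index_size_gib = 2 * 1024        # ~2 TB of data retained in the index
target_shard_gib = 30            # aim for shards of roughly 10-50 GiB (assumed midpoint)
replicas = 1                     # assumed one replica per primary shard

primary_shards = -(-index_size_gib // target_shard_gib)  # ceiling division
total_shards = primary_shards * (1 + replicas)

print(primary_shards, total_shards)  # ~69 primaries, ~138 shards in total: far fewer than 3,000
```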
 

1- Djamga Data Sciences Big Data – Data Analytics Youtube Playlist

2- Prepare for Your AWS Certification Exam

3- LinuxAcademy

Big Data – Data Analytics Jobs:

 

Big Data – Data Analytics – Data Sciences Latest News:

DATA ANALYTICS Q&A:

 
 


Clever Questions, Answers, Resources about:

  • Data Sciences
  • Big Data
  • Data Analytics
  • Databases
  • Data Streams
  • Large DataSets

What Is a Data Scientist?

Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician. – Josh Wills

Data scientists apply sophisticated quantitative and computer science skills to both structure and analyze massive stores or continuous streams of unstructured data, with the intent to derive insights and prescribe action. – Burtch Works Data Science Salary Survey, May 2018

More than anything, what data scientists do is make discoveries while swimming in data… In a competitive landscape where challenges keep changing and data never stop flowing, data scientists help decision makers shift from ad hoc analysis to an ongoing conversation with data. – Data Scientist: The Sexiest Job of the 21st Century, Harvard Business Review

Do All Data Scientists Hold Graduate Degrees?

Data scientists are highly educated. With exceedingly rare exception, every data scientist holds at least an undergraduate degree. 91% of data scientists in 2018 held advanced degrees. The remaining 9% all held undergraduate degrees. Furthermore,

  • 25% of data scientists hold a degree in statistics or mathematics,
  • 20% have a computer science degree,
  • an additional 20% hold a degree in the natural sciences, and
  • 18% hold an engineering degree.

The remaining 17% of surveyed data scientists held degrees in business, social science, or economics.

How Are Data Scientists Different From Data Analysts?

Broadly speaking, the roles differ in scope: data analysts build reports with narrow, well-defined KPIs, while data scientists often work on broader business problems without clear solutions. Data scientists live on the edge of the known and unknown.

We’ll leave you with a concrete example: A data analyst cares about profit margins. A data scientist at the same company cares about market share.

How Is Data Science Used in Medicine?

Data science in healthcare best translates to biostatistics. It can be quite different from data science in other industries as it usually focuses on small samples with several confounding variables.

How Is Data Science Used in Manufacturing?

Data science in manufacturing is vast; it includes everything from supply chain optimization to the assembly line.

What are data scientists paid?

Most people are attracted to data science for the salary, and it's true that data scientists garner high salaries compared to their peers. There is data to support this: the May 2018 edition of the Burtch Works Data Science Salary Survey reports annual salary statistics.

Note the above numbers do not reflect total compensation which often includes standard benefits and may include company ownership at high levels.

How will data science evolve in the next 5 years?

Will AI replace data scientists?

What is the workday like for a data scientist?

It’s common for data scientists across the US to work 40 hours weekly. While company culture does dictate different levels of work life balance, it’s rare to see data scientists who work more than they want. That’s the virtue of being an expensive resource in a competitive job market.

How do I become a Data Scientist?

The roadmap given to aspiring data scientists can be boiled down to three steps:

  1. Earning an undergraduate and/or advanced degree in computer science, statistics, or mathematics,
  2. Building their portfolio of SQL, Python, and R skills, and
  3. Getting related work experience through technical internships.

All three require a significant time and financial commitment.

There used to be a saying around data science: the road into data science starts with two years of university-level math.

What Should I Learn? What Order Do I Learn Them?

This answer assumes your academic background ends with a HS diploma in the US.

  1. Python
  2. Differential Calculus
  3. Integral Calculus
  4. Multivariable Calculus
  5. Linear Algebra
  6. Probability
  7. Statistics

Some follow up questions and answers:

Why Python first?

  • Python is a general purpose language. R is used primarily by statisticians. In the likely scenario that you decide data science requires too much time, effort, and money, Python will be more valuable than your R skills. It’s preparing you to fail, sure, but in the same way a savings account is preparing you to fail.

When do I start working with data?

  • You’ll start working with data when you’ve learned enough Python to do so. Whether you’ll have the tools to have any fun is a much more open-ended question.

How long will this take me?

  • Assuming self-study and average intelligence, 3-5 years from start to finish.

How Do I Learn Python?

If you don’t know the first thing about programming, start with MIT’s course in the curated list.

These modules are the standard tools for data analysis in Python:

Curated Threads & Resources

  1. MIT’s Introduction to Computer Science and Programming in Python – A free, archived course taught at MIT in the fall 2016 semester.
  2. Data Scientist with Python Career Track | DataCamp – The first courses are free, but unlimited access costs $29/month. Users usually report a positive experience, and it’s one of the better hands-on ways to learn Python.
  3. Sentdex’s (Harrison Kinsley) Youtube Channel – Related to Python Programming Tutorials
  4. /r/learnpython – An active sub and very useful for learning the basics.

How Do I Learn R?

If you don’t know the first thing about programming, start with R for Data Science in the curated list.

These packages are the standard tools for data analysis in R:

Curated Threads & Resources

  1. R for Data Science by Hadley Wickham – A free ebook full of succinct code examples. Terrific for learning tidyverse syntax. Folks with some math background may prefer the free alternative, Introduction to Statistical Learning.
  2. Data Scientist with R Career Track | DataCamp – The first courses are free, but unlimited access costs $29/month. Users usually report a positive experience, and it’s one of the few hands-on ways to learn R.
  3. R Inferno – Learners with a CS background will appreciate this free handbook explaining how and why R behaves the way that it does.

How Do I Learn SQL?

Prioritize the basics of SQL, i.e., when to use functions like POW, SUM, and RANK, and the computational complexity of the different kinds of joins.

Concepts like relational algebra, when to use clustered/non-clustered indexes, etc. are useful, but (almost) never come up in interviews.

You absolutely do not need to understand administrative concepts like managing permissions.

Finally, there are numerous query engines and therefore numerous dialects of SQL. Use whichever dialect is supported in your chosen resource. There’s not much difference between them, so it’s easy to learn another dialect after you’ve learned one.
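To have something runnable to practice those basics on, here is a minimal sketch using Python's built-in sqlite3 module as a throwaway playground (window functions such as RANK need SQLite 3.25 or newer); the tables and values are toy data, and other dialects differ only slightly.

```python
# Minimal sketch: a JOIN, a GROUP BY aggregate, and RANK on toy data in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'EU'), (2, 'US'), (3, 'EU');
    INSERT INTO orders VALUES (1, 10.0), (1, 25.0), (2, 60.0), (3, 5.0);
""")

query = """
    WITH region_revenue AS (
        SELECT c.region, SUM(o.amount) AS revenue
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        GROUP BY c.region
    )
    SELECT region, revenue,
           RANK() OVER (ORDER BY revenue DESC) AS revenue_rank
    FROM region_revenue
"""
for row in conn.execute(query):
    print(row)  # ('US', 60.0, 1) then ('EU', 40.0, 2)
```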

Curated Threads & Resources

  1. The SQL Tutorial for Data Analysis | Mode.com
  2. Introduction to Databases – A free MOOC supported by Stanford University.
  3. SQL Queries for Mere Mortals – A $30 book highly recommended by /u/karmanujan

How Do I Learn Calculus?

Fortunately (or unfortunately), calculus is the lament of many students, and so resources for it are plentiful. Khan Academy mimics lectures very well, and Paul’s Online Math Notes are a terrific reference full of practice problems and solutions.

Calculus, however, is not just calculus. For those unfamiliar with US terminology,

  • Calculus I is differential calculus.
  • Calculus II is integral calculus.
  • Calculus III is multivariable calculus.
  • Calculus IV is differential equations.

Differential and integral calculus are both necessary for probability and statistics, and should be completed first.

Multivariable calculus can be paired with linear algebra, but is also required.

Differential equations is where consensus falls apart. The short of it is: they’re all but necessary for mathematical modeling, but not everyone does mathematical modeling. It’s another tool in the toolbox.

Curated Threads & Resources

How Do I Learn Probability?

Probability is not friendly to beginners. Definitions are rooted in higher mathematics, notation varies from source to source, and solutions are frequently unintuitive. Probability may present the biggest barrier to entry in data science.

It’s best to pick a single primary source and a community for help. If you can spend the money, register for a university or community college course and attend in person.

The best free resource is MIT’s 18.05 Introduction to Probability and Statistics (Spring 2014). Leverage /r/learnmath, /r/learnmachinelearning, and /r/AskStatistics when you get inevitably stuck.

How Do I Learn Linear Algebra?

Curated Threads & Resources

  • https://www.youtube.com/watch?v=fNk_zzaMoSs&index=1&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

What does the typical data science interview process look like?

For general advice, Mastering the DS Interview Loop is a terrific article. The community discussed the article here.

Briefly summarized, most companies follow a five stage process:

  1. Coding Challenge: Most common at software companies and roles contributing to a digital product.
  2. HR Screen
  3. Technical Screen: Often in the form of a project. Less frequently, it takes the form of a whiteboarding session at the onsite.
  4. Onsite: Usually the project from the technical screen is presented here, followed by a meeting with the director overseeing the team you’ll join.
  5. Negotiation & Offer

Preparation:

  1. Practice questions on Leetcode which has both SQL and traditional data structures/algorithm questions
  2. Review Brilliant for math and statistics questions.
  3. SQL Zoo and Mode Analytics both offer various SQL exercises you can solve in your browser.

Tips:

  1. Before you start coding, read through all the questions. This allows your unconscious mind to start working on problems in the background.
  2. Start with the hardest problem first; when you hit a snag, move to a simpler problem before returning to the harder one.
  3. Focus on passing all the test cases first, then worry about improving complexity and readability.
  4. If you’re done and have a few minutes left, go get a drink and try to clear your head. Read through your solutions one last time, then submit.
  5. It’s okay to not finish a coding challenge. Sometimes companies will create unreasonably tedious coding challenges with one-week time limits that require 5–10 hours to complete. Unless you’re desperate, you can always walk away and spend your time preparing for the next interview.

Remember, interviewing is a skill that can be learned, just like anything else. Hopefully, this article has given you some insight on what to expect in a data science interview loop.

The process also isn’t perfect and there will be times that you fail to impress an interviewer because you don’t possess some obscure piece of knowledge. However, with repeated persistence and adequate preparation, you’ll be able to land a data science job in no time!

What does the Airbnb data science interview process look like? [Coming soon]

What does the Facebook data science interview process look like? [Coming soon]

What does the Uber data science interview process look like? [Coming soon]

What does the Microsoft data science interview process look like? [Coming soon]

What does the Google data science interview process look like? [Coming soon]

What does the Netflix data science interview process look like? [Coming soon]

What does the Apple data science interview process look like? [Coming soon]

Question: How is SQL used in real data science jobs?

Real-life enterprise databases are orders of magnitude more complex than the “customers, products, orders” examples used as teaching tools. SQL as a language is actually, IMO, relatively simple (the DB administration component can get complex, but data scientists mostly aren’t doing that anyway). SQL is nonetheless an incredibly important skill for any DS role.

I think when people emphasize SQL, what they are really talking about is the ability to write queries that interrogate the data and discover the nuances behind how it is collected and/or manipulated by an application before it is written to the DB. For example, is the employee’s phone number their current phone number, or does the database store a history of all previous phone numbers? These are critically important questions for understanding the nature of your data, and they don’t necessarily have anything to do with statistics. The level of syntax required to do this is not that sophisticated; you can get pretty far with knowledge of all the joins, GROUP BY/analytical functions, filtering, and nested queries. In many cases the data is too large to just SELECT * and dump into a CSV to load into pandas, so you start with SQL against the source.

In my mind, it’s more important for “SQL skills” to mean knowing how to generate hypotheses (that build up to answering your business question) that can be investigated via a query than it is to be a master of SQL’s syntax. Just my two cents, though!
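A minimal sketch of that workflow in Python; the connection string, table names (employees, phone_numbers), and columns are hypothetical, chosen to mirror the phone-number example above.

```python
# Minimal sketch: push filtering/aggregation into SQL and pull only the result into pandas,
# instead of SELECT * and dumping the whole table to CSV.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@warehouse-host/analytics")  # placeholder

query = """
    SELECT e.employee_id,
           MAX(p.updated_at) AS latest_phone_update,
           COUNT(*)          AS phone_number_versions
    FROM employees e
    JOIN phone_numbers p ON p.employee_id = e.employee_id
    GROUP BY e.employee_id
"""
df = pd.read_sql(query, engine)  # only the aggregated rows cross the wire
```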

12,000 Years of Human Population Dynamics

[OC] 12,000 years of human population dynamics from dataisbeautiful

Human population density estimates based on the Hyde 3.2 model.

Countries with the Most Nuclear Warheads

[OC] Countries with the Most Nuclear Warheads from dataisbeautiful

Data Source: Here


Capitol insurrection arrests per million people by state

[OC] Capitol insurrection arrests per million people by state from dataisbeautiful

Data Source: Made in Google Sheets using data from this USA Today article (for the number of arrests by arrestee’s home state) and this spreadsheet of the results of the 2020 Census (for the population of each state and DC in 2020, which was used as the denominator in calculating arrests/million people).