AWS Azure Google Cloud Certifications Testimonials and Dumps


Do you want to become a Professional DevOps Engineer, a cloud Solutions Architect, a Cloud Engineer, a modern Developer or IT Professional, a versatile Product Manager, or a hip Project Manager? If so, cloud skills and certifications can be just the thing you need to make the move into the cloud or to level up and advance your career.

85% of hiring managers say cloud certifications make a candidate more attractive.

Build the skills that’ll drive your career into six figures.

In this blog, we share AWS, Azure, and GCP cloud certification testimonials and frequently asked questions and answers dumps.

 

PASSED AWS CCP (2022)

AWS Cloud Practitioner CCP CLF-C01 Certification Exam Prep

Went through the entire CloudAcademy course. Most of the info went out the other ear. Got a 67% on their final exam. Took the ExamPro free exam, got 69%.

Was going to take it last Saturday, but I bought TutorialsDojo’s exams on Udemy. Did one Friday night, got a 50%, and rescheduled it a week later to today, Sunday.



Took 4 total TD exams. Got a 50%, 54%, 67%, and 64%. Even up until last night I hated the TD exams with a passion; I thought they were covering way too much stuff that didn’t even pop up in the study guides I read. Their wording for some problems was also atrocious. But looking back, the bulk of my “studying” was going through their pretty well written explanations, and their links to the white papers let me know what and where to read.

Not sure what score I got yet on the exam. As someone who always hated testing, I’m pretty proud of myself. I also had to take a dump really bad starting at around question 25. Thanks to TutorialsDojo Jon Bonso for completely destroying my confidence before the exam, forcing me to up my game. It’s better to walk in way over prepared than underprepared.

Just Passed My CCP exam today (within 2 weeks)

I would like to thank this community for the recommendations about exam preparation. It was wayyyy easier than I expected (also way easier than TD’s scenario-based practice questions; the real exam is a lot less wordy). I felt so unready before the exam that I rescheduled it twice. Quick tip: if you have limited time to prepare for this exam, I would recommend scheduling the exam beforehand so that you don’t procrastinate.


Resources:

-Stephane’s course on Udemy (I have seen people say to skip the hands-on videos, but I found them extremely helpful for understanding most of the concepts, so try not to skip them)

-Tutorials Dojo practice exams (I did only 3.5 practice tests out of 5 and already got 8-10 EXACTLY worded questions on my real exam)

Previous AWS knowledge:

-Very little to none (deployed my group’s app to the cloud via Elastic Beanstalk in college; had zero clue at the time about what I was doing, but had clear guidelines)

Preparation duration: ~2 weeks (honestly, watched videos for 12 days and then went over the summary and practice tests on the last two days)

Links to resources:

https://www.udemy.com/course/aws-certified-cloud-practitioner-new/

https://tutorialsdojo.com/courses/aws-certified-cloud-practitioner-practice-exams/

I used Stephane Maarek on Udemy. Purchased his course and the 6 Practice Exams. Also got Neal Davis’ 500 practice questions on Udemy. I took Stephane’s class over 2 days, then spent the next 2 weeks going over the tests (3-4 per day) till I was consistently getting over 80%. Passed my exam with an 882.


Passed – CCP CLF-C01

 

What an adventure. I’d never really given thought to getting a cert until one day it just dawned on me that it’s one of the few credentials that are globally accepted. So you can approach any company and basically prove you know what’s up on AWS 😀

Passed with two weeks of prep (after work and weekends)

Resources Used:

  • https://www.exampro.co/

    • This was just a nice structured presentation that also gives you the powerpoint slides plus cheatsheets and a nice overview of what is said in each video lecture.

  • Udemy – AWS Certified Cloud Practitioner Practice Exams, created by Jon Bonso, Tutorials Dojo

    • These are some good prep exams; they ask the questions in a way that actually makes you think about the related AWS service, with only a few “Bullshit! That was asked in a confusing way” questions that popped up.

Passed AWS CCP. The score was beyond what I expected

I took the CCP 2 days ago and got the pass notification right after submitting the answers. About 3 hours later I got an email from Credly for the badge. This morning I got an official email from AWS congratulating me on passing; the score was much higher than I expected. I took Stephane Maarek’s CCP course and his 6 demo exams, then Neal Davis’ 500 questions as well. On the demo exams, I failed one and passed the rest with about 700-800. But on the real exam, I got an 860. The questions in the real exam are somewhat less verbose IMO, but I don’t really agree with some people on this sub saying that they are easier.
Just a little bit of sharing, now I’ll find something to continue ^^

Good luck with your own exams.

Passed the exam! Spent 25 minutes answering all the questions. Another 10 to review. I might come back and update this post with my actual score.

Background

– A year of experience working with AWS (e.g., EC2, Elastic Beanstalk, Route 53, and Amplify).

– Cloud development on AWS is not my strong suit. I just Google everything, so my knowledge is very spotty. Less so now since I studied for this exam.

Study stats


– Spent three weeks studying for the exam.

– Studied an hour to two every day.

– Solved 800-1000 practice questions.

– Took 450 screenshots of practice questions and technology/service descriptions as reference notes to quickly sift through on my phone and computer for review. Screenshots were of questions that I either didn’t know, knew but was iffy on, or believed I’d easily forget.

– Made 15-20 pages of notes. Chill. Nothing crazy. This is on A4 paper. Free-form note taking. With big diagrams. Around 60-80 words per page.

– I was getting low-to-mid 70%s on Neal Davis’s and Stephane Maarek’s practice exams. Highest score I got was an 80%.

– I got a 67(?)% on one of Stephane Maarek’s exams. The only sub-70% I ever got on any practice test. I got slightly anxious. But given how much harder Maarek’s exams are compared to the actual exam, the anxiety was undue.

– Finishing the practice exams on time was never a problem for me. I would finish all of them comfortably within 35 minutes.

Resources used

– AWS Cloud Practitioner Essentials on the AWS Training and Certification Portal

– AWS Certified Cloud Practitioner Practice Tests (Book) by Neal Davis

– 6 Practice Exams | AWS Certified Cloud Practitioner CLF-C01 by Stephane Maarek**

– Certified Cloud Practitioner Course by Exam Pro (Paid Version)*

– One or two free practice exams found by a quick Google search

*Regarding Exam Pro: I went through about 40% of the video lectures. I went through all the videos in the first few sections but felt that watching the lectures was too slow and laborious even at 1.5-2x speed. (The creator, for the most part, reads off of the slides, adding brief comments here and there.) So, I decided to only watch the video lectures for sections I didn’t have a good grasp on. (I believe the video lectures provided in the course are just split versions of the full length course available for free on YouTube under the freeCodeCamp channel, here.) The online course provides five practice exams. I did not take any of them.

**Regarding Stephane Maarek: I only took his practice exams. I did not take his study guide course.

Notes

– My study regimen (i.e., an hour to two every day for three weeks) was overkill.

– The questions on the practice exams created by Neal Davis and Stephane Maarek were significantly harder than those on the actual exam. I believe I could’ve passed without touching any of these resources.

– I retook one or two practice exams out of the 10+ I’ve taken. I don’t think there’s a need to retake the exams as long as you are diligent about studying the questions and underlying concepts you got wrong. I reviewed all the questions I missed on every practice exam the day before.

What would I do differently?

– Focus on practice tests only. No video lectures.

– Focus on the technologies domain. You can intuit your way through questions in the other domains.

– Chill

AWS SAA-C02 SAA-C03 Exam Prep

Just passed SAA-C03, thoughts on it

 
  • Lots of the comments here about networking / VPC questions being prevalent are true. Also so many damn Aurora questions, it was like a presales chat.

  • The questions are actually quite detailed, as some have already mentioned. So pay close attention to the minute details. Some questions you definitely have to flag for re-review.

  • It is by far harder than the Developer Associate exam, despite it having a broader scope. The DVA-C02 exam was like doing a speedrun but this felt like finishing off Sigrun on GoW. Ya gotta take your time.

I took the TD practice exams. They somewhat helped, but having intimate knowledge of VPC and DB concepts would help more.

Passed SAA-C03 – Feedback

Just passed the SAA-C03 exam (864) and wanted to provide some feedback since that was helpful for me when I was browsing here before the exam.

I come from an IT background and have strong knowledge of the VPC portion, so that section was a breeze for me in the preparation process (I had never used AWS before this, so everything else was new, but the concepts were somewhat familiar given my background). I started my preparation about a month ago and used the Maarek class on Udemy. Once I finished the class and reviewed my notes, I moved on to Maarek’s 6 practice exams (on Udemy). I wasn’t doing extremely well on the PEs (I passed 4/6 of the exams with grades in the 70s). I reviewed the exam questions after each exam and moved on to the next. I also purchased Tutorials Dojo’s 6-exam set but only ended up taking one out of 6 (which I passed).

Overall, the practice exams ended up being a lot harder than the real exam, which had mostly the regular/base topics: a LOT of S3 and storage in general, a decent amount of migration questions, only a couple of questions on VPCs, and no ML/AI stuff.

My Study Guide for passing the SAA-C03 exam

Sharing the study guide that I followed when I prepared for the AWS Certified Solutions Architect Associate SAA-C03 exam. I passed this test and thought of sharing a real exam experience in taking this challenging test.

First off, my background: I have 8 years of development experience and have been using AWS for several projects, both personally and at work. I studied for a total of 2 months, focused on the official Exam Guide, and carefully studied the Task Statements and related AWS services.

SAA-C03 Exam Prep

For my exam prep, I bought the Adrian Cantrill video course and the Tutorials Dojo (TD) video course and practice exams. Adrian’s course is just right and highly educational, but like others have said, the content is long and covers more than just the exam. I did all of the hands-on labs too and played around with some machine learning services in my AWS account.

The TD video course is short and a good overall summary of the topics you’ve just learned. One TD lesson covers multiple topics, so the content is highly concise. After I completed Adrian’s video course, I used TD’s video course as a refresher, did a couple of their hands-on labs, then headed on to their practice exams.

For the TD practice exams, I took the exams in chronological order and didn’t jump back and forth until I completed all the tests. I first tried all 7 timed-mode tests and reviewed every wrong answer on every attempt, then the 6 review-mode tests and the section/topic-based tests. I took the final-test mode roughly 3 times, and this is by far one of the most helpful features of the website IMO. The final-test mode generates a unique set from the whole TD question bank, so every attempt was challenging for me. I also noticed that the course progress doesn’t move if you fail a specific test, so I retook the tests that I failed.

The Actual SAA-C03 Exam

The actual AWS exam is almost the same as the TD tests, in that:

  • All of the questions are scenario-based

  • There are two (or more) valid solutions in the question, e.g:

    • Need SSL: options are ACM and self-signed URL

    • Need to store DB credentials: options are SSM Parameter Store and Secrets Manager (see the sketch after this list)

  • The scenarios are long-winded and ask for:

    • MOST Operationally efficient solution

    • MOST cost-effective

    • LEAST amount of overhead
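For the DB credentials scenario above, here is a minimal boto3 (Python) sketch of reading each option; the secret and parameter names are hypothetical, and the usual trade-off is built-in rotation (Secrets Manager) versus a cheaper simple store (SSM Parameter Store SecureString):

```python
import boto3

# Hypothetical names -- substitute your own secret/parameter identifiers.
secrets = boto3.client("secretsmanager")
db_credentials = secrets.get_secret_value(
    SecretId="prod/app/db-credentials"
)["SecretString"]  # the JSON string you stored in the secret

ssm = boto3.client("ssm")
db_password = ssm.get_parameter(
    Name="/prod/app/db-password",
    WithDecryption=True,  # decrypt the SecureString with its KMS key
)["Parameter"]["Value"]
```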

Overall, I enjoyed the exam and felt fully prepared while taking the test, thanks to Adrian and TD, but that doesn’t mean the whole darn thing is easy. You really need to put in some elbow grease and keep your headlights on when preparing for this exam. Good luck to all, and I hope my study guide helps anyone who is struggling.

Another Passed SAA-C03?

Just another thread about passing the exam. I passed SAA-C03 yesterday and would like to share my experience of how I earned the certification.

Background:

– graduate with a networking background

– working experience in on-premises infrastructure automation, mainly using Ansible, Python, Zabbix, etc.

– cloud experience: a short period, around 3-6 months, with hands-on practice

– provisioned cloud applications using Terraform in Azure and AWS

Course that I used fully:

– AWS Certified Solutions Architect – Associate (SAA-C03) | learn.cantri (cantrill.io)

– AWS Certified Solutions Architect Associate Exam – SAA-C03 Study Path (tutorialsdojo.com)

Course that I used partially or little:

– Ultimate AWS Certified Solutions Architect Associate (SAA) | Udemy

– Practice Exams | AWS Certified Solutions Architect Associate | Udemy

Lab that I used:

– Free tier account with cantrill instruction

– Acloudguru lab and sandbox

– Percepio lab

Comment on course:

Cantrill’s course is in-depth, with a lot of practical knowledge (such as email aliases, etc.); check it out to learn more.

The Tutorials Dojo practice exams helped me filter the answers and guided me to the correct ones. If I was wrong on a specific topic, I rewatched the Cantrill video. There are some topics not covered by Cantrill, but the guidelines/reviews in the practice exams provide plenty of detail. I did all the other modes before the timed-based mode, then averaged around 850 on the timed-based exams and scored 63/65 on the final practice exam. However, the real exam is harder than the practice exams, in my opinion.

Udemy course and practice exams: I went through some of them, but I think the practice exams are quite hard compared to Tutorials Dojo.

Labs – just get your hands dirty and the knowledge will sink deep into your brain. My advice is not to just copy-and-paste the labs, but to really read the description of each parameter in the AWS portal.

Advice:

you need to know some general exam topics, like how to handle the following (a small sketch follows the list):

– S3 private access

– EC2 availability

– Kinesis products, including Firehose, Data Streams, etc.

– IAM
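For the S3 private access item above, a minimal boto3 (Python) sketch, assuming a hypothetical bucket name, of enforcing the Block Public Access settings on a bucket:

```python
import boto3

s3 = boto3.client("s3")

# Block all forms of public access on a hypothetical bucket.
s3.put_public_access_block(
    Bucket="example-exam-prep-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```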

My next targets are AWS SAP and CKA. I’m still searching for suitable material for AWS SAP, but I plan to mainly use the ACloudGuru sandbox and a home lab to learn the subject, and to practice with Cantrill’s labs on GitHub.

Good luck anyone!

Passed SAA

I wanted to share my personal experience. I have a background in IT, but I had never worked in AWS until 5 weeks ago. I got my Cloud Practitioner in a week and the SAA after another 4 weeks of studying (2-4 hours a day). I used Cantrill’s course and Tutorials Dojo practice exams. I highly, highly recommend this combo. I don’t think I would have passed without the practice exams, as they are quite difficult. In my opinion, they are much more difficult than the actual exam. They really hit the mark on what kind of content you will see. I got a 777, and that’s with getting 70-80%s on the practice exams. I probably could have done better, but I had a really rough night of sleep and I came down with a cold. I was really on the struggle bus halfway through the test.

I only had a couple of questions on ML/AI, so make sure you know the differences between them all. Lots of S3 and EC2. You really need to know these inside and out.

My company is offering stipends for each certification, so I’m going straight to Developer next.

Recently passed SAA-C03

Just passed my SAA-C03 yesterday with 961 points. My first AWS certification. I used Cantrill’s course. Went through the course materials twice and took around 6 months to study, but that’s mostly due to my busy schedule. I found his materials very detailed; they probably go beyond what you’d need for the actual exam.

I also used Stephane’s practice exams on Udemy. I’d say doing these was instrumental in my passing, to get used to the type of questions in the actual exam and to review missing knowledge. Would not have passed otherwise.

Just a heads-up: a few things popped up that I did not see in the course materials or practice exams:

* Lake Formation: question about pooling data from RDS and S3, as well as controlling access.

* S3 Requester Pays: question about minimizing S3 data cost when sharing with a partner (see the sketch below).

* Pinpoint journeys: question about customers replying to sent-out SMS messages and then storing their feedback.

Not sure if they are graded or if Amazon is testing out new questions.
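For the S3 Requester Pays scenario above, a minimal boto3 (Python) sketch with hypothetical bucket and key names; enabling Requester Pays shifts request and data-transfer costs to the account that downloads the objects, which must then acknowledge the charge on each request:

```python
import boto3

s3 = boto3.client("s3")

# Make the partner (the requester) pay for requests and data transfer.
s3.put_bucket_request_payment(
    Bucket="example-shared-dataset",          # hypothetical bucket
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# A requester must acknowledge the charge on each call.
s3.get_object(
    Bucket="example-shared-dataset",
    Key="data/export.csv",                    # hypothetical key
    RequestPayer="requester",
)
```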

Cheers.

Another SAP-C01-Pass

Received my notification this morning that I passed with an 811.

Prep Time: 10 weeks 2hrs a day

Materials: Neal Davis videos/practice exams, Jon Bonso practice exams, white papers, misc. YouTube videos, some hands-on

Prof Experience: 4 years AWS using main services as architect

AWS Certs: CCP-SSA-DVA-SAP(now)

Thoughts: The exam was way more familiar to me than the Developer exam. I use very few AWS developer tools but mainly use core AWS services. Neal’s videos were very straightforward, easy to digest, and on point. I was able to watch most of the videos on a plane flight to Vegas.

After the video series I started to hit his section-based exams, main exam, and notes, and followed up with some hands-on. I was getting destroyed on some of the exams early on and had to rewatch and research the topics, writing notes. There is a lot of nuance and fine detail in the topics; you’ll see this when you take the practice exams. These little details matter.

Bonso’s exams were nothing less than awesome, as per usual. Same difficulty and quality as Neal Davis. I followed the same routine: section-based exams followed by the final exams. I believe Neal said to aim for 80s on his final exams before sitting the real exam. I’d agree, because that’s where I was a week before the exam (mid 80s). Both Neal’s and Jon’s exams were on par with the exam difficulty, if not a shade more difficult.

The exam itself was very straightforward. In my experience the questions were not overly verbose and were straight to the point compared to the practice exams I took. I was able to quickly narrow down the questions and make a selection. Flagged 8 questions along the way and had 30 minutes to review all my answers. Unlike some people, I didn’t feel like it was a brain melter and actually enjoyed the challenge. Maybe I’m a masochist, who knows.

Advice: Follow Neal’s plan, bone up on weak areas, and be confident. These questions have a pattern based on the domain. Doing the practice exams enough will allow you to see the pattern, and then research will confirm your suspicions. You can pass this exam!

Good luck to those preparing now, and godspeed.

 
AWS Developer Associate DVA-C01 Exam Prep
 
 
 

I Passed AWS Developer Associate Certification DVA-C01 Testimonials

AWS Developer and Deployment Theory: Facts and Summaries and Questions/Answers

Passed DVA-C01

Passed the certified developer associate this week.

Primary study was Stephane Maarek’s course on Udemy.

I also used the Practice Exams by Stephane Maarek and Abhishek Singh.

I used Stephane’s course and practice exams for the Solutions Architect Associate as well, and find his course does a good job preparing you to pass the exams.

The practice exams were more challenging than the actual exam, so they are a good gauge to see if you are ready for the exam.

Haven’t decided if I’ll do another associate level certification next or try for the solutions architect professional.

Cleared AWS Certified Developer – Associate (DVA-C01)

 

I cleared the Developer Associate exam yesterday. I scored 873.
Actual exam experience: questions were focused mainly on Lambda, API Gateway, DynamoDB, CloudFront, and Cognito (you must know the proper difference between user pools and identity pools).
I found 3 questions just on Redis vs. Memcached (so maybe focus more here as well, to know the exact use cases and differences). Other topics were CloudFormation, Elastic Beanstalk, STS, and EC2. The exam was a mix of too easy and too tough for me; some questions were one-liners and some were far too long.

Resources: The main resources I used were on Udemy: Stéphane Maarek’s course and the practice exams from Neal Davis and Stéphane Maarek. These exams proved really good and even helped me focus on the areas where I was lacking. They are up to the level of the actual exam; I found 3-4 exactly the same questions in the actual exam (this might just be luck!). So I feel Stephane’s course is more than sufficient and you can trust it. I had achieved Solutions Architect Associate previously, so I knew the basics; I took around 2 weeks for preparation and revised Stephane’s course as much as possible. In parallel, I took the mentioned practice exams, which guided me on where to focus more.

Thanks to all of you and feel free to comment/DM me, if you think I can help you in anyway for achieving the same.

Another Passed Associate Developer Exam (DVA-C01)

I had already passed the Associate Architect exam (SAA-C03) 3 months ago, so I was much more relaxed going into this exam. I did the exam with Pearson VUE at home with no problems. Used Adrian Cantrill for the course together with the TD exams.

Studied 2 weeks at 1-2 hours a day, since there is a big overlap with the Associate Architect course, even though the exam has a different approach, more focused on the serverless side of AWS. Lots of DynamoDB, Lambda, API Gateway, KMS, CloudFormation, SAM, SSO, Cognito (user pools and identity pools), and IAM role/credentials best practices.

I do think in terms of difficulty it was a bit easier than the Associate Architect, though maybe that’s just in my mind since it was my second exam and I went in a bit more relaxed.

The next step is the Associate SysOps; I will use the Adrian Cantrill and Stephane Maarek courses, as it is said to be the most difficult associate exam.

Passed the SCS-C01 Security Specialty 


A mixture of Tutorials Dojo practice exams, the A Cloud Guru course, and Neal Davis’ course & exams helped a lot. Some unexpected questions caught me off guard, but with educated guessing, thanks to the material I studied, I was able to overcome them. It’s important to understand (a minimal KMS sketch follows the list):

  1. KMS Keys

    1. AWS Owned Keys

    2. AWS Managed KMS keys

    3. Customer Managed Keys

    4. asymmetrical

    5. symmetrical

    6. Imported key material

    7. What services can use AWS Managed Keys

  2. KMS Rotation Policies

    1. Depending on the key matters the rotation that can be applied (if possible)

  3. Key Policies

    1. Grants (temporary access)

    2. Cross-account grants

    3. Permanent policies

    4. How permissions are distributed depending on the assigned principal

  4. IAM Policy format

    1. Principals (supported principals)

    2. Conditions

    3. Actions

    4. Allow to a service (ARN or public AWS URL)

    5. Roles

  5. Secrets Management

    1. Credential Rotation

    2. Secure String types

    3. Parameter Store

    4. AWS Secrets Manager

  6. Route 53

    1. DNSSEC

    2. DNS Logging

  7. Network

    1. AWS Network Firewall

    2. AWS WAF (some questions try to trick you into thinking AWS Shield is needed instead)

    3. AWS Shield

    4. Security Groups (Stateful)

    5. NACL (Stateless)

    6. Ephemeral Ports

    7. VPC FlowLogs

  8. AWS Config

    1. Rules

    2. Remediation (custom or AWS managed)

  9. AWS CloudTrail

    1. AWS Organization Trails

    2. Multi-Region Trails

    3. Centralized S3 Bucket for multi-account log aggregation

  10. AWS GuardDuty vs AWS Macie vs AWS Inspector vs AWS Detective vs AWS Security Hub
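To make the KMS items above concrete, here is a minimal boto3 (Python) sketch, assuming a hypothetical account ID and role name, of creating a symmetric customer managed key, enabling automatic rotation, and attaching a simple key policy that grants an application role use of the key:

```python
import json
import boto3

kms = boto3.client("kms")

# Create a symmetric customer managed key (CMK).
key = kms.create_key(
    Description="example customer managed key",
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
)
key_id = key["KeyMetadata"]["KeyId"]

# Automatic rotation applies to symmetric keys with AWS-generated
# material; keys with imported key material must be rotated manually.
kms.enable_key_rotation(KeyId=key_id)

# Minimal key policy: the account root keeps full control, and a
# hypothetical application role is granted use of the key.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}
kms.put_key_policy(
    KeyId=key_id, PolicyName="default", Policy=json.dumps(key_policy)
)
```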

It gets more in-depth; I’m willing to help out anyone who has questions. If you don’t mind joining my Discord to discuss with others and help each other out, that would be great. A study group community. Thanks.

https://discord.gg/pZbEnhuEY9

Passed the Security Specialty

Passed Security Specialty yesterday.

Resources used were:

Adrian (for the labs) and Jon (for the test bank).

Total time spent studying was about a week due to the overlap with the SA Pro I passed a couple weeks ago.

Now working on getting Networking Specialty before the year ends.

My longer term goal is to have all the certs by end of next year.

 

Advanced Networking – Specialty

Passed AWS Certified Advanced Networking – Specialty ANS-C01 2 days ago

 

This was a tough exam.

Here’s what I used to get prepped:

Exam guide book by Kam Agahian and a group of authors – this just got released and has all you need in a concise manual. It also includes 3 practice exams and is a must-buy for future reference; it covers ALL current exam topics, including container networking, SD-WAN, etc.

Stephane Maarek’s Udemy course – it is mostly up to date with the main exam topics, including TGW, Network Firewall, etc. To-the-point lectures with lots of hands-on demos that give you just what you need; highly recommended as well!

Tutorials Dojo practice tests to drive it home – this helped me get an idea of the question wording, so I could train myself to read fast, pick out key words, compare similar answers, and build confidence in my knowledge.

Crammed daily for 4 weeks (after work; I have a full-time job + family), then went in and nailed it. I do have a networking background (15+ years) and am currently working as a cloud security engineer, working with AWS daily, especially EKS, TGW, GWLB, etc.

For those not from a networking background – it would definitely take longer to prep.

Good luck!

 
 
 
 
Azure Fundamentals AZ900 Certification Exam Prep

 

Passed AZ-900, SC-900, AI-900, and DP-900 within 6 weeks!

 
Achievement Celebration

What an exciting journey. I think AZ-900 was the hardest, probably because it was my first Microsoft certification. After that, the others were fair enough. AI-900 was the easiest.

I generally used Microsoft Virtual Training Day, Cloud Ready Skills, MeasureUp, and John Savill’s videos. Having built fundamental knowledge of the cloud, I am planning to do the AWS CCP next. Wish me luck!

Passed Azure Fundamentals

 
Learning Material

Hi all,

I passed my Azure fundamentals exam a couple of days ago, with a score of 900/1000. Been meaning to take the exam for a few months but I kept putting it off for various reasons. The exam was a lot easier than I thought and easier than the official Microsoft practice exams.

Study materials:

  • A Cloud Guru AZ-900 fundamentals course with practice exams

  • Official Microsoft practice exams

  • MS learning path

  • John Savill’s AZ-900 study cram, started this a day or two before my exam. (Highly Recommended) https://www.youtube.com/watch?v=tQp1YkB2Tgs&t=4s

Will be taking my AZ-104 exam next.

Azure Administrator AZ104 Certification Exam Prep

Passed AZ-104 with about a 6 weeks prep

 
Learning Material

Resources =

John Savill’s AZ-104 Exam Cram + Master Class; Tutorials Dojo Practice Exams

John’s content is the best out there right now for this exam IMHO. I watched the cram, then the entire master class, followed by the cram again.

The Tutorials Dojo practice exams are essential. Some questions on the actual exam were almost word-for-word what I saw on the practice exams.

Question:

What’s everyone using for the AZ-305? Obviously, already using John’s content, and from what I’ve read the 305 isn’t too bad.

Thoughts?

Passed the AZ-140 today!!

 
Achievement Celebration

I passed the (updated?) AZ-140, AVD specialty exam today with an 844. First MS certification in the bag!

Edited to add: This video series from Azure Academy was a TON of help.

https://youtube.com/playlist?list=PL-V4YVm6AmwW1DBM25pwWYd1Lxs84ILZT

Passed DP-900

 
Achievement Celebration

I am pretty proud of this one. Databases are an area of IT where I haven’t spent a lot of time, and what time I have spent has been with SQL or MySQL with old school relational databases. NoSQL was kinda breaking my brain for a while.

Study Materials:

  1. Microsoft Virtual Training Day, got the voucher for the free exam. I know several people on here said that was enough for them to pass the test, but that most certainly was not enough for me.

  2. Exampro.co DP-900 course and practice test. They include virtual flashcards which I really liked.

  3. Whizlabs.com practice tests. I also used the course to fill in gaps in my testing.

Passed AI-900! Tips & Resources Included!!

Azure AI Fundamentals AI-900 Exam Prep
 
Achievement Celebration

Huge thanks to this subreddit for helping me kick start my Azure journey. I have over 2 decades of experience in IT and this is my 3rd Azure certification as I already have AZ-900 and DP-900.

Here’s the order in which I passed my AWS and Azure certifications:

SAA>DVA>SOA>DOP>SAP>CLF|AZ-900>DP-900>AI-900

I have no plans to take this certification now but had to as the free voucher is expiring in a couple of days. So I started preparing on Friday and took the exam on Sunday. But give it more time if you can.

Here’s my study plan for AZ-900 and DP-900 exams:

  • finish a popular video course aimed at the cert

  • watch John Savill’s study/exam cram

  • take multiple practice exams scoring in 90s

This is what I used for AI-900:

  • Alan Rodrigues’ video course (includes 2 practice exams) 👌

  • John Savill’s study cram 💪

  • practice exams by Scott Duffy and in 28Minutes Official 👍

  • knowledge checks in AI modules from MS learn docs 🙌

I also found the below notes to be extremely useful as a refresher. It can be played multiple times throughout your preparation as the exam cram part is just around 20 minutes.

https://youtu.be/utknpvV40L0 👏

Just be clear on the topics explained in the above video and you’ll pass AI-900. I advise you to watch this video at the start, middle, and end of your preparation. All the best in your exam!

Just passed AZ-104

 
Achievement Celebration

I recommend studying networking, as almost all of the questions are related to this topic. Also, AAD is a big one. Lots of load balancers, VNETs, and NSGs.

Received very little of this:

  • Containers

  • Storage

  • Monitoring

I passed with a 710 but a pass is a pass haha.

Used Tutorials Dojo, but the closest questions I found were in the Udemy practice exams.

Regards,

Passed GCP Professional Cloud Architect

Google Professional Cloud Architect Practice Exam 2022
 

First of all, I would like to start with the fact that I already have around 1 year of in-depth experience with GCP, where I was working on GKE, IAM, storage, and so on. I also obtained the GCP Associate Cloud Engineer certification back in June, which helped with the preparation.

I started with Dan Sullivan’s Udemy course for the Professional Cloud Architect and did some refreshing on the topics I was not familiar with, such as Bigtable, BigQuery, Dataflow, and all that. His videos on the case studies help a lot in understanding what each case study scenario requires for designing the best cost-effective architecture.

In order to understand the services in depth, I also went through the GCP documentation for each service at least once. It’s quite useful for knowing the syntax of the GCP commands and some miscellaneous information.

As for practice exam, I definitely recommend Whizlabs. It helped me prepare for the areas I was weak at and helped me grasp the topics a lot faster than reading through the documentation. It will also help you understand what kind of questions will appear for the exam.

I used TutorialsDojo (Jon Bonso) for preparation for Associate Cloud Engineer before and I can attest that Whizlabs is not that good. However, Whizlabs still helps a lot in tackling the tough questions that you will come across during the examination.

One thing to note is that, there wasn’t even a single question that was similar to the ones from Whizlabs practice tests. I am saying this from the perspective of the content of the questions. I got totally different scenarios for both case study and non case study questions. Many questions focused on App Engine, Data analytics and networking. There were some Kubernetes questions based on Anthos, and cluster networking. I got a tough question regarding storage as well.

I initially thought I would fail, but I pushed on and started tackling the multiple-choices based on process of elimination using the keywords in the questions. 50 questions in 2 hours is a tough one, especially due to the lengthy questions and multiple choices. I do not know how this compares to AWS Solutions Architect Professional exam in toughness. But some people do say GCP professional is tougher than AWS.

All in all, I still recommend this certification to people who are working with GCP. It’s a tough one to crack and could be useful for future prospects. It’s a bummer that it’s only valid for 2 years.

GCP Associate Cloud Engineer Exam Prep

Passed GCP: Cloud Digital Leader

Hi everyone,

First, thanks for all the posts people share. It helps me prep for my own exam. I passed the GCP: Cloud Digital Leader exam today and wanted to share a few things about my experience.

Preparation

I have access to A Cloud Guru (ACG) and Udemy through work. I started one of the Udemy courses first, but it was clear the course went beyond the scope of the Cloud Digital Leader certification. I switched over to ACG and enjoyed the content a lot more. The videos were short and the instructor hit all the topics on the Google exam requirements sheet.

ACG also has three 50-question practice tests. The practice tests are harder than the actual exam (and the practice tests aren’t that hard).

I don’t know if someone could pass the test if they just watched the videos on Google Cloud’s certification site, especially if you had no experience with GCP.

Overall, I would say I spent 20 hours preparing for the exam. I have my CISSP and I’m working on my CCSP. After taking the test, I realized I had way over-prepared.

Exam Center

It was my first time at this testing center and I wasn’t happy with the experience. A few of the issues I had are:

– My personal items (phone, keys) were placed in an unlocked filing cabinet

– My desk area was dirty. There were eraser shavings (or something similar) and I had to move the keyboard and mouse and brush all the debris out of my workspace

– The laminated sheet they gave me looked like someone had spilled Kool-Aid on it

– They only offered earplugs, instead of noise cancelling headphones

Exam

My recommendation for the exam is to know the Digital Transformation piece as well as you know all the GCP services and what they do.

I wish you all luck on your future exams. Onto GCP: Associate Cloud Engineer.

Passed the Google Cloud: Associate Cloud Engineer

Hey all, I was able to pass the Google Cloud: Associate Cloud Engineer exam in 27 days.

I studied about 3-5 hours every single day.

I created this note to share the resources I used to pass the exam.

Happy studying!

GCP ACE Exam Aced

Hi folks,

I am glad to share that I cleared my GCP ACE exam today and would like to share my preparation with you:

1) I completed these courses from Coursera:

1.1 Google Cloud Platform Fundamentals – Core Infrastructure

1.2 Essential Cloud Infrastructure: Foundation

1.3 Essential Cloud Infrastructure: Core Services

1.4 Elastic Google Cloud Infrastructure: Scaling and Automation

After these courses, I did a couple of Qwiklabs quests, listed in order:

2 Getting Started: Create and Manage Cloud Resources (Qwiklabs Quest)

   2.1 A Tour of Qwiklabs and Google Cloud

   2.2 Creating a Virtual Machine

   2.3 Compute Engine: Qwik Start – Windows

   2.4 Getting Started with Cloud Shell and gcloud

   2.5 Kubernetes Engine: Qwik Start

   2.6 Set Up Network and HTTP Load Balancers

   2.7 Create and Manage Cloud Resources: Challenge Lab

 3 Set up and Configure a Cloud Environment in Google Cloud (Qwiklabs Quest)

   3.1 Cloud IAM: Qwik Start

   3.2 Introduction to SQL for BigQuery and Cloud SQL

   3.3 Multiple VPC Networks

   3.4 Cloud Monitoring: Qwik Start

   3.5 Deployment Manager – Full Production [ACE]

   3.6 Managing Deployments Using Kubernetes Engine

   3.7 Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab

 4 Kubernetes in Google Cloud (Qwiklabs Quest)

   4.1 Introduction to Docker

   4.2 Kubernetes Engine: Qwik Start

   4.3 Orchestrating the Cloud with Kubernetes

   4.4 Managing Deployments Using Kubernetes Engine

   4.5 Continuous Delivery with Jenkins in Kubernetes Engine

After these courses, I did the following for mock exam preparation:

  1. Jon Bonso’s Tutorials Dojo GCP ACE practice exams

  2. Udemy course:

https://www.udemy.com/course/google-associate-cloud-engineer-practice-exams-2021-d/learn/quiz/5278722/results?expanded=591254338#overview

And yes folks, this took me 3 months to prepare. So take your time and prepare well.


Comparison of AWS vs Azure vs Google

Cloud computing has revolutionized the way companies develop applications. Most of the modern applications are now cloud native. Undoubtedly, the cloud offers immense benefits like reduced infrastructure maintenance, increased availability, cost reduction, and many others.

However, choosing a cloud vendor is a challenge in itself. If we look at the cloud computing landscape, the three main providers that come to mind are AWS, Azure, and Google Cloud. Today, we will compare the top three cloud giants and see how they differ. We will compare their services, specialties, and pros and cons. After reading this article, you will be able to decide which cloud vendor is best suited to your needs and why.

History and establishment

AWS

AWS is the oldest player in the market, operating since 2006. Here’s a brief history of AWS and how computing has changed. Being the first in the cloud industry, it has gained a particular advantage over its competitors. It offers more than 200 services to its users. Some of its notable clients include:

  • Netflix
  • Expedia
  • Airbnb
  • Coursera
  • FDA
  • Coca Cola

Azure

Azure by Microsoft started in 2010. Although it started four years later than AWS, it is catching up quite fast. Azure is Microsoft’s public cloud platform which is why many companies prefer to use Azure for their Microsoft-based applications. It also offers more than 200 services and products. Some of its prominent clients include:

  • HP
  • Asus
  • Mitsubishi
  • 3M
  • Starbucks
  • CDC (Centers for Disease Control and Prevention), USA
  • National Health Service (NHS), UK

Google

Google Cloud also started in 2010. Its arsenal of cloud services is relatively smaller compared to AWS or Azure. It offers around 100+ services. However, its services are robust, and many companies embrace Google cloud for its specialty services. Some of its noteworthy clients include:

  • PayPal
  • UPS
  • Toyota
  • Twitter
  • Spotify
  • Unilever

Market share & growth rate

If you look at the market share and growth chart below, you will notice that AWS has been leading for more than four years. Azure is also expanding fast, but it still has a long way to go to catch up with AWS.

However, in terms of revenue, Azure is ahead of AWS. In Q1 2022, AWS revenue was $18.44 billion; Azure earned $23.4 billion, while Google cloud earned $5.8 billion.

Availability Zones (Data Centers)

When comparing cloud vendors, it is essential to see how many regions and availability zones are offered. Here is a quick comparison between all three cloud vendors in terms of regions and data centers:

AWS

AWS operates in 25 regions and 81 availability zones. It also offers 218+ edge locations and 12 regional edge caches. You can utilize the edge locations and edge caches in services like Amazon CloudFront and AWS Global Accelerator.

Azure

Azure has 66 regions worldwide and a minimum of three availability zones in each region. It also offers more than 116 edge locations.

Google

Google has a presence in 27 regions and 82 availability zones. It also offers 146 edge locations.

All three cloud giants are continuously expanding. Both AWS and Azure offer data centers in China to specifically cater to Chinese customers. At the same time, Azure seems to have broader coverage than its competitors.

Comparison of common cloud services

Let’s look at the standard cloud services offered by these vendors.

Compute

Amazon’s primary compute offering is EC2 instances, which are very easy to operate. Amazon also provides a low-cost option called Amazon Lightsail, which is a perfect fit for those who are new to computing and have a limited budget. AWS charges for EC2 instances only while they are running. Azure’s compute offering is also based on virtual machines. Google is no different and offers virtual machines in Google’s data centers. Here’s a brief comparison of the compute offerings of all three vendors:
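As a quick, hedged illustration of how the EC2 API is driven programmatically, the boto3 (Python) snippet below launches a single on-demand instance; the AMI ID and key pair name are placeholders, not real values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one on-demand instance; substitute your own AMI and key pair.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # hypothetical key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```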

Storage

All three vendors offer various forms of storage, including object-based storage, cold storage, file-based storage, and block-based storage. Here’s a brief comparison of all three:

Database

All three vendors offer managed database services. They also offer NoSQL as well as document-based databases. AWS additionally provides a proprietary RDBMS named Aurora, a highly scalable and fast database offering compatible with both MySQL and PostgreSQL. Here’s a brief comparison of all three vendors:
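As a small sketch of the managed-database model, the boto3 (Python) snippet below (identifiers and credentials are placeholders) creates an Aurora MySQL cluster and then adds one instance to it, reflecting Aurora’s split between a cluster (shared storage and endpoints) and its compute instances:

```python
import boto3

rds = boto3.client("rds")

# 1) Create the Aurora cluster (shared storage volume + endpoints).
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-cluster",   # placeholder name
    Engine="aurora-mysql",
    MasterUsername="admin",                      # placeholder
    MasterUserPassword="REPLACE_WITH_SECRET",    # placeholder; prefer Secrets Manager
)

# 2) Add a compute instance to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-instance-1",
    DBClusterIdentifier="demo-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```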

Comparison of Specialized services

All three major cloud providers are competing with each other in the latest technologies. Some notable areas of competition include ML/AI, robotics, DevOps, IoT, VR/Gaming, etc. Here are some of the key specialties of all three vendors.

AWS

Being the first in the cloud market has many benefits, and Amazon has certainly taken advantage of that. Amazon has advanced specifically in AI and machine learning related tools. AWS DeepLens is an AI-powered camera that you can use to develop and deploy machine learning algorithms; it helps you with OCR and image recognition. Similarly, Amazon has launched an open-source library called Gluon, which helps with deep learning and neural networks. You can use this library to learn how neural networks work, even if you lack a technical background. Another service that Amazon offers is SageMaker, which you can use to train and deploy your machine learning models. Amazon’s portfolio also includes the Lex conversational interface (the backbone of Alexa), Lambda, and the Greengrass IoT messaging service.

Another unique (and recent) offering from AWS is AWS IoT TwinMaker. This service can create digital twins of real-world systems like factories, buildings, production lines, etc.

AWS even provides a quantum computing service called Amazon Braket.

Azure

Azure excels where you are already using some Microsoft products, especially on-premises Microsoft products. Organizations already using Microsoft products prefer to use Azure instead of other cloud vendors because Azure offers a better and more robust integration with Microsoft products.

Azure has excellent ML/AI and cognitive services. Some notable services include the Bing Web Search API, Face API, Computer Vision API, Text Analytics API, etc.

Google

Google is the current leader among cloud providers regarding AI. This is because of its open-source library TensorFlow, the most popular framework for developing machine learning applications. Vertex AI and BigQuery Omni are also useful services offered lately. Similarly, Google offers rich services for NLP, translation, speech, etc.

Pros and Cons

Let’s summarize the pros and cons for all three cloud vendors:

AWS

Pros:

  • An extensive list of services
  • Huge market share
  • Support for large businesses
  • Global reach

Cons:

  • Pricing model. Many companies struggle to understand the cost structure. Although AWS has improved the UX of its cost-related reporting in the AWS console, many companies still hesitate to use AWS because of a perceived lack of cost transparency

Azure

Pros:

  • Excellent integration with Microsoft tools and software
  • Broader feature set
  • Support for open source

Cons:

  • Geared towards enterprise customers

Google

Pros:

  • Strong integration with open source tools
  • Flexible contracts
  • Good DevOps services
  • The most cost-efficient
  • The preferred choice for startups
  • Good ML/AI-based services

Cons:

  • A limited number of services as compared to AWS and Azure
  • Limited support for enterprise use cases

Career Prospects

Keen to learn which vendor’s cloud certification you should go for? Here is a brief comparison of the top three cloud certifications and their related career prospects:

AWS

As mentioned earlier, AWS has the largest market share compared to other cloud vendors. That means more companies are using AWS, and there are more vacancies in the market for AWS-certified professionals. Here are the main reasons why you might choose to learn AWS:

Azure

Azure is the second largest cloud service provider. It is ideal for companies that are already using Microsoft products. Here are the top reasons why you would choose to learn Azure:

  • Ideal for experienced users of Microsoft services
  • Azure certifications rank among the top paying IT certifications
  • If you’re applying for a company that primarily uses Microsoft Services

Google

Although Google is considered an underdog in the cloud market, it is slowly catching up. Here’s why you may choose to learn GCP.

  • While there are fewer job postings, there is also less competition in the market
  • GCP certifications rank among the top paying IT certifications

Most valuable IT Certifications

Keen to learn about the top-paying cloud certifications and jobs? If you look at the annual salary figures below, you can see the average salary for different cloud vendors and IT companies; no wonder AWS is on top. A GCP Cloud Architect is also one of the top five. The Azure Architect comes in at #9.

Which cloud certification to choose depends mainly on your career goals and what type of organization you want to work for. No cloud certification path is better than the other. What matters most is getting started and making progress towards your career goals. Even if you decide at a later point in time to switch to a different cloud provider, you’ll still benefit from what you previously learned.

Over time, you may decide to get certified in all three – so you can provide solutions that vary from one cloud service provider to the next.

Don’t get stuck in analysis-paralysis! If in doubt, simply get started with AWS certifications that are the most sought-after in the market – especially if you are at the very beginning of your cloud journey. The good news is that you can become an AWS expert when enrolling in our value-packed training.

Further Reading

You may also be interested in the following articles:

https://digitalcloud.training/entry-level-cloud-computing-jobs-roles-and-responsibilities/
https://digitalcloud.training/aws-vs-azure-vs-google-cloud-certifications-which-is-better/
https://digitalcloud.training/10-tips-on-how-to-enter-the-cloud-computing-industry/
https://digitalcloud.training/top-paying-cloud-certifications-and-jobs/
https://digitalcloud.training/are-aws-certifications-worth-it/

Source:

https://digitalcloud.training/comparison-of-aws-vs-azure-vs-google/


Get it on Apple Books
Get it on Apple Books

  • Passed AZ-700!
    by /u/icebreaker374 (Microsoft Azure Certifications) on April 20, 2024 at 12:19 am

    That was a HARD exam! The WAF, AppGW, and FW questions are what tripped me up. Got a 711 on it so I'm happy. I also have my AZ-500 and AZ-900. What's the 104 like? Lot of PS/CLI or a variety of things? submitted by /u/icebreaker374 [link] [comments]

  • Passed DP-203!
    by /u/Cleveland_Steve (Microsoft Azure Certifications) on April 19, 2024 at 8:31 pm

    After a horrible testing experience, I passed DP-203 today! I chose to take the exam at a Person testing center. I have never had a problem taking my exams this way but today was a complete mess. There were many people taking exams, so there were people constantly entering and leaving the room. Then there was someone two computers away who needed to listen to a video for their exam. The problem was that their headset was not working. The proctor decided to unplug the head set and let the audio play through the speakers on the monitor. So, as I’m trying to focus on my questions, I am trying to block out that playing in the background. When I was about halfway through my test, the testing center started to have internet connection issues. Everyone’s exams were completely freezing and we were just looking at each other as the proctors ran around in a panic. Eventually the exams re-connected and continued, but there seemed to be lag throughout the rest of the exam. Towards the end of my exam, I felt like I was not performing very well with all of the problems. I thought I failed. When I clicked the finish exam button and the “Congratulations!” screen appeared I could not believe my eyes. I’m so glad that one is over. submitted by /u/Cleveland_Steve [link] [comments]

  • ADHD accomodation for exams
    by /u/Personal-Ad9152 (Microsoft Azure Certifications) on April 19, 2024 at 7:18 pm

    I was just told about this from a coworker. You can request up to an extra 100% of the time allowed on the exam if you have certain diagnosis. I have adhd and all the paperwor that shows my diagnosis. I was wondering if anyone has ever applied for the exemption and what process was like. I applied and my app is at tier4, started at tier 1 on monday. Any insight is appreciated. Thanks yall. submitted by /u/Personal-Ad9152 [link] [comments]

  • Azure Resume Challenge Question: Custom Domain and GoDaddy
    by /u/d-weezy2284 (Microsoft Azure Certifications) on April 19, 2024 at 7:00 pm

    So I have my CDN endpoint xxxx.azureedge.net that needs to be added as a DNS record to my GoDaddy domain as a CNAME type named WWW with data of xxxx.azureedge.net but I can't add it as there is already a CNAME named WWW. I don't experience this issue if it was a static website that wasn't based off of blob storage as I could just enter an A record. Would it be better to just simply forward the GoDaddy domain to the endpoint or to create a DNS zone in Azure and just update the name servers in GoDaddy or am I missing something obvious and haven't dug far enough? submitted by /u/d-weezy2284 [link] [comments]

  • James Lee Azure courses offered in Adrian Cantrill bundle worth it?
    by /u/ascension1110 (Microsoft Azure Certifications) on April 19, 2024 at 4:23 pm

    We know the quality of Adrian courses for AWS is excellent and detailed. I've courses from Adrian for AWS and was thinking about the courses offered by James Lee for Azure on Cantrill.io , if Azure courses are also worth buying? Really appreciate your inputs submitted by /u/ascension1110 [link] [comments]

  • Are Azure cloud engineer jobs in NYC or remote in US devops or site relability engineer jobs, and involve reading documentation of APIs like a developer would have to read them, or not? Do you have to know a lot of programming like a developer or only basic scripting?
    by /u/total_cornerstone (Microsoft Azure Certifications) on April 19, 2024 at 5:37 am

    I would like to not read documentation of APIs if I can do that. Do you not have to do that, or do that much, even at a devops or SRE job? Like would you have to read something like this as a cloud engineer, devops engineer or SRE: Docs: API Reference, Tutorials, and Integration | Twilio I'm curious about this. Thanks. submitted by /u/total_cornerstone [link] [comments]

  • Day left to renew az104
    by /u/sectestpen1 (Microsoft Azure Certifications) on April 18, 2024 at 8:59 pm

    Had a loss in the family and flew to Europe. Came back and got notified that certification is lapsing within 2 days. I did the assessment with googling and got 54%, and the needed 57%. I googled 3 different sources and chatgpt, so not sure how they all got answers wrong. It can pass the bar, but not az104. Got time for 1 more attempt. Is there a more concise study guide other than their 23 hours worth of materials? submitted by /u/sectestpen1 [link] [comments]

  • Passed the AZ-305 certification
    by /u/sabhy (Microsoft Azure Certifications) on April 18, 2024 at 8:38 pm

    I am really happy that I could finally get certified. I scored 850 on the exam which is similar to what I was averaging on practice tests. I passed the Az-104 back in November 2023 and was trying to decide whether I want to do Az-400 or AZ-305 next. Finally decided on Az-305 as I want to be more generalist on the azure platform. The exam was not hard but the questions were asked in such a way that multiple answers seemed plausible. The case study was the hardest in my opinion. I do alot of architectural work on Azure at my job so I guess that helped me alot. To those attempting AZ-305, I would advise to spend alot of time on Azure portal creating labs and playing with multiple azure products. submitted by /u/sabhy [link] [comments]

  • What can I expect in AZ900?
    by /u/Basaker (Microsoft Azure Certifications) on April 18, 2024 at 3:59 pm

    I'm about to take AZ900 what can I expect? are there like the practice exams on Microsoft Learn? is it multiple choice? submitted by /u/Basaker [link] [comments]

  • Passed the AZ-900 just now! Sharing what I did below
    by /u/ShooBum-T (Microsoft Azure Certifications) on April 18, 2024 at 12:54 pm

    Going through Microsoft learn course for AZ900 I would say, is pointless, as it in itself is not sufficient. Take any YouTube or Udemy course, run through it once without going in too much details and then just bombard yourself with Practice Tests. I just did these two Udemy Courses for tests. https://www.udemy.com/course/microsoft-azure-az-900-practice-tests-latest-2020/learn/quiz/4900690#overview https://www.udemy.com/course/az900-azure-tests/learn/quiz/4700490/results?expanded=1096958798#overview After that did the Practice Test on Microsoft Learn , actual test was quite similar. Pretty basic stuff. Anyways thats what I did. Now onto AZ-204. P.S : If anyone could tell me where I would be able to download the certificate , that'd be great. Thanks. 😂 submitted by /u/ShooBum-T [link] [comments]

  • AZ 104 50% on first practice exam
    by /u/flyingflapjacks22 (Microsoft Azure Certifications) on April 18, 2024 at 12:33 pm

    submitted by /u/flyingflapjacks22 [link] [comments]

  • Azure Certification Exam
    by /u/techexpert2018 (Microsoft Azure Certifications) on April 18, 2024 at 12:11 pm

    Dear all, I have scheduled an Azure exam on the 30th of April. If I want to take the exam without rescheduling, will selecting "Go to the exam" work, or not? Please advise. https://preview.redd.it/jszp82xxb8vc1.jpg?width=1294&format=pjpg&auto=webp&s=9b82f6cb5cc7167e6901d07195eb1e66bca8e223 submitted by /u/techexpert2018 [link] [comments]

  • why is my payment method not being processed
    by /u/logicdata (Microsoft Azure Certifications) on April 18, 2024 at 10:43 am

    https://preview.redd.it/1lfqob6ov7vc1.png?width=2466&format=png&auto=webp&s=cb7c56b4e298e182772c21451db2630b0411221e i keep getting this payment error no matter the payment method i try, i need help please submitted by /u/logicdata [link] [comments]

  • whats the easiest associate certification?
    by /u/batmanhasacold (Microsoft Azure Certifications) on April 18, 2024 at 10:22 am

    I know the title is slightly misleading; what I really mean is: where would you rank each certification in terms of difficulty? For example, is AZ-104 the easiest or hardest compared to AZ-204, AZ-500, etc.? Of course it's highly dependent on everyone's experience, so let's set a baseline: fundamental knowledge of Azure and its services, something akin to AZ-900. Where would you rank the other certifications, and what is their relevance for a cloud, DevOps, or even IT support (level 1/2/3) role? Curious to hear where people would rank them. submitted by /u/batmanhasacold [link] [comments]

  • Got my Data Fundamentals certification. Would it better to get DP 100 next or AWS Cloud Practitioner ?
    by /u/m3m3zzz (Microsoft Azure Certifications) on April 18, 2024 at 4:03 am

    I'm a college student and which one would help me most in my job search ? submitted by /u/m3m3zzz [link] [comments]

  • Up-to-Date Instructional Screenshots and Videos
    by /u/d-weezy2284 (Microsoft Azure Certifications) on April 17, 2024 at 6:44 pm

    I apologize if this just comes off as a rant post, but does anyone have documentation of common Azure actions (S2S Connections, Azure Front Door, etc) that is a little more up to date in terms of it's pictures and naming in the step-by-step guides? In trying to learn and go through projects, the directions seem to be dated in terms of their screenshots and instructions of what is a mandatory field when using the Azure portal when the dates on things seem to be a little over a year ago. Even MS Learn documents have things that are no longer "quite right". Maybe it's just how my brain works, but having to figure out what something is now called from a document takes away from me remembering what I'm actually supposed to know and do when I get to those points in guidance. submitted by /u/d-weezy2284 [link] [comments]

  • AZ-104: Which Microsoft Labs should I pair with which Savill videos?
    by /u/dontforgetthesalsa (Microsoft Azure Certifications) on April 17, 2024 at 5:53 pm

    I need some advice for preparing for the AZ-104 that should be able to help other fellow students as well. I passed the AZ-900 a few weeks ago with a 960 score and used the John Savill az900 playlist plus exam cram to study. Now I want to prepare for the 104. I really like Savill and want to keep using his videos, but I recognize that for both the exam as well as my job, I need some hands on labs to practice. I found these Microsoft guided labs that seem like they would be quite helpful: https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/ . I figure that the best way to go about this is to watch the Savill videos, and then do the related labs immediately after. My question is, which of those labs should I pair with each Savill video? Here is a link to the Savill playlist I am referring to: https://www.youtube.com/playlist?list=PLlVtbbG169nGccbp8VSpAozu3w9xSQJoY submitted by /u/dontforgetthesalsa [link] [comments]

  • GCP Professional Cloud DevOps Engineer Exam
    by /u/Beginning_Ad_3972 (Google Cloud Platform Certification) on April 17, 2024 at 4:50 pm

    I'm gearing up to prep for the GCP Professional Cloud DevOps Engineer certification. Can anyone recommend top-notch resources, practice questions, and study materials that closely mimic the exam's question patterns? Thanks in advance! submitted by /u/Beginning_Ad_3972 [link] [comments]

  • Can't decide between AI-102 and DP-100
    by /u/Prize_Barracuda_5060 (Microsoft Azure Certifications) on April 17, 2024 at 3:11 pm

    I've completed the on going AI skills challenge and have the option of either giving the AI-102 exam or the DP-100 exam and I'm confused as to which one to pick and which one is easier. I already have the AI-900 and my background is in web development (Mern and Next.js) I'm currently an undergrad CS student in my final year if that helps. I think both of them are quite similar and I'm leaning towards the AI-102 exam but don't know how hard it would be as I would rather choose the easier of the two. submitted by /u/Prize_Barracuda_5060 [link] [comments]

  • Sc-200
    by /u/Foreign_Dragonfly_12 (Microsoft Azure Certifications) on April 17, 2024 at 8:44 am

    Hello guys, I got a voucher to SC-200. Sorry if I’m repeating any other post, but what content do you recommend for studying? Thanks submitted by /u/Foreign_Dragonfly_12 [link] [comments]


Top-paying Cloud certifications:

Google Certified Professional Cloud Architect — $175,761/year
AWS Certified Solutions Architect – Associate — $149,446/year
Azure/Microsoft Cloud Solution Architect – $141,748/yr
Google Cloud Associate Engineer – $145,769/yr
AWS Certified Cloud Practitioner — $131,465/year
Microsoft Certified: Azure Fundamentals — $126,653/year
Microsoft Certified: Azure Administrator Associate — $125,993/year

Top 100 AWS Solutions Architect Associate Certification Exam Questions and Answers Dump SAA-C03

How do we know that the Top 3 Voice Recognition Devices like Siri Alexa and Ok Google are not spying on us?

Top 100 Data Science and Data Analytics and Data Engineering Interview Questions and Answers

Data Science Bias Variance Trade-off


Below are the Top 100 Data Science, Data Analytics, and Data Engineering Interview Questions and Answers.

What is Data Science? 

Data Science is a blend of various tools, algorithms, and machine learning principles with the goal of discovering hidden patterns in raw data. How is this different from what statisticians have been doing for years? The answer lies in the difference between explaining and predicting: statisticians tend to work a posteriori, explaining results that have already been observed and designing a plan around them; data scientists use historical data to build models that make predictions.

Top 100 Data Science and Data Analytics and Data Engineering Interview Questions and Answers
AWS Data Analytics DAS-C01 Exam Prep PRO App:
Very Similar to real exam, Countdown timer, Score card, Show/Hide Answers, Cheat Sheets, FlashCards, Detailed Answers and References
No ADS, Access All Quiz Detailed Answers, Reference and Score Card

How does data cleaning play a vital role in the analysis? 


Data cleaning can help in analysis because:


  • Cleaning data from multiple sources helps transform it into a format that data analysts or data scientists can work with.
  • Data Cleaning helps increase the accuracy of the model in machine learning.
  • It is a cumbersome process because as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data generated by these sources.
  • It can take up to 80% of the total time just to clean the data, making it a critical part of the analysis task.

What is linear regression? What do the terms p-value, coefficient, and r-squared value mean? What is the significance of each of these components?

2023 AWS Certified Machine Learning Specialty (MLS-C01) Practice Exams

Reference  

Imagine you want to predict the price of a house. That will depend on some factors, called independent variables, such as location, size, and year of construction. If we assume there is a linear relationship between these variables and the price (our dependent variable), then the price is predicted by the following function: Y = a + bX
The p-value in the regression table is the smallest significance level α at which the coefficient is considered relevant. The lower the p-value, the more important the variable is in predicting the price. Usually we set a 5% level, so that we have 95% confidence that the variable is relevant.
The p-value is used as an alternative to rejection points to provide the smallest level of significance at which the null hypothesis would be rejected. A smaller p-value means that there is stronger evidence in favor of the alternative hypothesis.
The coefficient value signifies how much the mean of the dependent variable changes given a one-unit shift in the independent variable while holding other variables in the model constant. This property of holding the other variables constant is crucial because it allows you to assess the effect of each variable in isolation from the others.
R squared (R2) is a statistical measure that represents the proportion of the variance for a dependent variable that’s explained by an independent variable or variables in a regression model.
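To make these terms concrete, here is a minimal sketch (not from the original article) using statsmodels on a made-up house-price example; the variable names and numbers are illustrative only:

```python
# Illustrative only: shows where coefficients, p-values, and R-squared appear in practice.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
size_sqft = rng.uniform(50, 250, 100)                              # independent variable X
price = 50_000 + 1_200 * size_sqft + rng.normal(0, 20_000, 100)    # dependent variable Y

X = sm.add_constant(size_sqft)        # adds the intercept term 'a' in Y = a + bX
model = sm.OLS(price, X).fit()

print(model.params)    # coefficients: intercept a and slope b
print(model.pvalues)   # p-value for each coefficient
print(model.rsquared)  # R-squared: share of variance explained
```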

Credit: Steve Nouri



What is sampling? How many sampling methods do you know? 

Reference

 

Data sampling is a statistical analysis technique used to select, manipulate and analyze a representative subset of data points to identify patterns and trends in the larger data set being examined. It enables data scientists, predictive modelers and other data analysts to work with a small, manageable amount of data about a statistical population to build and run analytical models more quickly, while still producing accurate findings.


Sampling can be particularly useful with data sets that are too large to efficiently analyze in full – for example, in big data analytics applications or surveys. Identifying and analyzing a representative sample is more efficient and cost-effective than surveying the entirety of the data or population.
An important consideration, though, is the size of the required data sample and the possibility of introducing a sampling error. In some cases, a small sample can reveal the most important information about a data set. In others, using a larger sample can increase the likelihood of accurately representing the data as a whole, even though the increased size of the sample may impede ease of manipulation and interpretation.
There are many different methods for drawing samples from data; the ideal one depends on the data set and situation. Sampling can be based on probability, an approach that uses random numbers that correspond to points in the data set to ensure that there is no correlation between points chosen for the sample. Further variations in probability sampling include:

• Simple random sampling: Software is used to randomly select subjects from the whole population.
• Stratified sampling: Subsets of the data set or population are created based on a common factor, and samples are randomly collected from each subgroup. A sample is drawn from each stratum using a random sampling method such as simple random sampling or systematic sampling. For example, say you need a sample size of 6: two members from each group (yellow, red, and blue) are selected randomly. Make sure to sample proportionally; in this simple example, 1/3 of each group (2/6 yellow, 2/6 red and 2/6 blue) has been sampled. If one group is a different size, adjust your proportions: with 9 yellow, 3 red and 3 blue, a 5-item sample would consist of 3/9 yellow (i.e. one third), 1/3 red and 1/3 blue.
• Cluster sampling: The larger data set is divided into subsets (clusters) based on a defined factor, then a random sample of clusters is analyzed. The sampling unit is the whole cluster; instead of sampling individuals from within each group, the researcher studies whole clusters. For example, with natural groupings by head color (yellow, red, blue) and a required sample size of 6, two complete clusters are selected randomly (say, groups 2 and 4).

Data Science: Stratified Sampling vs. Cluster Sampling

  • Multistage sampling: A more complicated form of cluster sampling, this method also involves dividing the larger population into a number of clusters. Second-stage clusters are then broken out based on a secondary factor, and those clusters are then sampled and analyzed. This staging could continue as multiple subsets are identified, clustered and analyzed.
    • Systematic sampling: A sample is created by setting an interval at which to extract data from the larger population – for example, selecting every 10th row in a spreadsheet of 200 items to create a sample size of 20 rows to analyze.
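As a rough illustration of the probability-based methods above, here is a small pandas sketch; the DataFrame and its 'color' strata column are hypothetical, for illustration only:

```python
# Illustrative only: simple random, stratified, and systematic sampling with pandas.
import pandas as pd

df = pd.DataFrame({
    "id": range(12),
    "color": ["yellow"] * 4 + ["red"] * 4 + ["blue"] * 4,
})

# Simple random sampling: 6 rows chosen at random
simple = df.sample(n=6, random_state=42)

# Stratified sampling: 2 rows drawn at random from each color group (stratum)
stratified = df.groupby("color").sample(n=2, random_state=42)

# Systematic sampling: every 2nd row, starting from the first
systematic = df.iloc[::2]

print(simple, stratified, systematic, sep="\n\n")
```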


Sampling can also be based on non-probability, an approach in which a data sample is determined and extracted based on the judgment of the analyst. As inclusion is determined by the analyst, it can be more difficult to extrapolate whether the sample accurately represents the larger population than when probability sampling is used.

Non-probability data sampling methods include:
• Convenience sampling: Data is collected from an easily accessible and available group.
• Consecutive sampling: Data is collected from every subject that meets the criteria until the predetermined sample size is met.
• Purposive or judgmental sampling: The researcher selects the data to sample based on predefined criteria.
• Quota sampling: The researcher ensures equal representation within the sample for all subgroups in the data set or population (random sampling is not used).

Quota sampling

Once generated, a sample can be used for predictive analytics. For example, a retail business might use data sampling to uncover patterns about customer behavior and predictive modeling to create more effective sales strategies.

Credit: Steve Nouri

What are the assumptions required for linear regression?

There are four major assumptions:


• There is a linear relationship between the dependent variables and the regressors, meaning the model you are creating actually fits the data,
• The errors or residuals of the data are normally distributed and independent from each other,
• There is minimal multicollinearity between explanatory variables, and
• Homoscedasticity. This means the variance around the regression line is the same for all values of the predictor variable.

What is a statistical interaction?

Reference: Statistical Interaction

Basically, an interaction is when the effect of one factor (input variable) on the dependent variable (output variable) differs among levels of another factor. When two or more independent variables are involved in a research design, there is more to consider than simply the “main effect” of each of the independent variables (also termed “factors”). That is, the effect of one independent variable on the dependent variable of interest may not be the same at all levels of the other independent variable; put differently, the effect of one independent variable may depend on the level of the other. In order to find an interaction, you must have a factorial design, in which the two (or more) independent variables are “crossed” with one another so that there are observations at every combination of levels of the two independent variables. For example, stress level and amount of practice may interact when memorizing words: the benefit of practice can differ depending on the stress level, so that together they may yield lower performance.

What is selection bias? 

Reference

Selection (or ‘sampling’) bias occurs when the sample data that is gathered and prepared for modeling has characteristics that are not representative of the true, future population of cases the model will see.
That is, active selection bias occurs when a subset of the data is systematically (i.e., non-randomly) excluded from analysis.

Selection bias is a kind of error that occurs when the researcher decides what has to be studied. It is associated with research where the selection of participants is not random. Therefore, some conclusions of the study may not be accurate.

The types of selection bias include:
• Sampling bias: a systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample.
• Time interval: a trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.
• Data: when specific subsets of data are chosen to support a conclusion, or bad data are rejected on arbitrary grounds instead of according to previously stated or generally agreed criteria.
• Attrition: attrition bias is a kind of selection bias caused by attrition (loss of participants), i.e. discounting trial subjects/tests that did not run to completion.

What is an example of a data set with a non-Gaussian distribution?

Reference

The Gaussian distribution is part of the Exponential family of distributions, but there are a lot more of them, with the same sort of ease of use, in many cases, and if the person doing the machine learning has a solid grounding in statistics, they can be utilized where appropriate.

• Binomial: multiple tosses of a coin, Bin(n, p). The binomial distribution gives the probabilities of each possible number of successes on n trials for independent events that each have probability p of occurring.


• Bernoulli: Bin(1, p) = Be(p), a single trial with probability p of success.
• Poisson: Pois(λ), the number of events in a fixed interval when events occur independently at a constant average rate λ.
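A quick numpy sketch (illustrative only) for drawing samples from the non-Gaussian distributions just listed:

```python
# Illustrative only: sampling from binomial, Bernoulli, and Poisson distributions.
import numpy as np

rng = np.random.default_rng(0)

binomial = rng.binomial(n=10, p=0.5, size=1000)   # Bin(10, 0.5): number of heads in 10 coin tosses
bernoulli = rng.binomial(n=1, p=0.3, size=1000)   # Be(0.3) = Bin(1, 0.3): a single trial
poisson = rng.poisson(lam=4.0, size=1000)         # Pois(4): event counts per interval

print(binomial.mean(), bernoulli.mean(), poisson.mean())
```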

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is error introduced in the model by an overly complex algorithm: it performs very well on the training set but poorly on the test set. It can lead to high sensitivity to the training data and overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.
3. The decision tree has low bias and high variance; to reduce variance, you can decrease the depth of the tree or use fewer attributes.
4. Linear regression has low variance and high bias; to reduce bias, you can increase the number of features or use another regression that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.

 

What is a confusion matrix?

The confusion matrix is a 2X2 table that contains 4 outputs provided by the binary classifier.


A data set used for performance evaluation is called a test data set. It should contain the correct (observed) labels and the predicted labels. If the binary classifier were perfect, the predicted labels would be exactly the same as the observed labels; in real-world scenarios they usually match only partially.
A binary classifier predicts all data instances of a test data set as either positive or negative, which produces four outcomes: TP (true positive), FP (false positive), TN (true negative), FN (false negative). Basic measures derived from the confusion matrix include accuracy = (TP + TN) / (TP + TN + FP + FN), precision = TP / (TP + FP), recall (sensitivity) = TP / (TP + FN), and specificity = TN / (TN + FP).
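A minimal scikit-learn sketch (with made-up label vectors) showing how the four outcomes are obtained in practice:

```python
# Illustrative only: extracting TP, FP, TN, FN from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # observed labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # labels predicted by a binary classifier

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} FP={fp} TN={tn} FN={fn}")
```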

What is the difference between “long” and “wide” format data?

In the wide-format, a subject’s repeated responses will be in a single row, and each response is in a separate column. In the long-format, each row is a one-time point per subject. You can recognize data in wide format by the fact that columns generally represent groups (variables).


difference between “long” and “wide” format data
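A small pandas sketch with hypothetical data, converting wide format to long and back:

```python
# Illustrative only: wide <-> long reshaping with pandas.
import pandas as pd

wide = pd.DataFrame({
    "subject": ["A", "B"],
    "t1": [5.0, 6.1],
    "t2": [5.4, 6.3],
})

# Wide -> long: one row per subject per time point
long = wide.melt(id_vars="subject", var_name="time", value_name="score")

# Long -> wide: repeated responses back into one row per subject
wide_again = long.pivot(index="subject", columns="time", values="score").reset_index()

print(long)
print(wide_again)
```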

What do you understand by the term Normal Distribution?

Data can be distributed in different ways, with a skew to the left or to the right, or it can be jumbled with no clear pattern. However, there are cases where data is distributed around a central value without any bias to the left or right, reaching a normal distribution in the form of a bell-shaped curve.

Data Science: Normal Distribution

The random variables are distributed in the form of a symmetrical, bell-shaped curve. Properties of Normal Distribution are as follows:

1. Unimodal (Only one mode)
2. Symmetrical (left and right halves are mirror images)
3. Bell-shaped (maximum height (mode) at the mean)
4. Mean, Mode, and Median are all located in the center
5. Asymptotic

What is correlation and covariance in statistics?

Correlation is considered or described as the best technique for measuring and also for estimating the quantitative relationship between two variables. Correlation measures how strongly two variables are related. Given two random variables, it is the covariance between both divided by the product of the two standard deviations of the single variables, hence always between -1 and 1.

correlation and covariance

Covariance is a measure that indicates the extent to which two random variables change together. It explains the systematic relationship between a pair of random variables, wherein a change in one variable is accompanied by a corresponding change in the other.

correlation and covariance in statistics
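A short numpy sketch on toy data, showing that correlation is covariance divided by the product of the standard deviations and therefore always lies between -1 and 1:

```python
# Illustrative only: relating covariance and correlation.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov_xy = np.cov(x, y)[0, 1]                                   # sample covariance
corr_xy = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))    # correlation from its definition

print(corr_xy, np.corrcoef(x, y)[0, 1])                       # the two values agree
```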

What is the difference between Point Estimates and Confidence Interval? 

Point Estimation gives us a particular value as an estimate of a population parameter. Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters.

A confidence interval gives us a range of values which is likely to contain the population parameter. The confidence interval is generally preferred, as it tells us how likely this interval is to contain the population parameter. This likeliness or probability is called the confidence level or confidence coefficient and is represented by 1 − α, where α is the level of significance.

What is the goal of A/B Testing?

It is a hypothesis testing for a randomized experiment with two variables A and B.
The goal of A/B Testing is to identify any changes to the web page to maximize or increase the outcome of interest. A/B testing is a fantastic method for figuring out the best online promotional and marketing strategies for your business. It can be used to test everything from website copy to sales emails to search ads. An example of this could be identifying the click-through rate for a banner ad.

What is p-value?

When you perform a hypothesis test in statistics, a p-value can help you determine the strength of your results. p-value is the minimum significance level at which you can reject the null hypothesis. The lower the p-value, the more likely you reject the null hypothesis.

What do you understand by statistical power of sensitivity and how do you calculate it? 

Sensitivity is commonly used to validate the accuracy of a classifier (Logistic Regression, SVM, Random Forest, etc.). Sensitivity = TP / (TP + FN), i.e. the proportion of actual positives that the classifier correctly identifies.

 

Why is Re-sampling done?

https://machinelearningmastery.com/statistical-sampling-and-resampling/

  • Sampling is an active process of gathering observations with the intent of estimating a population variable.
  • Resampling is a methodology of economically using a data sample to improve the accuracy and quantify the uncertainty of a population parameter. Resampling methods, in fact, make use of a nested resampling method.

Once we have a data sample, it can be used to estimate the population parameter. The problem is that we only have a single estimate of the population parameter, with little idea of the variability or uncertainty in the estimate. One way to address this is by estimating the population parameter multiple times from our data sample. This is called resampling. Statistical resampling methods are procedures that describe how to economically use available data to estimate a population parameter. The result can be both a more accurate estimate of the parameter (such as taking the mean of the estimates) and a quantification of the uncertainty of the estimate (such as adding a confidence interval).

Resampling methods are very easy to use, requiring little mathematical knowledge. A downside of the methods is that they can be computationally very expensive, requiring tens, hundreds, or even thousands of resamples in order to develop a robust estimate of the population parameter.

The key idea is to resample from the original data — either directly or via a fitted model — to create replicate datasets, from which the variability of the quantiles of interest can be assessed without longwinded and error-prone analytical calculation. Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Each new subsample from the original data sample is used to estimate the population parameter. The sample of estimated population parameters can then be considered with statistical tools in order to quantify the expected value and variance, providing measures of the uncertainty of the
estimate. Statistical sampling methods can be used in the selection of a subsample from the original sample.

A key difference is that process must be repeated multiple times. The problem with this is that there will be some relationship between the samples as observations that will be shared across multiple subsamples. This means that the subsamples and the estimated population parameters are not strictly identical and independently distributed. This has implications for statistical tests performed on the sample of estimated population parameters downstream, i.e. paired statistical tests may be required. 

Two commonly used resampling methods that you may encounter are k-fold cross-validation and the bootstrap.

  • Bootstrap. Samples are drawn from the dataset with replacement (allowing the same sample to appear more than once in the sample), where those instances not drawn into the data sample may be used for the test set.
  • k-fold Cross-Validation. A dataset is partitioned into k groups, where each group is given the opportunity of being used as a held out test set leaving the remaining groups as the training set. The k-fold cross-validation method specifically lends itself to use in the evaluation of predictive models that are repeatedly trained on one subset of the data and evaluated on a second held-out subset of the data.  

Resampling is done in any of these cases:

  • Estimating the accuracy of sample statistics by using subsets of accessible data or drawing randomly with replacement from a set of data points
  • Substituting labels on data points when performing significance tests
  • Validating models by using random subsets (bootstrapping, cross-validation)
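As a rough illustration of the bootstrap mentioned above, here is a minimal numpy sketch on synthetic data: resample with replacement, re-estimate the mean each time, and use the spread of the estimates to quantify uncertainty:

```python
# Illustrative only: bootstrap estimate of a mean and its 95% confidence interval.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=50)   # the single data sample we have

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()   # resample with replacement
    for _ in range(1000)
])

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"estimate={boot_means.mean():.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```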

What are the differences between over-fitting and under-fitting?

In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on general untrained data.

In overfitting, a statistical model describes random error or noise instead of the underlying relationship.
Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted has poor predictive performance, as it overreacts to minor fluctuations in the training data.

Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data.
Such a model too would have poor predictive performance.

 

How to combat Overfitting and Underfitting?

To combat overfitting:
1. Add noise
2. Feature selection
3. Increase training set
4. L2 (ridge) or L1 (lasso) regularization; L1 can drive some weights to exactly zero, while L2 only shrinks them
5. Use cross-validation techniques, such as k folds cross-validation
6. Boosting and bagging
7. Dropout technique
8. Perform early stopping
9. Remove inner layers
To combat underfitting:
1. Add features
2. Increase time of training

What is regularization? Why is it useful?

Regularization is the process of adding a tuning parameter (penalty term) to a model to induce smoothness and prevent overfitting. This is most often done by adding a constant multiple of a norm of the existing weight vector, typically the L1 norm (Lasso, |β|) or the squared L2 norm (Ridge, β²). The model predictions should then minimize the loss function calculated on the regularized training set.
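A minimal scikit-learn sketch on synthetic data, using Ridge for the L2 penalty and Lasso for the L1 penalty; alpha is the penalty strength, and note how Lasso typically drives uninformative weights to exactly zero:

```python
# Illustrative only: L1 (Lasso) vs. L2 (Ridge) regularization on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)   # only 2 informative features

ridge = Ridge(alpha=1.0).fit(X, y)   # L2: shrinks all weights toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1: tends to zero out uninformative weights

print("ridge:", np.round(ridge.coef_, 3))
print("lasso:", np.round(lasso.coef_, 3))
```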

What Is the Law of Large Numbers? 

It is a theorem that describes the result of performing the same experiment a large number of times. This theorem forms the basis of frequency-style thinking. It says that the sample means, the sample variance and the sample standard deviation converge to what they are trying to estimate. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.

What Are Confounding Variables?

In statistics, a confounder is a variable that influences both the dependent variable and independent variable.

If you are researching whether a lack of exercise leads to weight gain:
lack of exercise = independent variable
weight gain = dependent variable
A confounding variable here would be any other variable that affects both of these variables, such as the age of the subject.

What is Survivorship Bias?

It is the logical error of focusing on the aspects that survived some process and casually overlooking those that did not because of their lack of prominence. This can lead to wrong conclusions in numerous ways. For example, during a recession, if you look only at the businesses that survived, you may note that they are performing poorly; yet they performed better than the rest, which failed and were therefore removed from the time series.

Explain how a ROC curve works?

The ROC curve is a graphical representation of the contrast between true positive rates and false positive rates at various thresholds. It is often used as a proxy for the trade-off between the sensitivity (true positive rate) and false positive rate.

Data Science ROC Curve
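A brief sketch with made-up scores showing how the curve's points (false positive rate vs. true positive rate at varying thresholds) are computed with scikit-learn:

```python
# Illustrative only: computing ROC points and the area under the curve.
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.65, 0.5]   # classifier scores/probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(list(zip(thresholds, fpr, tpr)))                  # one (threshold, FPR, TPR) point per row
print("AUC:", roc_auc_score(y_true, y_score))
```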

What is TF/IDF vectorization?

TF-IDF, short for term frequency-inverse document frequency, is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining.

Data Science TF IDF Vectorization

The TF-IDF value increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general.
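A small scikit-learn sketch on a toy corpus; common words that appear in every document receive lower TF-IDF weights:

```python
# Illustrative only: TF-IDF vectorization of a tiny corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)        # sparse matrix: documents x terms

print(vectorizer.get_feature_names_out())
print(tfidf.toarray().round(2))                 # words like 'the' get lower weights
```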

Python or R – Which one would you prefer for text analytics?

We will prefer Python because of the following reasons:
• Python would be the best option because it has the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools.
• R is better suited to statistical modelling and machine learning than to text analysis specifically.
• Python is generally faster for large-scale text analytics.


Differentiate between univariate, bivariate and multivariate analysis. 

Univariate analysis is a descriptive statistical technique involving only one variable at a given point in time. For example, a pie chart of sales by territory involves only one variable, so the analysis can be referred to as univariate analysis.

Bivariate analysis attempts to understand the relationship between two variables at a time, as in a scatterplot. For example, analyzing sales volume together with spending can be considered an example of bivariate analysis.

Multivariate analysis deals with the study of more than two variables to understand the effect of variables on the responses.

Explain Star Schema

It is a traditional database schema with a central table. Satellite tables map IDs to physical names or descriptions and can be connected to the central fact table using the ID fields; these tables are known as lookup tables and are principally useful in real-time applications, as they save a lot of memory. Sometimes star schemas involve several layers of summarization to recover information faster.

What is Cluster Sampling?

Cluster sampling is a technique used when it becomes difficult to study the target population spread across a wide area and simple random sampling cannot be applied. Cluster Sample is a probability sample where each sampling unit is a collection or cluster of elements.

For example, a researcher wants to survey the academic performance of high school students in Japan. He can divide the entire population of Japan into different clusters (cities). Then the researcher selects a number of clusters depending on his research through simple or systematic random sampling.

What is Systematic Sampling? 

Systematic sampling is a statistical technique where elements are selected from an ordered sampling frame at a fixed interval. The list is progressed in a circular manner, so once you reach the end of the list, you continue from the top again. The classic example of systematic sampling is the equal-probability method, e.g. selecting every kth element.

What are Eigenvectors and Eigenvalues? 

Eigenvectors are used for understanding linear transformations. In data analysis, we usually calculate the eigenvectors for a correlation or covariance matrix. Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing or stretching.
Eigenvalue can be referred to as the strength of the transformation in the direction of eigenvector or the factor by which the compression occurs.
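A tiny numpy sketch computing the eigenvalues and eigenvectors of a 2x2 symmetric matrix and checking the defining property A·v = λ·v:

```python
# Illustrative only: eigen-decomposition of a small matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)      # strength of the transformation along each direction
print(eigenvectors)     # columns are the directions (eigenvectors)

# Check A v = lambda v for the first eigenpair
v, lam = eigenvectors[:, 0], eigenvalues[0]
print(np.allclose(A @ v, lam * v))
```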

Give Examples where a false positive is important than a false negative?

Let us first understand what false positives and false negatives are:

  • False Positives are the cases where you wrongly classified a non-event as an event a.k.a Type I error
  • False Negatives are the cases where you wrongly classify events as non-events, a.k.a Type II error.

Example 1: In the medical field, assume you have to give chemotherapy to patients. Assume a patient comes to that hospital and he is tested positive for cancer, based on the lab prediction but he actually doesn’t have cancer. This is a case of false positive. Here it is of utmost danger to start chemotherapy on this patient when he actually does not have cancer. In the absence of cancerous cell, chemotherapy will do certain damage to his normal healthy cells and might lead to severe diseases, even cancer.

Example 2: Let's say an e-commerce company decided to give a $1000 gift voucher to customers whom they expect to purchase at least $10,000 worth of items. They send the voucher directly to 100 customers without any minimum purchase condition because they assume to make at least 20% profit on items sold above $10,000. Now the issue is that if the $1000 gift vouchers go to customers who will not actually purchase anything but are wrongly marked (false positives) as likely to make $10,000 worth of purchases, the company simply loses the value of those vouchers.

Give Examples where a false negative important than a false positive? And vice versa?

Example 1 FN: What if a jury or judge fails to convict a criminal, letting them go free?

Example 2 FN: Fraud detection, where a missed fraudulent transaction is costly.

Example 3 FP: Promo evaluation based on voucher use: if many customers are counted as having used the voucher when they actually did not, the promotion looks more successful than it really was.

Give Examples where both false positive and false negatives are equally important? 

In the Banking industry giving loans is the primary source of making money but at the same time if your repayment rate is not good you will not make any profit, rather you will risk huge losses.
Banks don’t want to lose good customers and at the same point in time, they don’t want to acquire bad customers. In this scenario, both the false positives and false negatives become very important to measure.

What is the Difference between a Validation Set and a Test Set?

A Training Set:
• to fit the parameters i.e. weights

A Validation set:
• part of the training set
• for parameter selection
• to avoid overfitting

A Test set:
• for testing or evaluating the performance of a trained machine learning model, i.e. evaluating its predictive power and generalization.

What is cross-validation?

Reference: k-fold cross validation 

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation. It is mainly used in settings where the objective is prediction, and one wants to estimate how accurately a model will perform in practice.

Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

The general procedure is as follows:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups
3. For each unique group:
a. Take the group as a hold out or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores

Data Science Cross Validation

There is an alternative in Scikit-Learn called stratified k-fold, in which the split is arranged so that each fold contains a representative proportion of each class; plain k-fold gives no such assurance, which can be a problem with a very unbalanced dataset.
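A minimal scikit-learn sketch (toy classification data, hypothetical model choice) comparing plain k-fold with stratified k-fold:

```python
# Illustrative only: 5-fold vs. stratified 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

kfold_scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
strat_scores = cross_val_score(model, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))

print("k-fold mean accuracy:    ", round(kfold_scores.mean(), 3))
print("stratified mean accuracy:", round(strat_scores.mean(), 3))
```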

What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data. You select a model to train and then manually perform feature extraction. Used to devise complex models and algorithms that lend themselves to a prediction which in commercial use is known as predictive analytics.

What is Supervised Learning? 

Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples.

Algorithms: Support Vector Machines, Regression, Naive Bayes, Decision Trees, K-nearest Neighbor Algorithm and Neural Networks

Example: If you built a fruit classifier, the labels will be “this is an orange, this is an apple and this is a banana”, based on showing the classifier examples of apples, oranges and bananas.

What is Unsupervised learning?

Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.

Algorithms: Clustering, Anomaly Detection, Neural Networks and Latent Variable Models

Example: In the same example, a fruit clustering will categorize as “fruits with soft skin and lots of dimples”, “fruits with shiny hard skin” and “elongated yellow fruits”.

What are the various Machine Learning algorithms?

Machine Learning Algorithms

What is “Naive” in a Naive Bayes?

Reference: Naive Bayes Classifier on Wikipedia

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of conditional independence between every pair of features given the value of the class variable. Bayes’ theorem states the following relationship, given class variable y and dependent feature vector X1 through Xn:

Machine Learning Algorithms: Naive Bayes
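The relationship, shown as an image in the original post, can be written out as follows; the proportionality on the right holds under the naive conditional-independence assumption:

```latex
P(y \mid X_1, \ldots, X_n) = \frac{P(y)\, P(X_1, \ldots, X_n \mid y)}{P(X_1, \ldots, X_n)}
\quad\Longrightarrow\quad
P(y \mid X_1, \ldots, X_n) \propto P(y) \prod_{i=1}^{n} P(X_i \mid y)
```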

What is PCA (Principal Component Analysis)? When do you use it?

Reference: PCA on wikipedia

Principal component analysis (PCA) is a statistical method used in Machine Learning. It consists of projecting data from a higher-dimensional space into a lower-dimensional space while maximizing the variance retained in each of the new dimensions.

The process works as follows. We define a matrix A with n rows (the single observations of a dataset; in a tabular format, each single row) and p columns, our features. For this matrix we construct a variable space with as many dimensions as there are features. Each feature represents one coordinate axis. For each feature, the length has been standardized according to a scaling criterion, normally by scaling to unit variance. It is essential to scale the features to a common scale, otherwise the features with a greater magnitude will weigh more in determining the principal components. Once all the observations are plotted and the mean of each variable computed, that mean is represented by a point in the center of our plot (the center of gravity). Then, we subtract the mean from each observation, shifting the coordinate system so that its center is at the origin. The best-fitting line that results is the line that best accounts for the shape of the point swarm: it represents the direction of maximum variance in the data. Each observation may be projected onto this line in order to get a coordinate value along the PC-line. This value is known as a score. The next best-fitting line can be similarly chosen from directions perpendicular to the first.
Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components.

Machine Learning Algorithms PCA

PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.
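A compact scikit-learn sketch of the procedure described above, on synthetic data: standardize the features to unit variance, then project onto the directions of maximum variance:

```python
# Illustrative only: PCA after standardization.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]       # make two features correlated

X_scaled = StandardScaler().fit_transform(X)   # scale each feature to unit variance
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # coordinates along the first two principal components

print(pca.explained_variance_ratio_)           # share of variance captured by each component
print(scores[:3])
```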

SVM (Support Vector Machine)  algorithm

Reference: SVM on wikipedia

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support-vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum-margin classifier, or equivalently, the perceptron of optimal stability. (In the classic illustration, the hyperplane that best divides the data is H3.)

  • SVMs are helpful in text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
  • Some methods for shallow semantic parsing are based on support vector machines.
  • Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
  • Classification of satellite data like SAR data using supervised SVM.
  • Hand-written characters can be recognized using SVM.

What are the support vectors in SVM? 

Machine Learning Algorithms Support Vectors

In the diagram, we see that the sketched lines mark the distance from the classifier (the hyper plane) to the closest data points called the support vectors (darkened data points). The distance between the two thin lines is called the margin.

To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max(0, 1 − y_i(w · x_i − b)). This function is zero if x_i lies on the correct side of the margin; for data on the wrong side of the margin, the function's value is proportional to the distance from the margin.

What are the different kernels in SVM?

There are four types of kernels in SVM.
1. Linear kernel
2. Polynomial kernel
3. Radial basis kernel
4. Sigmoid kernel
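A short scikit-learn sketch (toy data) exercising each of the four kernel options:

```python
# Illustrative only: fitting an SVM with each kernel type.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, round(clf.score(X, y), 3))   # training accuracy, just to exercise each kernel
```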

What are the most known ensemble algorithms? 

Reference: Ensemble Algorithms

The most popular tree-based ensembles are AdaBoost, Random Forest, and eXtreme Gradient Boosting (XGBoost).

AdaBoost is best used in a dataset with low noise, when computational complexity or timeliness of results is not a main concern and when there are not enough resources for broader hyperparameter tuning due to lack of time and knowledge of the user.

Random forests should not be used when dealing with time series data or any other data where look-ahead bias should be avoided, and the order and continuity of the samples need to be ensured. This algorithm can handle noise relatively well, but more knowledge from the user is required to adequately tune the algorithm compared to AdaBoost.

The main advantages of XGBoost are its lightning speed compared to other algorithms, such as AdaBoost, and its regularization parameter, which successfully reduces variance. Beyond the regularization parameter, the algorithm also leverages a learning rate (shrinkage) and subsamples from the features like random forests, which increases its ability to generalize even further. However, XGBoost is more difficult to understand, visualize, and tune compared to AdaBoost and random forests, and there is a multitude of hyperparameters that can be tuned to increase performance.

What is Deep Learning?

Deep Learning is a paradigm of machine learning that has shown incredible promise in recent years, largely because it draws a strong analogy with the functioning of neurons in the human brain.

Deep Learning

What is the difference between machine learning and deep learning?

Deep learning & Machine learning: what’s the difference?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorized in the following four categories.
1. Supervised machine learning,
2. Semi-supervised machine learning,
3. Unsupervised machine learning,
4. Reinforcement learning.

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

Machine Learning vs Deep Learning

• The main difference between deep learning and machine learning is due to the way data is
presented in the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANN (artificial neural networks).

• Machine learning algorithms are designed to “learn” to act by understanding labeled data and then use that learning to produce new results with more datasets. However, when the result is incorrect, there is a need to “teach” them. Because machine learning algorithms require labeled data, they are not well suited to solving complex queries that involve a huge amount of data.

• Deep learning networks do not require human intervention, as multilevel layers in neural
networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes. However, even they can be wrong if the data quality is not good enough.

• Data decides everything. It is the quality of the data that ultimately determines the quality of the result.

• Both of these subsets of AI are somehow connected to data, which makes it possible to represent a certain form of “intelligence.” However, you should be aware that deep learning requires much more data than a traditional machine learning algorithm. The reason for this is that deep learning networks can identify different elements in neural network layers only when more than a million data points interact. Machine learning algorithms, on the other hand, are capable of learning by pre-programmed criteria.

What is the reason for the popularity of Deep Learning in recent times? 

Now although Deep Learning has been around for many years, the major breakthroughs from these techniques came just in recent years. This is because of two main reasons:
• The increase in the amount of data generated through various sources
• The growth in hardware resources required to run these models
GPUs are multiple times faster and help us build bigger and deeper deep learning models in comparatively less time than was required previously.

What is reinforcement learning?

Reinforcement Learning is about taking actions to maximize cumulative reward. The agent learns by trial and error through a reward/penalty system: the environment rewards the agent, so over time the agent makes better decisions. Example: robot = agent, maze = environment. It is used for complex tasks such as self-driving cars and game AI.

RL is a series of time steps in a Markov Decision Process:

1. Environment: the space in which the RL agent operates
2. State: data describing the agent's current situation, resulting from its past actions
3. Action: the action taken by the agent
4. Reward: the numeric feedback the agent receives after its last action
5. Observation: data about the environment, which can be fully visible or partially hidden

What are Artificial Neural Networks?

Artificial Neural networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural Networks can adapt to changing the input, so the network generates the best possible result without needing to redesign the output criteria.

Artificial Neural Networks works on the same principle as a biological Neural Network. It consists of inputs which get processed with weighted sums and Bias, with the help of Activation Functions.

Machine Learning Artificial Neural Network

How Are Weights Initialized in a Network?

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.

What Is the Cost Function? 

Also referred to as “loss” or “error,” cost function is a measure to evaluate how good your model’s performance is. It’s used to compute the error of the output layer during backpropagation. We push that error backwards through the neural network and use that during the different training functions.
The best-known cost function is the mean squared error.

Machine Learning Cost Function
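A tiny numpy sketch of the mean squared error cost on made-up predictions:

```python
# Illustrative only: mean squared error between targets and predictions.
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)   # mean of the squared errors
print(mse)
```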

What Are Hyperparameters?

With neural networks, you’re usually working with hyperparameters once the data is formatted correctly.
A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, batches, etc.).

What Will Happen If the Learning Rate is Set inaccurately (Too Low or Too High)? 

When the learning rate is too low, training of the model progresses very slowly, as we are making only minimal updates to the weights; it will take many updates to reach the minimum point.
If the learning rate is set too high, the drastic weight updates cause undesirable divergent behavior in the loss function: the model may fail to converge on a good solution, or even diverge because the updates are too chaotic for the network to train.

What Is The Difference Between Epoch, Batch, and Iteration in Deep Learning? 

Epoch – Represents one iteration over the entire dataset (everything put into the training model).
Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
Iteration – if we have 10,000 images as data and a batch size of 200, then an epoch runs 50 iterations (10,000 divided by 200).

What Are the Different Layers on CNN?

Reference: Layers of CNN 

Machine Learning Layers of CNN

Convolutional neural networks are regularized versions of multilayer perceptrons (MLPs). They were developed based on the working of the neurons of the animal visual cortex.

The objective of using the CNN:

The idea is that you give the computer this array of numbers and it will output numbers that describe the probability of the image being a certain class (.80 for a cat, .15 for a dog, .05 for a bird, etc.). It works similar to how our brain works. When we look at a picture of a dog, we can classify it as such if the picture has identifiable features such as paws or 4 legs. In a similar way, the computer is able to perform image classification by looking for low-level features such as edges and curves and then building up to more abstract concepts through a series of convolutional layers. The computer uses low-level features obtained at the initial levels to generate high-level features such as paws or eyes to identify the object.

There are four layers in CNN:
1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
2. Activation Layer (ReLU Layer) – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map. It follows each convolutional layer.
3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map. The stride controls how far the pooling window slides at each step, and max pooling takes the maximum of each n x n window.
4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.
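
A minimal sketch of these four layer types, assuming TensorFlow/Keras is available (the filter counts and input shape are arbitrary choices):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # convolution + ReLU
    tf.keras.layers.MaxPooling2D((2, 2)),                                            # pooling (down-sampling)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),                                 # fully connected classifier
])
model.summary()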

Q60: What Is Pooling on CNN, and How Does It Work?

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.

What are Recurrent Neural Networks (RNNs)? 

Reference: RNNs

RNNs are a type of artificial neural network designed to recognize patterns in sequential data such as time series, text, speech, and stock market data.

Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes in a fixed size vector as input which limits its usage in situations that involve a ‘series’ type input with no predetermined size.

Machine Learning RNN

RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask, what’s the big deal? I can call a regular NN repeatedly too.

Machine Learning Regular NN

Sure can, but the ‘series’ part of the input means something. A single input item from the series is related to others and likely has an influence on its neighbors. Otherwise it’s just “many” inputs, not a “series” input (duh!).
Recurrent Neural Network remembers the past and its decisions are influenced by what it has learnt from the past. Note: Basic feed forward networks “remember” things too, but they remember things they learnt during training. For example, an image classifier learns what a “1” looks like during training and then uses that knowledge to classify things in production.
While RNNs learn similarly while training, in addition, they remember things learnt from prior input(s) while generating output(s). RNNs can take one or more input vectors and produce one or more output vectors and the output(s) are influenced not just by weights applied on inputs like a regular NN, but also by a “hidden” state vector representing the context based on prior input(s)/output(s). So, the same input could produce a different output depending on previous inputs in the series.

Machine Learning Vanilla NN

In summary, in a vanilla neural network, a fixed size input vector is transformed into a fixed size output vector. Such a network becomes “recurrent” when you repeatedly apply the transformations to a series of given input and produce a series of output vectors. There is no pre-set limitation to the size of the vector. And, in addition to generating the output which is a function of the input and hidden state, we update the hidden state itself based on the input and use it in processing the next input.
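
A bare-bones sketch of a single recurrent step in NumPy (the sizes and values are arbitrary) shows how the hidden state carries context from one input to the next:

import numpy as np

hidden_size, input_size = 4, 3
Wxh = np.random.randn(hidden_size, input_size) * 0.01    # input-to-hidden weights
Whh = np.random.randn(hidden_size, hidden_size) * 0.01   # hidden-to-hidden weights
h = np.zeros(hidden_size)                                # hidden state ("memory")

sequence = [np.random.randn(input_size) for _ in range(5)]   # a toy series of 5 input vectors
for x in sequence:
    h = np.tanh(Wxh @ x + Whh @ h)   # the new state depends on the current input and the previous state
print(h)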

What is the role of the Activation Function?

The activation function introduces non-linearity into the neural network, helping it learn more complex functions. Without it, the network would only be able to learn a linear function, i.e., a linear combination of its input data. An activation function is the function in an artificial neuron that produces an output from its inputs.
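
For illustration, a few common activation functions written out in NumPy:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes values into (0, 1)

def relu(x):
    return np.maximum(0, x)           # zeroes out negative values

def tanh(x):
    return np.tanh(x)                 # squashes values into (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), relu(x), tanh(x))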

Machine Learning libraries for various purposes

Machine Learning Libraries

What is an Auto-Encoder?

Reference: Auto-Encoder

Auto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error, meaning we want the output to be as close to the input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabeled input, which is encoded and then decoded to reconstruct the input.

An autoencoder is a type of artificial neural network used to learn efficient data coding in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants exist to the basic model, with the aim of forcing the learned representations of the input to assume useful properties.
Autoencoders are effectively used for solving many applied problems, from face recognition to acquiring the semantic meaning of words.

Machine Learning Auto_Encoder
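
A minimal sketch of such a network, assuming TensorFlow/Keras (the 784-dimensional input and 32-dimensional bottleneck are arbitrary choices):

import tensorflow as tf

inputs  = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # bottleneck smaller than the input
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)   # reconstruct the input

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")   # trained to reproduce its own (unlabeled) input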

What is a Boltzmann Machine?

Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimize the weights and the quantities for a given problem. The learning algorithm is very slow in networks with many layers of feature detectors. The “Restricted Boltzmann Machine” algorithm has a single layer of feature detectors, which makes it faster than the rest.

Machine Learning Boltzmann Machine

What Is Dropout and Batch Normalization?

Dropout is a technique of randomly dropping hidden and visible nodes of a network to prevent overfitting (typically around 20 per cent of the nodes are dropped). It roughly doubles the number of iterations needed for the network to converge, but it improves the model’s ability to generalize.

Batch normalization is a technique to improve the performance and stability of neural networks by normalizing the inputs of every layer so that they have a mean activation of zero and a standard deviation of one.
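
As a sketch, both techniques are available as layers in Keras (the layer sizes below are arbitrary):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.BatchNormalization(),   # normalize the activations of the previous layer
    tf.keras.layers.Dropout(0.2),           # randomly drop 20% of the nodes during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])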

Why Is TensorFlow the Most Preferred Library in Deep Learning?

TensorFlow provides both C++ and Python APIs, making it easier to work with, and it has faster compilation times than some other deep learning libraries such as Keras and PyTorch. TensorFlow supports both CPU and GPU computing devices.

What is Tensor in TensorFlow?

A tensor is a mathematical object represented as an array of arbitrary dimensions; think of an n-dimensional matrix. These arrays of data, with different dimensions and ranks, fed as input to the neural network are called “tensors.”

What is the Computational Graph?

Everything in TensorFlow is based on creating a computational graph: a network of nodes in which each node performs an operation. Nodes represent mathematical operations, and edges represent the tensors that flow between them. Since data flows through the graph, it is also called a “dataflow graph.”
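
A small sketch of building and running such a graph with TensorFlow 2, where tf.function traces the Python code into a dataflow graph (the values are arbitrary):

import tensorflow as tf

@tf.function            # traces the computation into a graph of ops (nodes) and tensors (edges)
def f(a, b):
    return tf.add(tf.matmul(a, b), 1.0)

a = tf.constant([[1.0, 2.0]])      # 1x2 tensor
b = tf.constant([[3.0], [4.0]])    # 2x1 tensor
print(f(a, b))                     # prints a tf.Tensor containing [[12.]]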

What is logistic regression?

• Logistic Regression models a function of the target variable as a linear combination of the predictors, then converts this function into a fitted value in the desired range.

• Binary or Binomial Logistic Regression can be understood as the type of Logistic Regression that deals with scenarios wherein the observed outcomes for dependent variables can be only in binary, i.e., it can have only two possible types.

• Multinomial Logistic Regression works in scenarios where the outcome can have more than two possible types – type A vs type B vs type C – that are not in any particular order.


How is logistic regression done? 

Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
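
As a sketch, the sigmoid and a fitted model using scikit-learn (the toy data below is made up):

import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # maps a linear combination of features to a probability in (0, 1)

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])   # one toy feature
y = np.array([0, 0, 0, 1, 1, 1])                           # binary label

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[2.5]]))   # estimated probabilities for each class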

Explain the steps in making a decision tree. 

1. Take the entire data set as input
2. Calculate entropy of the target variable, as well as the predictor attributes
3. Calculate your information gain of all attributes (we gain information on sorting different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is finalized
For example, let’s say you want to build a decision tree to decide whether you should accept or decline a job offer. The decision tree for this case is as shown:

Machine Learning Decision Tree

It is clear from the decision tree that an offer is accepted if:
• Salary is greater than $50,000
• The commute is less than an hour
• Coffee is offered
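
A hedged sketch of steps 2 and 3 above (computing entropy and information gain) in NumPy, using a made-up binary split:

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

parent = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # toy target values
left   = np.array([1, 1, 1, 0])               # one branch of a candidate split
right  = np.array([0, 0, 0, 0])               # the other branch

weighted_child_entropy = (len(left) * entropy(left) + len(right) * entropy(right)) / len(parent)
information_gain = entropy(parent) - weighted_child_entropy
print(information_gain)   # the attribute with the highest gain becomes the split node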

How do you build a random forest model?

A random forest is built up of a number of decision trees. If you split the data into different packages and make a decision tree in each of the different groups of data, the random forest brings all those trees together.

Steps to build a random forest model:

1. Randomly select k features from the total of m features, where k << m
2. Among the k features, calculate the node d using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build the forest by repeating steps one to four n times to create n trees
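
In practice this is usually delegated to a library; a minimal sketch with scikit-learn on synthetic data (the parameters below are illustrative, not prescriptive):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)   # synthetic data for illustration
forest = RandomForestClassifier(
    n_estimators=100,      # n: number of trees built from bootstrap samples
    max_features="sqrt",   # k features considered at each split, with k << m
    random_state=0,
).fit(X, y)
print(forest.predict(X[:3]))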

Differentiate between univariate, bivariate, and multivariate analysis. 

Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.

Machine Learning Univariate Data

The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.

Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.

Bivariate data

Here, the relationship is visible from the table that temperature and sales are directly proportional to each other. The hotter the temperature, the better the sales.

Data involving three or more variables is categorized as multivariate. The analysis is similar to bivariate analysis but can involve more than one dependent variable.

Example: data for house price prediction
The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion or range, minimum, maximum, etc. You can start describing the data and using it to guess what the price of the house will be.

What are the feature selection methods used to select the right variables?

There are two main methods for feature selection.
Filter Methods
This involves:
• Linear discrimination analysis
• ANOVA
• Chi-Square
The best analogy for selecting features is “bad data in, bad answer out.” When we’re limiting or selecting the features, it’s all about cleaning up the data coming in.

Wrapper Methods
This involves:
• Forward Selection: We test one feature at a time and keep adding them until we get a good fit
• Backward Selection: We test all the features and start removing them to see what works better
• Recursive Feature Elimination: Recursively looks through all the different features and how they pair together

Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.
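
As a sketch, one filter method (chi-square) and one wrapper method (recursive feature elimination) with scikit-learn on synthetic data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=8, random_state=0)
X = np.abs(X)                                                # chi2 requires non-negative features

filtered = SelectKBest(chi2, k=3).fit_transform(X, y)        # filter method
wrapper  = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3).fit(X, y)   # wrapper method
print(filtered.shape, wrapper.support_)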

You are given a data set consisting of variables with more than 30 percent missing values. How will you deal with them? 

If the data set is large, we can simply remove the rows with missing values. It is the quickest way, and we then use the remaining data to build the model.

For smaller data sets, we can impute missing values with the mean or median of the rest of the data using a pandas DataFrame in Python. There are different ways to do so, such as df.mean() and df.fillna(df.mean()).

Another option is imputation with KNN for numeric or classification values (KNN uses the k closest observations to impute the missing value).
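
A short sketch of both approaches, assuming pandas and scikit-learn are available (the small DataFrame is made up):

import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({"age": [25, np.nan, 40, 35], "income": [50, 60, np.nan, 55]})

mean_filled = df.fillna(df.mean())                         # simple mean imputation per column
knn_filled  = KNNImputer(n_neighbors=2).fit_transform(df)  # impute from the k closest rows
print(mean_filled)
print(knn_filled)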

How will you calculate the Euclidean distance in Python?

plot1 = [1, 3]

plot2 = [2, 5]

The Euclidean distance can be calculated as follows (sqrt comes from Python’s math module):

from math import sqrt
euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)   # about 2.236

What are dimensionality reduction and its benefits? 

Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.

This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there’s no point in storing a value in two different units (meters and inches).
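
Principal component analysis (PCA) is one common way to do this; a minimal sketch with scikit-learn on synthetic data:

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, _ = make_classification(n_samples=200, n_features=30, random_state=0)   # 30-dimensional synthetic data
X_reduced = PCA(n_components=5).fit_transform(X)                           # keep 5 components
print(X.shape, "->", X_reduced.shape)   # (200, 30) -> (200, 5)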

How should you maintain a deployed model?

The steps to maintain a deployed model are (CREM):

1. Monitor: constant monitoring of all models is needed to determine their performance accuracy.
When you change something, you want to figure out how your changes are going to affect things.
This needs to be monitored to ensure it’s doing what it’s supposed to do.
2. Evaluate: evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
3. Compare: the new models are compared to each other to determine which model performs the best.
4. Rebuild: the best performing model is re-built on the current state of data.

How can time-series data be declared stationary?

  1. The mean of the series should not be a function of time.
Machine Learning Stationary Time Series Data: Mean
  2. The variance of the series should not be a function of time. This property is known as homoscedasticity.
Machine Learning Stationary Time Series Data: Variance
  3. The covariance of the i-th term and the (i+m)-th term should not be a function of time.
Machine Learning Stationary Time Series Data: Covariance
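
In practice these properties are often checked with a statistical test such as the augmented Dickey-Fuller test; a short sketch assuming statsmodels is installed (the random-walk series is made up and deliberately non-stationary):

import numpy as np
from statsmodels.tsa.stattools import adfuller

np.random.seed(0)
series = np.cumsum(np.random.randn(200))   # a random walk, which is non-stationary

adf_stat, p_value = adfuller(series)[:2]
print(p_value)   # a large p-value suggests we cannot reject non-stationarity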

‘People who bought this also bought…’ recommendations seen on Amazon are a result of which algorithm?

The recommendation engine is accomplished with collaborative filtering. Collaborative filtering draws on the behavior of other users and their purchase history in terms of ratings, selections, etc.
The engine makes predictions about what might interest a person based on the preferences of similar users. In this algorithm, the item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.

What is a Generative Adversarial Network?

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine. The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.
The forger’s goal is to create wines that are indistinguishable from the authentic ones, while the shop owner intends to tell accurately whether the wine is real or fake.

Machine Learning GAN illustration

• There is a noise vector coming into the forger who is generating fake wine.
• Here the forger acts as a Generator.
• The shop owner acts as a Discriminator.
• The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine.
The shop owner has to figure out whether it is real or fake.

So, there are two primary components of Generative Adversarial Network (GAN) named:
1. Generator
2. Discriminator

The generator is typically a CNN that keeps producing images that get progressively closer in appearance to the real images, while the discriminator tries to tell real and fake images apart. The ultimate aim is for the generator to produce images so realistic that the discriminator can no longer reliably distinguish them from real ones.

You are given a dataset on cancer detection. You have built a classification model and achieved an accuracy of 96 percent. Why shouldn’t you be happy with your model performance? What can you do about it?

Cancer detection results in imbalanced data. On an imbalanced dataset, accuracy should not be used as the measure of performance. It is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient’s prognosis.

Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), and the F-measure to determine the class-wise performance of the classifier.
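
A short sketch of computing sensitivity and specificity from a confusion matrix with scikit-learn (the label arrays below are made up):

from sklearn.metrics import confusion_matrix, f1_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = cancer, 0 = healthy (toy labels)
y_pred = [1, 0, 0, 0, 0, 0, 0, 1, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(sensitivity, specificity, f1_score(y_true, y_pred))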

We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?

The most appropriate algorithm for this case is logistic regression.

After studying the behavior of a population, you have identified four specific individual types that are valuable to your study. You would like to find all users who are most similar to each individual type. Which algorithm is most appropriate for this study? 

As we are looking for grouping people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering is the most appropriate algorithm for this study.

You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true? 

{grape, apple} must be a frequent itemset.

Your organization has a website where visitors randomly receive one of two coupons. It is also possible that visitors to the website will not receive a coupon. You have been asked to determine if offering a coupon to website visitors has any impact on their purchase decisions. Which analysis method should you use?

One-way ANOVA: in statistics, one-way analysis of variance is a technique that can be used to compare the means of two or more samples. This technique can be used only for numerical response data, the “Y”, usually one variable, and numerical or categorical input data, the “X”, always one variable, hence “one-way”.
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. The ANOVA produces an F-statistic: the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
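
As a sketch, a one-way ANOVA on made-up purchase amounts for the three visitor groups, using SciPy:

from scipy.stats import f_oneway

coupon_a  = [22, 25, 27, 30, 24]   # hypothetical purchase amounts
coupon_b  = [28, 26, 31, 29, 27]
no_coupon = [20, 19, 23, 21, 22]

f_stat, p_value = f_oneway(coupon_a, coupon_b, no_coupon)
print(f_stat, p_value)   # a small p-value suggests the group means differ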

What are the feature vectors?

A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that’s easy to analyze.

What is root cause analysis?

Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique used for isolating the root causes of faults or problems. A factor is called a root cause if removing it from the problem-fault sequence prevents the final undesirable event from recurring.

Do gradient descent methods always converge to similar points?

They do not, because in some cases, they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions.

 In your choice of language, write a program that prints the numbers ranging from one to 50. But for multiples of three, print “Fizz” instead of the number and for the multiples of five, print “Buzz.” For numbers which are multiples of both three and five, print “FizzBuzz.”

Python FizzBuzz algorithm
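
A straightforward Python version of this classic exercise:

for i in range(1, 51):
    if i % 15 == 0:
        print("FizzBuzz")   # multiple of both three and five
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)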

What are the different Deep Learning Frameworks?

PyTorch: PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.
TensorFlow: TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. Licensed under the Apache License 2.0. Developed by the Google Brain team.
Microsoft Cognitive Toolkit: Microsoft Cognitive Toolkit describes neural networks as a series of computational steps via a directed graph.
Keras: Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. Licensed under the MIT license.

Data Sciences and Data Mining Glossary

Credit: Dr. Matthew North
Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Archived Data: Data which have been copied out of a live production database and into a data warehouse or other permanent system where they can be accessed and analyzed, but not by primary operational business systems.

Association Rules: A data mining methodology which compares attributes in a data set across all observations to identify areas where two or more attributes are frequently found together. If their frequency of coexistence is high enough throughout the data set, the association of those attributes can be said to be a rule.

Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values.

Binomial: A data type for any set of values that is limited to one of two numeric options.

Binominal: In RapidMiner, the data type binominal is used instead of binomial, enabling both numerical and character-based sets of values that are limited to one of two options.

Business Understanding: See Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Case Sensitive: A situation where a computer program recognizes the uppercase version of a letter or word as being different from the lowercase version of the same letter or word.

Classification: One of the two main goals of conducting data mining activities, with the other being prediction. Classification creates groupings in a data set based on the similarity of the observations’ attributes. Some data mining methodologies, such as decision trees, can predict an observation’s classification.

Code: Code is the product of a programmer’s work: a set of instructions, typed in a specific grammar and syntax, that a computer can understand and execute. According to Lawrence Lessig, it is one of four methods humans can use to set and control boundaries for behavior when interacting with computer systems.

Coefficient: In data mining, a coefficient is a value that is calculated based on the values in a data set that can be used as a multiplier or as an indicator of the relative strength of some attribute or component in a data mining model.

Column: See Attribute. In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Comma Separated Values (CSV): A common text-based format for data sets where the divisions between attributes (columns of data) are indicated by commas. If commas occur naturally in some of the values in the data set, a CSV file will misunderstand these to be attribute separators, leading to misalignment of attributes.

Conclusion: See Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Confidence (Alpha) Level: A value, usually 5% or 0.05, used to test for statistical significance in some data mining methods. If statistical significance is found, a data miner can say that there is a 95% likelihood that a calculated or predicted value is not a false positive.

Confidence Percent: In predictive data mining, this is the percent of calculated confidence that the model has calculated for one or more possible predicted values. It is a measure for the likelihood of false positives in predictions. Regardless of the number of possible predicted values, their collective confidence percentages will always total to 100%.

Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Correlation: A statistical measure of the strength of affinity, based on the similarity of observational values, of the attributes in a data set. These can be positive (as one attribute’s values go up or down, so too does the correlated attribute’s values); or negative (correlated attributes’ values move in opposite directions). Correlations are indicated by coefficients which fall on a scale between -1 (complete negative correlation) and 1 (complete positive correlation), with 0 indicating no correlation at all between two attributes.

CRISP-DM: An acronym for Cross-Industry Standard Process for Data Mining. This process was jointly developed by several major multi-national corporations around the turn of the new millennium in order to standardize the approach to mining data. It is comprised of six cyclical steps: Business (Organizational) Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, Deployment.

Cross-validation: A method of statistically evaluating a training data set for its likelihood of producing false positives in a predictive data mining model.

Data: Data are any arrangement and compilation of facts. Data may be structured (e.g. arranged in columns (attributes) and rows (observations)), or unstructured (e.g. paragraphs of text, computer log file).

Data Analysis: The process of examining data in a repeatable and structured way in order to extract meaning, patterns or messages from a set of data.

Data Mart: A location where data are stored for easy access by a broad range of people in an organization. Data in a data mart are generally archived data, enabling analysis in a setting that does not impact live operations.

Data Mining: A computational process of analyzing data sets, usually large in nature, using both statistical and logical methods, in order to uncover hidden, previously unknown, and interesting patterns that can inform organizational decision making.

Data Preparation: The third in the six steps of CRISP-DM. At this stage, the data miner ensures that the data to be mined are clean and ready for mining. This may include handling outliers or other inconsistent data, dealing with missing values, reducing attributes or observations, setting attribute roles for modeling, etc.

Data Set: Any compilation of data that is suitable for analysis.

Data Type: In a data set, each attribute is assigned a data type based on the kind of data stored in the attribute. There are many data types which can be generalized into one of three areas: Character (Text) based; Numeric; and Date/Time. Within these categories, RapidMiner has several data types. For example, in the Character area, RapidMiner has Polynominal, Binominal, etc.; and in the Numeric area it has Real, Integer, etc.

Data Understanding: The second in the six steps of CRISP-DM. At this stage, the data miner seeks out sources of data in the organization, and works to collect, compile, standardize, define and document the data. The data miner develops a comprehension of where the data have come from, how they were collected and what they mean.

Data Warehouse: A large-scale repository for archived data which are available for analysis. Data in a data warehouse are often stored in multiple formats (e.g. by week, month, quarter and year), facilitating large scale analyses at higher speeds. The data warehouse is populated by extracting data from operational systems so that analyses do not interfere with live business operations.

Database: A structured organization of facts that is organized such that the facts can be reliably and repeatedly accessed. The most common type of database is a relational database, in which facts (data) are arranged in tables of columns and rows. The data are then accessed using a query language, usually SQL (Structured Query Language), in order to extract meaning from the tables.

Decision Tree: A data mining methodology where leaves and nodes are generated to construct a predictive tree, whereby a data miner can see the attributes which are most predictive of each possible outcome in a target (label) attribute.

Denormalization: The process of removing relational organization from data, reintroducing redundancy into the data, but simultaneously eliminating the need for joins in a relational database, enabling faster querying.

Dependent Variable (Attribute): The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Deployment: The sixth and final of the six steps of CRISP-DM. At this stage, the data miner takes the results of data mining activities and puts them into practice in the organization. The data miner watches closely and collects data to determine if the deployment is successful and ethical. Deployment can happen in stages, such as through pilot programs before a full-scale roll out.

Descartes’ Rule of Change: An ethical framework set forth by Rene Descartes which states that if an action cannot be taken repeatedly, it cannot be ethically taken even once.

Design Perspective: The view in RapidMiner where a data miner adds operators to a data mining stream, sets those operators’ parameters, and runs the model.

Discriminant Analysis: A predictive data mining model which attempts to compare the values of all observations across all attributes and identify where natural breaks occur from one category to another, and then predict which category each observation in the data set will fall into.

Ethics: A set of moral codes or guidelines that an individual develops to guide his or her decision making in order to make fair and respectful decisions and engage in right actions. Ethical standards are higher than legally required minimums.

Evaluation: The fifth of the six steps of CRISP-DM. At this stage, the data miner reviews the results of the data mining model, interprets results and determines how useful they are. He or she may also conduct an investigation into false positives or other potentially misleading results.

False Positive: A predicted value that ends up not being correct.

Field: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Frequency Pattern: A recurrence of the same, or similar, observations numerous times in a single data set.

Fuzzy Logic: A data mining concept often associated with neural networks where predictions are made using a training data set, even though some uncertainty exists regarding the data and a model’s predictions.

Gain Ratio: One of several algorithms used to construct decision tree models.

Gini Index: An algorithm created by Corrado Gini that can be used to generate decision tree models.

Heterogeneity: In statistical analysis, this is the amount of variety found in the values of an attribute.

Inconsistent Data: These are values in an attribute in a data set that are out-of-the-ordinary among the whole set of values in that attribute. They can be statistical outliers, or other values that simply don’t make sense in the context of the ‘normal’ range of values for the attribute. They are generally replaced or removed during the Data Preparation phase of CRISP-DM.

Independent Variable (Attribute): These are attributes that act on the dependent attribute (the target, or label). They are used to help predict the label in a predictive model.

Jittering: The process of adding a small, random decimal to discrete values in a data set so that when they are plotted in a scatter plot, they are slightly apart from one another, enabling the analyst to better see clustering and density.

Join: The process of connecting two or more tables in a relational database together so that their attributes can be accessed in a single query, such as in a view.

Kant’s Categorical Imperative: An ethical framework proposed by Immanuel Kant which states that if everyone cannot ethically take some action, then no one can ethically take that action.

k-Means Clustering: A data mining methodology that uses the mean (average) values of the attributes in a data set to group each observation into a cluster of other observations whose values are most similar to the mean for that cluster.

Label: In RapidMiner, this is the role that must be set in order to use an attribute as the dependent, or target, attribute in a predictive model.

Laws: These are regulatory statutes which have associated consequences that are established and enforced by a governmental agency. According to Lawrence Lessig, these are one of the four methods for establishing boundaries to define and regulate social behavior.

Leaf: In a decision tree data mining model, this is the terminal end point of a branch, indicating the predicted outcome for observations whose values follow that branch of the tree.

Linear Regression: A predictive data mining method which uses the algebraic formula for calculating the slope of a line in order to predict where a given observation will likely fall along that line.

Logistic Regression: A predictive data mining method which uses a quadratic formula to predict one of a set of possible outcomes, along with a probability that the prediction will be the actual outcome.

Markets: A socio-economic construct in which peoples’ buying, selling, and exchanging behaviors define the boundaries of acceptable or unacceptable behavior. Lawrence Lessig offers this as one of four methods for defining the parameters of appropriate behavior.

Mean: See Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values. 

Median: With the Mean and Mode, this is one of three generally used Measures of Central Tendency. It is an arithmetic way of defining what ‘normal’ looks like in a numeric attribute. It is calculated by rank ordering the values in an attribute and finding the one in the middle. If there are an even number of observations, the two in the middle are averaged to find the median.

Meta Data: These are facts that describe the observational values in an attribute. Meta data may include who collected the data, when, why, where, how, how often; and usually include some descriptive statistics such as the range, average, standard deviation, etc.

Missing Data: These are instances in an observation where one or more attributes does not have a value. It is not the same as zero, because zero is a value. Missing data are like Null values in a database, they are either unknown or undefined. These are usually replaced or removed during the Data Preparation phase of CRISP-DM.

Mode: With Mean and Median, this is one of three common Measures of Central Tendency. It is the value in an attribute which is the most common. It can be numerical or text. If an attribute contains two or more values that appear an equal number of times and more than any other values, then all are listed as the mode, and the attribute is said to be Bimodal or Multimodal.

Model: A computer-based representation of real-life events or activities, constructed upon the basis of data which represent those events.

Name (Attribute): This is the text descriptor of each attribute in a data set. In RapidMiner, the first row of an imported data set should be designated as the attribute name, so that these are not interpreted as the first observation in the data set.

Neural Network: A predictive data mining methodology which tries to mimic human brain processes by comparing the values of all attributes in a data set to one another through the use of a hidden layer of nodes. The frequencies with which the attribute values match, or are strongly similar, create neurons which become stronger at higher frequencies of similarity.

n-Gram: In text mining, this is a combination of words or word stems that represents a phrase that may have more meaning or significance than would the single word or stem.

Node: A terminal or mid-point in decision trees and neural networks where an attribute branches or forks away from other terminal or branches because the values represented at that point have become significantly different from all other values for that attribute.

Normalization: In a relational database, this is the process of breaking data out into multiple related tables in order to reduce redundancy and eliminate multivalued dependencies.

Null: The absence of a value in a database. The value is unrecorded, unknown, or undefined. See Missing Values.

Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Online Analytical Processing (OLAP): A database concept where data are collected and organized in a way that facilitates analysis, rather than practical, daily operational work. Evaluating data in a data warehouse is an example of OLAP. The underlying structure that collects and holds the data makes analysis faster, but would slow down transactional work.

Online Transaction Processing (OLTP): A database concept where data are collected and organized in a way that facilitates fast and repeated transactions, rather than broader analytical work. Scanning items being purchased at a cash register is an example of OLTP. The underlying structure that collects and holds the data makes transactions faster, but would slow down analysis.

Operational Data: Data which are generated as a result of day-to-day work (e.g. the entry of work orders for an electrical service company).

Operator: In RapidMiner, an operator is any one of more than 100 tools that can be added to a data mining stream in order to perform some function. Functions range from adding a data set, to setting an attribute’s role, to applying a modeling algorithm. Operators are connected into a stream by way of ports connected by splines.

Organizational Data: These are data which are collected by an organization, often in aggregate or summary format, in order to address a specific question, tell a story, or answer a specific question. They may be constructed from Operational Data, or added to through other means such as surveys, questionnaires or tests.

Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Parameters: In RapidMiner, these are the settings that control values and thresholds that an operator will use to perform its job. These may be the attribute name and role in a Set Role operator, or the algorithm the data miner desires to use in a model operator.

Port: The input or output required for an operator to perform its function in RapidMiner. These are connected to one another using splines.

Prediction: The target, or label, or dependent attribute that is generated by a predictive model, usually for a scoring data set in a model.

Premise: See Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Privacy: The concept describing a person’s right to be let alone; to have information about them kept away from those who should not, or do not need to, see it. A data miner must always respect and safeguard the privacy of individuals represented in the data he or she mines.

Professional Code of Conduct: A helpful guide or documented set of parameters by which an individual in a given profession agrees to abide. These are usually written by a board or panel of experts and adopted formally by a professional organization.

Query: A method of structuring a question, usually using code, that can be submitted to, interpreted, and answered by a computer.

Record: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Relational Database: A computerized repository, comprised of entities that relate to one another through keys. The most basic and elemental entity in a relational database is the table, and tables are made up of attributes. One or more of these attributes serves as a key that can be matched (or related) to a corresponding attribute in another table, creating the relational effect which reduces data redundancy and eliminates multivalued dependencies.

Repository: In RapidMiner, this is the place where imported data sets are stored so that they are accessible for modeling.

Results Perspective: The view in RapidMiner that is seen when a model has been run. It is usually comprised of two or more tabs which show meta data, data in a spreadsheet-like view, and predictions and model outcomes (including graphical representations where applicable).

Role (Attribute): In a data mining model, each attribute must be assigned a role. The role is the part the attribute plays in the model. It is usually equated to serving as an independent variable (regular), or dependent variable (label).

Row: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Sample: A subset of an entire data set, selected randomly or in a structured way. This usually reduces a data set down, allowing models to be run faster, especially during development and proof-of-concept work on a model.

Scoring Data: A data set with the same attributes as a training data set in a predictive model, with the exception of the label. The training data set, with the label defined, is used to create a predictive model, and that model is then applied to a scoring data set possessing the same attributes in order to predict the label for each scoring observation.

Social Norms: These are the sets of behaviors and actions that are generally tolerated and found to be acceptable in a society. According to Lawrence Lessig, these are one of four methods of defining and regulating appropriate behavior.

Spline: In RapidMiner, these lines connect the ports between operators, creating the stream of a data mining model.

Standard Deviation: One of the most common statistical measures of how dispersed the values in an attribute are. This measure can help determine whether or not there are outliers (a common type of inconsistent data) in a data set.

Standard Operating Procedures: These are organizational guidelines that are documented and shared with employees which help to define the boundaries for appropriate and acceptable behavior in the business setting. They are usually created and formally adopted by a group of leaders in the organization, with input from key stakeholders in the organization.

Statistical Significance: In statistically-based data mining activities, this is the measure of whether or not the model has yielded any results that are mathematically reliable enough to be used. Any model lacking statistical significance should not be used in operational decision making.

Stemming: In text mining, this is the process of reducing like-terms down into a single, common token (e.g. country, countries, country’s, countryman, etc. → countr).

Stopwords: In text mining, these are small words that are necessary for grammatical correctness, but which carry little meaning or power in the message of the text being mined. These are often articles, prepositions or conjunctions, such as ‘a’, ‘the’, ‘and’, etc., and are usually removed in the Process Document operator’s sub-process.

Stream: This is the string of operators in a data mining model, connected through the operators’ ports via splines, that represents all actions that will be taken on a data set in order to mine it.

Structured Query Language (SQL): The set of codes, reserved keywords and syntax defined by the American National Standards Institute used to create, manage and use relational databases.

Sub-process: In RapidMiner, this is a stream of operators set up to apply a series of actions to all inputs connected to the parent operator.

Support Percent: In an association rule data mining model, this is the percent of the time that the antecedent and the consequent are found together in the data set. Since this is calculated as the number of times the two are found together divided by the total number of times they could have been found together, the Support Percent is the same for reciprocal rules.

Table: In data collection, a table is a grid of columns and rows, where in general, the columns are individual attributes in the data set, and the rows are observations across those attributes. Tables are the most elemental entity in relational databases.

Target Attribute: See Label; Dependent Variable: The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Technology: Any tool or process invented by mankind to do or improve work.

Text Mining: The process of data mining unstructured text-based data such as essays, news articles, speech transcripts, etc. to discover patterns of word or phrase usage to reveal deeper or previously unrecognized meaning.

Token (Tokenize): In text mining, this is the process of turning words in the input document(s) into attributes that can be mined.

Training Data: In a predictive model, this data set already has the label, or dependent variable defined, so that it can be used to create a model which can be applied to a scoring data set in order to generate predictions for the latter.

Tuple: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Variable: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

View: A type of pseudo-table in a relational database which is actually a named, stored query. This query runs against one or more tables, retrieving a defined number of attributes that can then be referenced as if they were in a table in the database. Views can limit users’ ability to see attributes to only those that are relevant and/or approved for those users to see. They can also speed up the query process because although they may contain joins, the key columns for the joins can be indexed and cached, making the view’s query run faster than it would if it were not stored as a view. Views can be useful in data mining as data miners can be given read-only access to the view, upon which they can build data mining models, without having to have broader administrative rights on the database itself.

What is the Central Limit Theorem and why is it important?

An Introduction to the Central Limit Theorem

Answer: Suppose that we are interested in estimating the average height among all people. Collecting data for every person in the world is impractical, bordering on impossible. While we can’t obtain a height measurement from everyone in the population, we can still sample some people. The question now becomes, what can we say about the average height of the entire population given a single sample.
The Central Limit Theorem addresses this question exactly. Formally, it states that if we sample from a population using a sufficiently large sample size, the distribution of the sample means (the sampling distribution of the mean) will be approximately normal (assuming true random sampling), with its mean tending to the mean of the population and its variance equal to the variance of the population divided by the sample size.
What’s especially important is that this will be true regardless of the distribution of the original population.

Central Limit Theorem: Population Distribution

As we can see, the distribution is pretty ugly. It certainly isn’t normal, uniform, or any other commonly known distribution. In order to sample from the above distribution, we need to define a sample size, referred to as N. This is the number of observations that we will sample at a time. Suppose that we choose
N to be 3. This means that we will sample in groups of 3. So for the above population, we might sample groups such as [5, 20, 41], [60, 17, 82], [8, 13, 61], and so on.
Suppose that we gather 1,000 samples of 3 from the above population. For each sample, we can compute its average. If we do that, we will have 1,000 averages. This set of 1,000 averages is called a sampling distribution, and according to Central Limit Theorem, the sampling distribution will approach a normal distribution as the sample size N used to produce it increases. Here is what our sample distribution looks like for N = 3.

Sample Mean Distribution with N = 3

As we can see, it certainly looks uni-modal, though not necessarily normal. If we repeat the same process with a larger sample size, we should see the sampling distribution start to become more normal. Let’s repeat the same process again with N = 10. Here is the sampling distribution for that sample size.

Sample Mean Distribution with N = 10
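
This behavior is easy to reproduce numerically; a short sketch with NumPy (the skewed exponential population is an arbitrary choice):

import numpy as np

np.random.seed(0)
population = np.random.exponential(scale=2.0, size=100_000)   # a decidedly non-normal population

for n in (3, 10, 50):
    sample_means = np.random.choice(population, size=(1000, n)).mean(axis=1)
    print(n, round(sample_means.mean(), 3), round(sample_means.std(), 3))
# as n grows, the sampling distribution of the mean gets tighter and closer to normal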

Credit: Steve Nouri

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is an error introduced in the model due to an overly complex algorithm; the model performs very well on the training set but poorly on the test set. It can lead to high sensitivity to noise and overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by adjusting the C parameter, which controls how many margin violations are allowed in the training data; allowing more violations (a smaller C) increases the bias but decreases the variance.
3. The decision tree has low bias and high variance, you can decrease the depth of the tree or use fewer attributes.
4. Linear regression has low variance and high bias; you can increase the number of features or use a different regression model that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.

The Best Medium-Hard Data Analyst SQL Interview Questions

compiled by Google Data Analyst Zachary Thomas!

The Best Medium-Hard Data Analyst SQL Interview Questions

Self-Join Practice Problems: MoM Percent Change

Context: Oftentimes it’s useful to know how much a key metric, such as monthly active users, changes between months.
Say we have a table logins in the form:

SQL Self-Join Practice Mom Percent Change

Task: Find the month-over-month percentage change for monthly active users (MAU).

Solution:
(This solution, like other solution code blocks you will see in this doc, contains comments about SQL syntax that may differ between flavors of SQL or other comments about the solutions as listed)

SQL MoM Solution2

Tree Structure Labeling with SQL

Context: Say you have a table tree with a column of nodes and a corresponding column of parent nodes

Task: Write SQL that labels each node as a “Leaf”, “Inner”, or “Root” node, so that for the nodes above we get:

A solution which works for the above example will receive full credit, although you can receive extra credit for providing a solution that is generalizable to a tree of any depth (not just depth = 2, as is the case in the example above).

Solution: This solution works for the example above with tree depth = 2, but is not generalizable beyond that.

An alternate solution, that is generalizable to any tree depth:
Acknowledgement: this more generalizable solution was contributed by Fabian Hofmann

An alternate solution, without explicit joins:
Acknowledgement: William Chargin on 5/2/20 noted that WHERE parent IS NOT NULL is needed to make this solution return Leaf instead of NULL.

Retained Users Per Month with SQL

Acknowledgement: this problem is adapted from SiSense’s “Using Self Joins to Calculate Your Retention, Churn, and Reactivation Metrics” blog post

PART 1:
Context: Say we have login data in the table logins:

Task: Write a query that gets the number of retained users per month. In this case, retention for a given month is defined as the number of users who logged in that month who also logged in the immediately previous month.

Solution:

PART 2:

Task: Now we’ll take retention and turn it on its head: Write a query to find how many users last month did not come back this month. i.e. the number of churned users

Solution:

Note that there are solutions to this problem that can use LEFT or RIGHT joins.

PART 3:
Context: You now want to see the number of active users this month who have been reactivated; in other words, users who churned but became active again this month. Keep in mind a user can reactivate after churning earlier than the previous month. For example, a user active in February (appears in logins), with no activity in March and April, but active again in May (appears in logins), counts as a reactivated user for May.

Task: Create a table that contains the number of reactivated users per month.

Solution:
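One possible sketch (not the original solution, and assuming logins(user_id, date)): count users who are active this month, were not active last month, but were active in some earlier month.

WITH monthly AS (
  SELECT DISTINCT user_id, DATE_TRUNC('month', date) AS month
  FROM logins
)
SELECT curr.month,
       COUNT(DISTINCT curr.user_id) AS reactivated_users
FROM monthly AS curr
-- the user was active in some month before last month
JOIN monthly AS earlier
  ON earlier.user_id = curr.user_id
 AND earlier.month < curr.month - INTERVAL '1 month'
-- but not in the immediately previous month
LEFT JOIN monthly AS prev
  ON prev.user_id = curr.user_id
 AND prev.month = curr.month - INTERVAL '1 month'
WHERE prev.user_id IS NULL
GROUP BY 1
ORDER BY 1;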

Cumulative Sums with SQL

Acknowledgement: This problem was inspired by Sisense’s “Cash Flow modeling in SQL” blog post
Context: Say we have a table transactions in the form:

Where cash_flow is the revenues minus costs for each day.

Task: Write a query to get cumulative cash flow for each day such that we end up with a table in the form below:

Solution using a window function (more efficient):
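The original query is shown as an image; a sketch of the window-function idea, assuming transactions(date, cash_flow):

SELECT date,
       cash_flow,
       -- running total of cash_flow ordered by day
       SUM(cash_flow) OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS cumulative_cf
FROM transactions
ORDER BY date;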

Alternative Solution (less efficient):

Rolling Averages with SQL

Acknowledgement: This problem is adapted from Sisense’s “Rolling Averages in MySQL and SQL Server” blog post
Note: there are different ways to compute rolling/moving averages. Here we’ll use a preceding average which means that the metric for the 7th day of the month would be the average of the preceding 6 days and that day itself.
Context: Say we have table signups in the form:

Task: Write a query to get the 7-day rolling (preceding) average of daily sign-ups.

Solution1:

Solution 2 (using a window function, more efficient):
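A sketch of the window-function version, assuming the signups table has one row per day with columns date and sign_ups (these column names are assumptions):

SELECT date,
       -- average over the current day and the 6 preceding days
       AVG(sign_ups) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS avg_sign_ups_7d
FROM signups
ORDER BY date;

Note that if some days are missing from the table, a frame of 6 preceding rows is no longer exactly a 7-calendar-day window.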

Multiple Join Conditions in SQL

Acknowledgement: This problem was inspired by Sisense’s “Analyzing Your Email with SQL” blog post
Context: Say we have a table emails that includes emails sent to and from zach@g.com:

Task: Write a query to get the response time per email (id) sent to zach@g.com. Do not include ids that did not receive a response from zach@g.com. Assume each email thread has a unique subject. Keep in mind that a thread may have multiple responses back and forth between zach@g.com and another email address.

Solution:
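The original solution is an image; here is one possible sketch, assuming the emails table has columns id, subject, "from", "to", and "timestamp" (these names are assumptions, and "from"/"to" are quoted because they are reserved words):

SELECT original.id,
       MIN(response."timestamp") - original."timestamp" AS response_time
FROM emails AS original
-- match the earliest reply in the same thread going in the opposite direction
JOIN emails AS response
  ON response.subject      = original.subject
 AND response."from"       = original."to"
 AND response."to"         = original."from"
 AND response."timestamp"  > original."timestamp"
WHERE original."to" = 'zach@g.com'
GROUP BY original.id, original."timestamp";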

SQL Window Function Practice Problems

#1: Get the ID with the highest value
Context: Say we have a table salaries with data on employee salary and department in the following format:

Task: Write a query to get the empno with the highest salary. Make sure your solution can handle ties!
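The original solution is shown as an image; a minimal sketch that handles ties by using RANK():

SELECT empno
FROM (
  SELECT empno,
         RANK() OVER (ORDER BY salary DESC) AS salary_rank
  FROM salaries
) AS ranked
WHERE salary_rank = 1;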

#2: Average and rank with a window function (multi-part)

PART 1:
Context: Say we have a table salaries in the format:

Task: Write a query that returns the same table, but with a new column that has average salary per depname. We would expect a table in the form:

Solution:
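A sketch of one way to do it (the original solution is an image), assuming salaries(depname, empno, salary):

SELECT depname,
       empno,
       salary,
       -- average salary computed per department, repeated on every row
       ROUND(AVG(salary) OVER (PARTITION BY depname)) AS avg_salary
FROM salaries;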

PART 2:
Task: Write a query that adds a column with the rank of each employee based on their salary within their department, where the employee with the highest salary gets the rank of 1. We would expect a table in the form:

Solution:
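A sketch along the same lines (assuming salaries(depname, empno, salary)):

SELECT depname,
       empno,
       salary,
       -- rank within each department, highest salary gets rank 1
       RANK() OVER (PARTITION BY depname ORDER BY salary DESC) AS salary_rank
FROM salaries;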

Predictive Modelling Questions

Source:  datasciencehandbook.me

 

1-  (Given a Dataset) Analyze this dataset and give me a model that can predict this response variable. 

2-  What could be some issues if the distribution of the test data is significantly different than the distribution of the training data?

3-  What are some ways I can make my model more robust to outliers?

4-  What are some differences you would expect in a model that minimizes squared error, versus a model that minimizes absolute error? In which cases would each error metric be appropriate?

5- What error metric would you use to evaluate how good a binary classifier is? What if the classes are imbalanced? What if there are more than 2 groups?

6-  What are various ways to predict a binary response variable? Can you compare two of them and tell me when one would be more appropriate? What’s the difference between these? (SVM, Logistic Regression, Naive Bayes, Decision Tree, etc.)

7-  What is regularization and where might it be helpful? What is an example of using regularization in a model?

8-  Why might it be preferable to include fewer predictors over many?

9-  Given training data on tweets and their retweets, how would you predict the number of retweets of a given tweet after 7 days after only observing 2 days worth of data?

10-  How could you collect and analyze data to use social media to predict the weather?

11- How would you construct a feed to show relevant content for a site that involves user interactions with items?

12- How would you design the people you may know feature on LinkedIn or Facebook?

13- How would you predict who someone may want to send a Snapchat or Gmail to?

14- How would you suggest to a franchise where to open a new store?

15- In a search engine, given partial data on what the user has typed, how would you predict the user’s eventual search query?

16- Given a database of all previous alumni donations to your university, how would you predict which recent alumni are most likely to donate?

17- You’re Uber and you want to design a heatmap to recommend to drivers where to wait for a passenger. How would you approach this?

18- How would you build a model to predict a March Madness bracket?

19- You want to run a regression to predict the probability of a flight delay, but there are flights with delays of up to 12 hours that are really messing up your model. How can you address this?

 

Data Analysis Interview Questions

Source:  datasciencehandbook.me

1- (Given a Dataset) Analyze this dataset and tell me what you can learn from it.

2- What is R2? What are some other metrics that could be better than R2 and why?

3- What is the curse of dimensionality?

4- Is more data always better?

5- What are advantages of plotting your data before performing analysis?

6- How can you make sure that you don’t analyze something that ends up meaningless?

7- What is the role of trial and error in data analysis? What is the role of making a hypothesis before diving in?

8- How can you determine which features are the most important in your model?

9- How do you deal with some of your predictors being missing?

10- You have several variables that are positively correlated with your response, and you think combining all of the variables could give you a good prediction of your response. However, you see that in the multiple linear regression, one of the weights on the predictors is negative. What could be the issue?

11- Let’s say you’re given an unfeasible amount of predictors in a predictive modeling task. What are some ways to make the prediction more feasible?

12- Now you have a feasible amount of predictors, but you’re fairly sure that you don’t need all of them. How would you perform feature selection on the dataset?

13- Your linear regression didn’t run and communicates that there are an infinite number of best estimates for the regression coefficients. What could be wrong?

14- You run your regression on different subsets of your data, and find that in each subset, the beta value for a certain variable varies wildly. What could be the issue here?

15- What is the main idea behind ensemble learning? If I had many different models that predicted the same response variable, what might I want to do to incorporate all of the models? Would you expect this to perform better than an individual model or worse?

16- Given that you have wifi data in your office, how would you determine which rooms and areas are underutilized and overutilized?

17- How could you use GPS data from a car to determine the quality of a driver?

18- Given accelerometer, altitude, and fuel usage data from a car, how would you determine the optimum acceleration pattern to drive over hills?

19- Given position data of NBA players in a season’s games, how would you evaluate a basketball player’s defensive ability?

20- How would you quantify the influence of a Twitter user?

21- Given location data of golf balls in games, how would you construct a model that can advise golfers where to aim?

22- You have 100 mathletes and 100 math problems. Each mathlete gets to choose 10 problems to solve. Given data on who got what problem correct, how would you rank the problems in terms of difficulty?

23- You have 5000 people that rank 10 sushis in terms of saltiness. How would you aggregate this data to estimate the true saltiness rank in each sushi?

24-Given data on congressional bills and which congressional representatives co-sponsored the bills, how would you determine which other representatives are most similar to yours in voting behavior? How would you evaluate who is the most liberal? Most republican? Most bipartisan?

25- How would you come up with an algorithm to detect plagiarism in online content?

26- You have data on all purchases of customers at a grocery store. Describe to me how you would program an algorithm that would cluster the customers into groups. How would you determine the appropriate number of clusters to include?

27- Let’s say you’re building the recommended music engine at Spotify to recommend people music based on past listening history. How would you approach this problem?

28- Explain how boosted tree models work in simple language.

29- What sort of data sampling techniques would you use for a low signal temporal classification problem?

30- How would you deal with categorical variables and what considerations would you keep in mind?

31- How would you identify leakage in your machine learning model?

32- How would you apply a machine learning model in a live experiment?

33- What is the difference between sensitivity, precision, and recall? When would you use these over accuracy? Name a few situations.

34- What’s the importance of train, val, test splits and how would you split or create your dataset – how would this impact your model metrics?

35- What are some simple ways to optimise your model and how would you know you’ve reached a stable and performant model?

Statistical Inference Interview Questions

Source:  datasciencehandbook.me

1- In an A/B test, how can you check if assignment to the various buckets was truly random?

2- What might be the benefits of running an A/A test, where you have two buckets who are exposed to the exact same product?

3- What would be the hazards of letting users sneak a peek at the other bucket in an A/B test?

4- What would be some issues if blogs decide to cover one of your experimental groups?

5- How would you conduct an A/B test on an opt-in feature?

6- How would you run an A/B test for many variants, say 20 or more?

7- How would you run an A/B test if the observations are extremely right-skewed?

8- I have two different experiments that both change the sign-up button to my website. I want to test them at the same time. What kinds of things should I keep in mind?

9- What is a p-value? What is the difference between type-1 and type-2 error?

10- You are AirBnB and you want to test the hypothesis that a greater number of photographs increases the chances that a buyer selects the listing. How would you test this hypothesis?

11- How would you design an experiment to determine the impact of latency on user engagement?

12- What is maximum likelihood estimation? Could there be any case where it doesn’t exist?

13- What’s the difference between a MAP, MOM, MLE estimator? In which cases would you want to use each?

14- What is a confidence interval and how do you interpret it?

15- What is unbiasedness as a property of an estimator? Is this always a desirable property when performing inference? What about in data analysis or predictive modeling?

Product Metric Interview Questions

Source:  datasciencehandbook.me

1- What would be good metrics of success for an advertising-driven consumer product? (Buzzfeed, YouTube, Google Search, etc.) A service-driven consumer product? (Uber, Flickr, Venmo, etc.)

2- What would be good metrics of success for a productivity tool? (Evernote, Asana, Google Docs, etc.) A MOOC? (edX, Coursera, Udacity, etc.)

3- What would be good metrics of success for an e-commerce product? (Etsy, Groupon, Birchbox, etc.) A subscription product? (Netflix, Birchbox, Hulu, etc.) Premium subscriptions? (OKCupid, LinkedIn, Spotify, etc.)

4- What would be good metrics of success for a consumer product that relies heavily on engagement and interaction? (Snapchat, Pinterest, Facebook, etc.) A messaging product? (GroupMe, Hangouts, Snapchat, etc.)

5- What would be good metrics of success for a product that offered in-app purchases? (Zynga, Angry Birds, other gaming apps)

6- A certain metric is violating your expectations by going down or up more than you expect. How would you try to identify the cause of the change?

7- Growth for total number of tweets sent has been slow this month. What data would you look at to determine the cause of the problem?

8- You’re a restaurant and are approached by Groupon to run a deal. What data would you ask from them in order to determine whether or not to do the deal?

9- You are tasked with improving the efficiency of a subway system. Where would you start?

10- Say you are working on Facebook News Feed. What would be some metrics that you think are important? How would you make the news each person gets more relevant?

11- How would you measure the impact that sponsored stories on Facebook News Feed have on user engagement? How would you determine the optimum balance between sponsored stories and organic content on a user’s News Feed?

12- You are on the data science team at Uber and you are asked to start thinking about surge pricing. What would be the objectives of such a product and how would you start looking into this?

13- Say that you are Netflix. How would you determine what original series you should invest in and create?

14- What kind of services would find churn (metric that tracks how many customers leave the service) helpful? How would you calculate churn?

15- Let’s say that you’re scheduling content for a content provider on television. How would you determine the best times to schedule content?

Programming Questions

Source:  datasciencehandbook.me

1- Write a function to calculate all possible assignment vectors of 2n users, where n users are assigned to group 0 (control), and n users are assigned to group 1 (treatment).

2- Given a list of tweets, determine the top 10 most used hashtags.

3- Program an algorithm to find the best approximate solution to the knapsack problem in a given time.

4- Program an algorithm to find the best approximate solution to the travelling salesman problem in a given time.

5- You have a stream of data coming in of size n, but you don’t know what n is ahead of time. Write an algorithm that will take a random sample of k elements. Can you write one that takes O(k) space?

6- Write an algorithm that can calculate the square root of a number.

7- Given a list of numbers, can you return the outliers?

8- When can parallelism make your algorithms run faster? When could it make your algorithms run slower?

9- What are the different types of joins? What are the differences between them?

10- Why might a join on a subquery be slow? How might you speed it up?

11- Describe the difference between primary keys and foreign keys in a SQL database.

12- Given a COURSES table with columns course_id and course_name, a FACULTY table with columns faculty_id and faculty_name, and a COURSE_FACULTY table with columns faculty_id and course_id, how would you return a list of faculty who teach a course given the name of a course?

13- Given an IMPRESSIONS table with ad_id, click (an indicator that the ad was clicked), and date, write a SQL query that will tell me the click-through rate of each ad by month.

14- Write a query that returns the name of each department and a count of the number of employees in each:
EMPLOYEES containing: Emp_ID (Primary key) and Emp_Name
EMPLOYEE_DEPT containing: Emp_ID (Foreign key) and Dept_ID (Foreign key)
DEPTS containing: Dept_ID (Primary key) and Dept_Name

Probability Questions

1- Bobo the amoeba has a 25%, 25%, and 50% chance of producing 0, 1, or 2 offspring, respectively. Each of Bobo’s descendants also has the same probabilities. What is the probability that Bobo’s lineage dies out?

2- In any 15-minute interval, there is a 20% probability that you will see at least one shooting star. What is the probability that you see at least one shooting star in the period of an hour?

3- How can you generate a random number between 1 and 7 with only a die?

4- How can you get a fair coin toss if someone hands you a coin that is weighted to come up heads more often than tails?

5- You have an 50-50 mixture of two normal distributions with the same standard deviation. How far apart do the means need to be in order for this distribution to be bimodal?

6- Given draws from a normal distribution with known parameters, how can you simulate draws from a uniform distribution?

7- A certain couple tells you that they have two children, at least one of which is a girl. What is the probability that they have two girls?

8- You have a group of couples that decide to have children until they have their first girl, after which they stop having children. What is the expected gender ratio of the children that are born? What is the expected number of children each couple will have?

9- How many ways can you split 12 people into 3 teams of 4?

10- Your hash function assigns each object to a number between 1 and 10, each with equal probability. With 10 objects, what is the probability of a hash collision? What is the expected number of hash collisions? What is the expected number of hashes that are unused?

11- You call 2 UberX’s and 3 Lyfts. If the time that each takes to reach you is IID, what is the probability that all the Lyfts arrive first? What is the probability that all the UberX’s arrive first?

12- I write a program that should print out all the numbers from 1 to 300, but prints out Fizz instead if the number is divisible by 3, Buzz instead if the number is divisible by 5, and FizzBuzz if the number is divisible by both 3 and 5. What is the total number of numbers that are either Fizzed, Buzzed, or FizzBuzzed?

13- On a dating site, users can select 5 out of 24 adjectives to describe themselves. A match is declared between two users if they match on at least 4 adjectives. If Alice and Bob randomly pick adjectives, what is the probability that they form a match?

14- A lazy high school senior types up applications and envelopes to n different colleges, but puts the applications randomly into the envelopes. What is the expected number of applications that go to the right college?

15- Let’s say you have a very tall father. On average, what would you expect the height of his son to be? Taller, equal, or shorter? What if you had a very short father?

16- What’s the expected number of coin flips until you get two heads in a row? What’s the expected number of coin flips until you get two tails in a row?

17- Let’s say we play a game where I keep flipping a coin until I get heads. If the first time I get heads is on the nth coin flip, then I pay you 2^(n-1) dollars. How much would you pay me to play this game?

18- You have two coins, one of which is fair and comes up heads with probability 1/2, and the other of which is biased and comes up heads with probability 3/4. You randomly pick a coin and flip it twice, and get heads both times. What is the probability that you picked the fair coin?

19- You have a 0.1% chance of picking up a coin with both heads, and a 99.9% chance that you pick up a fair coin. You flip your coin and it comes up heads 10 times. What’s the chance that you picked up the fair coin, given the information that you observed?

Reference: 800 Data Science Questions & Answers doc by

 

Direct download here

Reference: 164 Data Science Interview Questions and Answers by 365 Data Science

Download it here

DataWarehouse Cheat Sheet

What are Differences between Supervised and Unsupervised Learning?

Supervised | Unsupervised
Input data is labelled | Input data is unlabeled
Split into training/validation/test | No split
Used for prediction | Used for analysis
Classification and regression | Clustering, dimension reduction, and density estimation

Python Cheat Sheet

Download it here


Data Sciences Cheat Sheet

Download it here

Pandas Cheat Sheet

Download it here

Learn SQL with Practical Exercises

SQL is definitely one of the most fundamental skills needed to be a data scientist.

This is a comprehensive handbook that can help you to learn SQL (Structured Query Language), which could be directly downloaded here

Credit: D Armstrong

Data Visualization: A comprehensive VIP Matplotlib Cheat sheet

Credit: Matplotlib

Download it here

Power BI for Intermediates

Download it here

Credit: Soheil Bakhshi and Bruce Anderson


Python Frameworks for Data Science

Natural Language Processing (NLP) is one of the top areas today.

Some of the applications are:

  • Reading printed text and correcting reading errors
  • Find and replace
  • Correction of spelling mistakes
  • Development of aids
  • Text summarization
  • Language translation
  • and many more.

NLP is a great area if you are planning to work in the area of artificial intelligence.

High Level Look of AI/ML Algorithms

Best Machine Learning Algorithms for Classification: Pros and Cons

Business Analytics in one image

Curated papers, articles, and blogs on data science & machine learning in production from companies like Google, LinkedIn, Uber, Facebook, Twitter, Airbnb, and …

  1. Data Quality
  2. Data Engineering
  3. Data Discovery
  4. Feature Stores
  5. Classification
  6. Regression
  7. Forecasting
  8. Recommendation
  9. Search & Ranking
  10. Embeddings
  11. Natural Language Processing
  12. Sequence Modelling
  13. Computer Vision
  14. Reinforcement Learning
  15. Anomaly Detection
  16. Graph
  17. Optimization
  18. Information Extraction
  19. Weak Supervision
  20. Generation
  21. Audio
  22. Validation and A/B Testing
  23. Model Management
  24. Efficiency
  25. Ethics
  26. Infra
  27. MLOps Platforms
  28. Practices
  29. Team Structure
  30. Fails

How to get a job in data science – a semi-harsh Q/A guide.

HOW DO I GET A JOB IN DATA SCIENCE?

Hey you. Yes you, person asking “how do I get a job in data science/analytics/MLE/AI whatever BS job with data in the title?”. I got news for you. There are two simple rules to getting one of these jobs.

Have experience.

Don’t have no experience.

There are approximately 1000 entry level candidates who think they’re qualified because they did a 24 week bootcamp for every entry level job. I don’t need to be a statistician to tell you your odds of landing one of these aren’t great.

HOW DO I GET EXPERIENCE?

Are you currently employed? If not, get a job. If you are, figure out a way to apply data science in your job, then put it on your resume. Mega bonus points here if you can figure out a way to attribute a dollar value to your contribution. Talk to your supervisor about career aspirations at year-end/mid-year reviews. Maybe you’ll find a way to transfer to a role internally and skip the whole resume ignoring phase. Alternatively, network. Be friends with people who are in the roles you want to be in, maybe they’ll help you find a job at their company.

WHY AM I NOT GETTING INTERVIEWS?

IDK. Maybe you don’t have the required experience. Maybe there are 500+ other people applying for the same position. Maybe your resume stinks. If you’re getting 1/20 response rate, you’re doing great. Quit whining.

IS XYZ DEGREE GOOD FOR DATA SCIENCE?

Does your degree involve some sort of non-remedial math higher than college algebra? Does your degree involve taking any sort of programming classes? If yes, congratulations, your degree will pass most base requirements for data science. Is it the best? Probably not, unless you’re CS or some really heavy math degree where half your classes are taught in Greek letters. Don’t come at me with those art history and underwater basket weaving degrees unless you have multiple years experience doing something else.

SHOULD I DO XYZ BOOTCAMP/MICROMASTERS?

Do you have experience? No? This ain’t gonna help you as much as you think it might. Are you experienced and want to learn more about how data science works? This could be helpful.

SHOULD I DO XYZ MASTER’S IN DATA SCIENCE PROGRAM?

Congratulations, doing a Master’s is usually a good idea and will help make you more competitive as a candidate. Should you shell out 100K for one when you can pay 10K for one online? Probably not. In all likelihood, you’re not gonna get $90K in marginal benefit from the more expensive program. Pick a known school (probably avoid really obscure schools, the name does count for a little) and you’ll be fine. Big bonus here if you can sucker your employer into paying for it.

WILL XYZ CERTIFICATE HELP MY RESUME?

Does your certificate say “AWS” or “AZURE” on it? If not, no.

DO I NEED TO KNOW XYZ MATH TOPIC?

Yes. Stop asking. Probably learn probability, be familiar with linear algebra, and understand what the hell a partial derivative is. Learn how to test hypotheses. Ultimately you need to know what the heck is going on math-wise in your predictions otherwise the company is going to go bankrupt and it will be all your fault.

WHAT IF I’M BAD AT MATH?

Do some studying or something. MIT opencourseware has a bunch of free recorded math classes. If you want to learn some Linear Algebra, Gilbert Strang is your guy.

WHAT PROGRAMMING LANGUAGES SHOULD I LEARN?

STOP ASKING THIS QUESTION. I CAN GOOGLE “HOW TO BE A DATA SCIENTIST” AND EVERY SINGLE GARBAGE TDS ARTICLE WILL TELL YOU SQL AND PYTHON/R. YOU’RE LUCKY YOU DON’T HAVE TO DEAL WITH THE JOY OF SEGMENTATION FAULTS TO RUN A SIMPLE LINEAR REGRESSION.

SHOULD I LEARN PYTHON OR R?

Both. Python is more widely used and tends to be more general purpose than R. R is better at statistics and data analysis, but is a bit more niche. Take your pick to start, but ultimately you’re gonna want to learn both you slacker.

SHOULD I MAKE A PORTFOLIO?

Yes. And don’t put some BS housing price regression, iris classification, or titanic survival project on it either. Next question.

WHAT SHOULD I DO AS A PROJECT?

IDK what are you interested in? If you say twitter sentiment stock market prediction go sit in the corner and think about what you just said. Every half brained first year student who can pip install sklearn and do model.fit() has tried unsuccessfully to predict the stock market. The efficient market hypothesis is a thing for a reason. There are literally millions of other free datasets out there, and you have one of the most powerful search engines at your fingertips to go find them. Pick something you’re interested in, find some data, and analyze it.

DO I NEED TO BE GOOD WITH PEOPLE? (courtesy of /u/bikeskata)

Yes! First, when you’re applying, no one wants to work with a weirdo. You should be able to have a basic conversation with people, and they shouldn’t come away from it thinking you’ll follow them home and wear their skin as a suit. Once you get a job, you’ll be interacting with colleagues, and you’ll need them to care about your analysis. Presumably, there are non-technical people making decisions you’ll need to bring in as well. If you can’t explain to a moderately intelligent person why they should care about the thing that took you 3 days (and cost $$$ in cloud computing costs), you probably won’t have your position for long. You don’t need to be the life of the party, but you should be pleasant to be around.

Credit: u/save_the_panda_bears

Why is columnar storage efficient for analytics workloads?

  • Columnar Storage enables better compression ratios and improves table scans for aggregate and complex queries.
  • Is optimized for scanning large data sets and complex analytics queries
  • Enables a data block to store and compress significantly more values for a column compared to row-based storage
  • Eliminates the need to read redundant data by reading only the columns that you include in your query.
  • Offers overall performance benefits that can help eliminate the need to aggregate data into cubes as in some other OLAP systems.

What are the integrated data sources for Amazon Redshift?

  • AWS DMS
  • Amazon DynamoDB
  • AWS Glue
  • Amazon EMR
  • Amazon Kinesis
  • Amazon S3
  • SSH enabled host

How do you interact with Amazon Redshift?

  • AWS management console
  • AWS CLI
  • AWS SDKs
  • Amazon Redshift Query API
  • or SQL Client tools that support JDBC and ODBC protocols

How do you bound a set of data points (fitting, data, Mathematica)?

One of the first things you need to do when fitting a model to data is to ensure that all of your data points are within the range of the model. This is known as “bounding” the data points. There are a few different ways to bound data points, but one of the most commonly used methods is to simply discard any data points that are outside of the range of the model. This can be done manually, but it’s often more convenient to use a tool like Mathematica to automate the process. By bounding your data points, you can be sure that your model will fit the data more accurately.

Any good data scientist knows that fitting a model to data is essential to understanding the underlying patterns in that data. But fitting a model is only half the battle; once you’ve fit a model, you need to determine how well it actually fits the data. This is where bounding comes in.

Bounding allows you to assess how well a given set of data points fits within the range of values predicted by a model. It’s a simple concept, but it can be mathematically complex to actually do. Mathematica makes it easy, though, with its built-in function for fitting and bounding data. Just input your data and let Mathematica do the work for you!

In SQL, What is the Difference between DDL, DCL, and DML?

DDL_vs_DCL_vs_DML

Data definition language (DDL) refers to the subset of SQL commands that define data structures and objects such as databases, tables, and views. DDL commands include the following:

• CREATE: used to create a new object.

• DROP: used to delete an object.

• ALTER: used to modify an object.

• RENAME: used to rename an object.

• TRUNCATE: used to remove all rows from a table without deleting the table itself.

Data manipulation language (DML) refers to the subset of SQL commands that are used to work with data. DML commands include the following:

• SELECT: used to request records from one or more tables.

• INSERT: used to insert one or more records into a table.

• UPDATE: used to modify the data of one or more records in a table.

• DELETE: used to delete one or more records from a table.

• EXPLAIN: used to analyze and display the expected execution plan of a SQL statement.

• LOCK: used to lock a table from write operations (INSERT, UPDATE, DELETE) and prevent concurrent operations from conflicting with one another.

Data control language (DCL) refers to the subset of SQL commands that are used to configure permissions to objects. DCL commands include:

• GRANT: used to grant access and permissions to a database or object in a database, such as a schema or table.

• REVOKE: used to remove access and permissions from a database or objects in a database
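To make the three categories concrete, here is a small, hypothetical example (the table and role names are made up for illustration, not taken from this post):

-- DDL: define the structure
CREATE TABLE customers (
  customer_id   INT PRIMARY KEY,
  customer_name VARCHAR(100)
);

-- DML: work with the data
INSERT INTO customers (customer_id, customer_name) VALUES (1, 'Acme Corp');
UPDATE customers SET customer_name = 'Acme Corporation' WHERE customer_id = 1;
SELECT * FROM customers;
DELETE FROM customers WHERE customer_id = 1;

-- DCL: configure permissions
GRANT SELECT ON customers TO analyst_role;
REVOKE SELECT ON customers FROM analyst_role;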

What is Big Data?

“Big Data is high-volume, high-velocity, and/or high-variety Information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.”

What are the 5 Vs of Big Data?

  • Volume: the sheer scale of the data
  • Variety: the different forms and sources of the data
  • Velocity: the speed at which data is generated and captured
  • Variability: the measure of consistency in the data's meaning
  • Veracity: the quality and trustworthiness of the data

What are typical Use Cases of Big Data?

  • Customer segmentation
  • Marketing spend optimization
  • Financial modeling and forecasting
  • Ad targeting and real-time bidding
  • Clickstream analysis
  • Fraud detection

What are example of Data Sources?

  • Relational Databases
  • NoSQL databases
  • Web servers
  • Mobile phones
  • Tablets
  • Data feeds

What are example of Data Formats?

  • Structured, semi-structured, and unstructured
  • Text
  • Binary
  • Streaming and near real-time
  • Batched

Big Data vs Data Warehouses

Big Data is a concept. 

A data warehouse:

  • can be used with both small and large datasets
  • can be used in a Big Data system

How should you split your data up for loading into the data warehouse?

Use the same number of files as you have slices in your cluster, or a multiple of the number of slices.

Why do tables need to be vacuumed?

When values are deleted from a table, Amazon Redshift does not automatically reclaim the space.

Difference Between Amazon Redshift SQL and PostgreSQL

Amazon Redshift SQL is based on PostgreSQL 8.0.2 but has important implementation differences:

  • COPY is highly specialized to enable loading of data from other AWS services and to facilitate automatic compression.
  • VACUUM reclaims disk space and re-sorts all rows.
  • Some PostgreSQL features, data types, and functions are not supported in Amazon Redshift.

What is the difference between STL tables and STV tables in Redshift?

STL tables contain log data that has been persisted to disk. STV tables contain snapshots of the current system based on transient, in-memory data that is not persisted to disk-based logs or other tables.

How does code compilation affect query performance in Redshift?

The compiled code is cached and available across sessions to speed up subsequent processing of that query.

What is data redistribution in Redshift?

The process of moving data around the cluster to facilitate a join.

What is Dark Data?

Dark data is data that is collected and stored but never used again.

Amazon EMR vs Amazon Redshift

Amazon EMR vs Amazon Redshift

Amazon Redshift Spectrum is the best of both worlds:

  • Can analyze data directly from Amazon S3, like Amazon EMR does
  • Retains efficient processing of highly complex queries, like Amazon Redshift does
  • And it’s built-in

Data Analytics Ecosystem on AWS:

Data Analytics Ecosystem on AWS

Which tasks must be completed before using Amazon Redshift Spectrum?

  • Define an external schema and create tables.
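Roughly, that looks like the following in Redshift SQL (the schema, database, IAM role ARN, table definition, and S3 location below are placeholders, not values from this post):

-- External schema pointing at a database in the AWS Glue Data Catalog
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- External table over files that already live in Amazon S3
CREATE EXTERNAL TABLE spectrum_schema.sales (
  sale_id   INT,
  sale_date DATE,
  amount    DECIMAL(10,2)
)
STORED AS PARQUET
LOCATION 's3://my-example-bucket/sales/';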

What can be used as a data store for Amazon Redshift Spectrum?

  • Hive Metastore and AWS Glue.

What is the difference between the audit logging feature in Amazon Redshift and Amazon CloudTrail trails?

Redshift Audit logs contain information about database activities. Amazon CloudTrail trails contain information about service activities.

How can you receive notifications about events in your cluster?

Configure an Amazon SNS topic and choose events to trigger the notification to be sent to topic subscribers.

Where does Amazon Redshift store the snapshots used to backup your cluster?

In an Amazon S3 bucket.

Benefits of AWS DAS-C01 and AWS MLS-C01 Certifications:

iOS :  https://apps.apple.com/ca/app/aws-data-analytics-sp-exam-pro/id1604021741

 
 
Benefits of AWS DAS-C01 and AWS MLS-C01 Certifications

AWS Data analytics DAS-C01 Exam Prep

Cap Theorem:

Cap Theorem

Data Warehouse Definition:

Data Warehouse Definitions

Data Warehouse are Subject-Oriented:

Data Warehouse – Subject-area

AWS Analytics Services:

Amazon Elastic MapReduce (Amazon EMR) simplifies big data processing by providing a managed Hadoop framework that makes it easy, fast, and cost-effective for you to distribute and process vast amounts of your data across dynamically scalable Amazon Elastic Compute Cloud (Amazon EC2) instances. You can also run other popular distributed frameworks such as Apache Spark and Presto in Amazon EMR, and interact with data in other AWS data stores, such as Amazon S3 and Amazon DynamoDB.

• Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.

• Amazon Kinesis is a platform for streaming data on AWS, that offers powerful services that make it easy to load and analyze streaming data, and that also provides the ability for you to build custom streaming data applications for specialized needs.

• Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. When your models are ready, Amazon Machine Learning makes it easy to obtain predictions for your application using simple APIs, without having to implement custom prediction generation code or manage any infrastructure.

• Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy for all employees to build visualizations, perform one-time analysis, and quickly get business insights from their data.

AWS Database Services:

AWS Database Services

Choosing between NoSQL or SQL Databases:

Amazon SQL vs NoSQL

 

Can you give an example of a successful implementation of an enterprise wide data warehouse solution?

1- DataWarehouse Implementation at Phillips U.S. based division

2- Financial Times

“Amazon Redshift is the single source of truth for our user data. It stores data on customer usage, customer service, and advertising, and then presents those data back to the business in multiple views.” –John O’Donovan, CTO, Financial Times

What is explained variation and unexplained variation in linear regression analysis?

In statistics, explained variation measures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. Often, variation is quantified as variance; then, the more specific term explained variance can be used.

In linear regression, the goal is to find the line of best fit that minimizes the sum of the squared residuals, where a residual is the difference between an actual y-value and the predicted y-value. The overall variation in the data can be partitioned into two components. The explained variation is the sum of the squared differences between each predicted y-value and the mean of y. The unexplained variation is the sum of the squared differences between each actual y-value and its corresponding predicted y-value. In other words, explained variation measures how well the line of best fit accounts for the data, while unexplained variation measures how much error remains in the predictions. By understanding both components, data scientists can better judge how well the model fits the data and make more accurate predictions.
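In standard regression notation (not shown in the original post), with \(y_i\) the observed value, \(\hat{y}_i\) the predicted value, and \(\bar{y}\) the mean of the observed values, ordinary least-squares regression with an intercept satisfies

\[
\underbrace{\sum_i (y_i - \bar{y})^2}_{\text{total variation}}
= \underbrace{\sum_i (\hat{y}_i - \bar{y})^2}_{\text{explained variation}}
+ \underbrace{\sum_i (y_i - \hat{y}_i)^2}_{\text{unexplained variation}},
\qquad
R^2 = \frac{\text{explained variation}}{\text{total variation}}.
\]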

Normalization and Standardization are both rescaling techniques; they make your data unitless.

Assume you have 2 features, F1 and F2.

F1 ranges from 0 to 100, and F2 ranges from 0 to 0.10.

When you use an algorithm that uses distance as the measure, you encounter a problem:

F1   F2
20   0.2
26   0.2
20   0.9

row1 – row2 : |20 – 26| + |0.2 – 0.2| = 6

row1 – row3 : |20 – 20| + |0.2 – 0.9| = 0.7

You may conclude that row3 is nearest to row1, but that is wrong.

The right way to calculate it is on the rescaled values:

row1 – row2 : |20 – 26|/100 + |0.2 – 0.2|/0.10 = 0.06

row1 – row3 : |20 – 20|/100 + |0.2 – 0.9|/0.10 = 7

So row2 is the nearest to row1.

Normalization brings data between 0 and 1.

Standardization rescales data to have a mean of 0 and a standard deviation of 1.

Normalization = (X – Xmin) / (Xmax – Xmin)

Standardization = (x – µ) / σ

Regularization is a concept related to underfitting and overfitting:

if the error is high on both the training data and the test data, the model is underfitting

if the error is high on the test data but low on the training data, the model is overfitting

Regularization is a way to manage this trade-off and reach the optimal error. Source: ABC of Data Science

TensorFlow

TensorFlow, an open-source machine learning library developed at Google for numerical computation using data flow graphs, is arguably one of the best, with Gmail, Uber, Airbnb, Nvidia, and lots of other prominent brands using it. It’s handy for creating and experimenting with deep learning architectures, and its formulation is convenient for data integration, such as inputting graphs, SQL tables, and images together.

Deepchecks

Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort. This includes checks related to various types of issues, such as model performance, data integrity, distribution mismatches, and more.

Scikit-learn

Scikit-learn is a very popular open-source machine learning library for the Python programming language. Constant updates for efficiency improvements, coupled with the fact that it is open source, make it a go-to framework for machine learning in the industry.

Keras

Keras is an open-source neural network library written in Python. It is capable of running on top of other popular lower-level libraries such as Tensorflow, Theano & CNTK. This one might be your new best friend if you have a lot of data and/or you’re after the state-of-the-art in AI: deep learning.

Pandas

Pandas is yet another open-source software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. Pandas works well with incomplete, messy, and unlabeled data and provides tools for shaping, merging, reshaping, and slicing datasets.

Spark MLlib

Spark MLlib is a popular machine learning library; as per one survey, almost 6% of data scientists use it. It has support for Java, Scala, Python, and R, and you can use it on Hadoop, Apache Mesos, Kubernetes, and other cloud services against multiple data sources.

PyTorch

PyTorch was developed by Facebook’s artificial intelligence research group and is the primary software tool for deep learning after TensorFlow. Unlike TensorFlow, the PyTorch library operates with a dynamically updated graph, which means it allows you to make changes to the architecture during the process. By Niklas Steiner

Whenever we fit a machine learning algorithm to a dataset, we typically split the dataset into three parts:

1. Training Set: Used to train the model.

2. Validation Set: Used to optimize model parameters.

3. Test Set: Used to get an unbiased estimate of the final model performance.

The following diagram provides a visual explanation of these three different types of datasets:

One point of confusion for students is the difference between the validation set and the test set.

In simple terms, the validation set is used to optimize the model parameters while the test set is used to provide an unbiased estimate of the final model.

It can be shown that the error rate as measured by k-fold cross validation tends to underestimate the true error rate once the model is applied to an unseen dataset.

Thus, we fit the final model to the test set to get an unbiased estimate of what the true error rate will be in the real world.

If you are looking for a solid way of testing your ML algorithms, then I would recommend this open-source interactive demo.

Source: ABC of Data Science and ML

The general answer to your question is: when our model needs it!

Yeah, That’s it!

In detail:


  1. When we feel that the model we are going to use can’t read the format of the data we have, we need to normalise the data.

e.g. When our data is ‘text’, we perform Lemmatization, Stemming, etc. to normalize/transform it.

2. Another case would be when the values in certain columns (features) do not scale with other features; this may lead to poor performance of our model, so we need to normalise our data here as well (better said, the features have different ranges).

e.g. Features: F1, F2, F3

range(F1): 0 – 100

range(F2): 50 – 100

range(F3): 900 – 10,000

In the above situation, the model would give more importance to F3 (bigger numerical values), and thus our model would be biased, resulting in bad accuracy. Here, we need to apply scaling (such as the StandardScaler() function in Python, etc.).

Transformation and scaling are some common normalisation methods.

Go through these two articles to have a better understanding:

  1. Understand Data Normalization in Machine Learning
  2. Why Data Normalization is necessary for Machine Learning models

Source: ABC of Data Science and ML

Is it possible to use linear regression for forecasting on non-stationary data (time series)? If yes, then how can we do that? If no, then why not?

 

Linear regression is a machine learning algorithm that can be used to predict future values based on past data points. It is typically used on stationary data, which means that the statistical properties of the data do not change over time. However, it is possible to use linear regression on non-stationary data, with some modifications. The first step is to stationarize the data, which can be done by detrending or differencing the data. Once the data is stationarized, linear regression can be used as usual. However, it is important to keep in mind that the predictions may not be as accurate as they would be if the data were stationary.

Linear regression is a machine learning algorithm that is often used for forecasting. However, it is important to note that standard linear regression assumes stationary data, meaning the data should be free of trend and seasonality; if the data is not stationary, the forecast will be inaccurate. There are various ways to stationarize data, such as differencing or using a moving average. Once the data is stationarized, linear regression can be used to generate forecasts. If the data cannot be made stationary, another model, such as an ARIMA model, should be used instead.

Top 75 Data Science Youtube channel

1- Alex The Analyst
2- Tina Huang
3- Abhishek Thakur
4- Michael Galarnyk
5- How to Get an Analytics Job
6- Ken Jee
7- Data Professor
8- Nicholas Renotte
9- KNN Clips
10- Ternary Data: Data Engineering Consulting
11- AI Basics with Mike
12- Matt Brattin
13- Chronic Coder
14- Intersnacktional
15- Jenny Tumay
16- Coding Professor
17- DataTalksClub
18- Ken’s Nearest Neighbors Podcast
19- Karolina Sowinska
20- Lander Analytics
21- Lights OnData
22- CodeEmporium
23- Andreas Mueller
24- Nate at StrataScratch
25- Kaggle
26- Data Interview Pro
27- Jordan Harrod
28- Leo Isikdogan
29- Jacob Amaral
30- Bukola
31- AndrewMoMoney
32- Andreas Kretz
33- Python Programmer
34- Machine Learning with Phil
35- Art of Visualization
36- Machine Learning University
AWS Data analytics DAS-C01 Exam Prep

AWS Data analytics DAS-C01 on iOS pro

AWS Data analytics DAS-C01 on Android

AWS Data analytics DAS-C01 on Microsoft Windows 10/11:   


Machine Learning Engineer Interview Questions and Answers

What are some good datasets for Data Science and Machine Learning?

Big Data and Data Analytics 101 – Top 100 AWS Certified Data Analytics Specialty Certification Questions and Answers Dumps

AWS Certified Security – Specialty Questions and Answers Dumps


Top 100 AWS Certified Data Analytics Specialty Certification Questions and Answers Dumps

 

If you’re looking to take your data analytics career to the next level, then this AWS Data Analytics Specialty Certification Exam Preparation blog is a must-read! With over 100 exam questions and answers, plus data science and data analytics interview questions, cheat sheets and more, you’ll be fully prepared to ace the DAS-C01 exam. 

In this blog, we talk about big data and data analytics; we also give you the last updated top 100 AWS Certified Data Analytics – Specialty Questions and Answers Dumps

AWS Data analytics DAS-C01 Exam Prep

The AWS Certified Data Analytics – Specialty (DAS-C01) examination is intended for individuals who perform in a data analytics-focused role. This exam validates an examinee’s comprehensive understanding of using AWS services to design, build, secure, and maintain analytics solutions that provide insight from data.

Download the App for an interactive experience:

AWS DAS-C01 Exam Prep on iOS


AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows



The AWS Certified Data Analytics – Specialty (DAS-C01) covers the following domains:

Domain 1: Collection 18%

Domain 2: Storage and Data Management 22%

Domain 3: Processing 24%


Domain 4: Analysis and Visualization 18%

Domain 5: Security 18%

data analytics specialty

Below are the Top 100 AWS Certified Data Analytics – Specialty Questions and Answers Dumps and References

https://enoumen.com/2021/11/07/top-100-data-science-and-data-analytics-interview-questions-and-answers/

 
 

Question 1: What combination of services do you need for the following requirements: accelerate petabyte-scale data transfers, load streaming data, and the ability to create scalable, private connections. Select the correct answer order.

A) Snowball, Kinesis Firehose, Direct Connect

B) Data Migration Services, Kinesis Firehose, Direct Connect

C) Snowball, Data Migration Services, Direct Connect

D) Snowball, Direct Connection, Kinesis Firehose

ANSWER1:

A

Notes/Hint1:

AWS has many options to help get data into the cloud, including secure devices like AWS Import/Export Snowball to accelerate petabyte-scale data transfers, Amazon Kinesis Firehose to load streaming data, and scalable private connections through AWS Direct Connect.


Reference1: Big Data Analytics Options 

AWS Data Analytics Specialty Certification Exam Preparation App is a great way to prepare for your upcoming AWS Data Analytics Specialty Certification Exam. The app provides you with over 300 questions and answers, detailed explanations of each answer, a scorecard to track your progress, and a countdown timer to help keep you on track. You can also find data science and data analytics interview questions and detailed answers, cheat sheets, and flashcards to help you study. The app is very similar to the real exam, so you will be well-prepared when it comes time to take the test.

AWS Data analytics DAS-C01 Exam Prep

 


ANSWER2:

C

Notes/Hint2:

Reference2: Relationalize PySpark

 

Question 3: There is a five-day car rally race across Europe. The race coordinators are using a Kinesis stream and IoT sensors to monitor the movement of the cars. Each car has a sensor and data is getting back to the stream with the default stream settings. On the last day of the rally, data is sent to S3. When you go to interpret the data in S3, there is only data for the last day and nothing for the first 4 days. Which of the following is the most probable cause of this?

A) You did not have versioning enabled and would need to create individual buckets to prevent the data from being overwritten.

B) Data records are only accessible for a default of 24 hours from the time they are added to a stream.

C) One of the sensors failed, so there was no data to record.

D) You needed to use EMR to send the data to S3; Kinesis Streams are only compatible with DynamoDB.

ANSWER3:

B

Notes/Hint3: 

Streams support changes to the data record retention period of your stream. An Amazon Kinesis stream is an ordered sequence of data records, meant to be written to and read from in real-time. Data records are therefore stored in shards in your stream temporarily. The period from when a record is added to when it is no longer accessible is called the retention period. An Amazon Kinesis stream stores records for 24 hours by default, up to 168 hours.

Reference3: Kinesis Extended Reading

AWS Data analytics DAS-C01 Exam Prep

 

Question 4:  A publisher website captures user activity and sends clickstream data to Amazon Kinesis Data Streams. The publisher wants to design a cost-effective solution to process the data to create a timeline of user activity within a session. The solution must be able to scale depending on the number of active sessions.
Which solution meets these requirements?

A) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on Amazon EC2 instances in an EC2 Auto Scaling group.

B) Include a variable in the clickstream to maintain a counter for each user action during their session. Use the action type as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the
counter. Deploy the consumer application on AWS Lambda.

C) Include a session identifier in the clickstream data from the publisher website and use as the partition key for the stream. Use the Kinesis Client Library (KCL) in the consumer application to retrieve the data from the stream and perform the processing. Deploy the consumer application on Amazon EC2 instances in an
EC2 Auto Scaling group. Use an AWS Lambda function to reshard the stream based upon Amazon CloudWatch alarms.

D) Include a variable in the clickstream data from the publisher website to maintain a counter for the number of active user sessions. Use a timestamp for the partition key for the stream. Configure the consumer application to read the data from the stream and change the number of processor threads based upon the counter. Deploy the consumer application on AWS Lambda.

ANSWER4:

C

Notes/Hint4: 

Partitioning by the session ID will allow a single processor to process all the actions for a user session in order. An AWS Lambda function can call the UpdateShardCount API action to change the number of shards in the stream. The KCL will automatically manage the number of processors to match the number of shards. Amazon EC2 Auto Scaling will assure the correct number of instances are running to meet the processing load.

Reference4: UpdateShardCount API

 

Question 5: Your company has two batch processing applications that consume financial data about the day’s stock transactions. Each transaction needs to be stored durably and guarantee that a record of each application is delivered so the audit and billing batch processing applications can process the data. However, the two applications run separately and several hours apart and need access to the same transaction information. After reviewing the transaction information for the day, the information no longer needs to be stored. What is the best way to architect this application?

A) Use SQS for storing the transaction messages; the billing batch process runs first and consumes the messages, but the code is written so it does not delete each message after it is consumed, leaving it available for the audit application several hours later. The audit application can consume the SQS message and remove it from the queue when completed.

B)  Use Kinesis to store the transaction information. The billing application will consume data from the stream and the audit application can consume the same data several hours later.

C) Store the transaction information in a DynamoDB table. The billing application can read the rows while the audit application will read the rows then remove the data.

D) Use SQS for storing the transaction messages. When the billing batch process consumes each message, have the application create an identical message and place it in a different SQS for the audit application to use several hours later.


ANSWER5:

B

Notes/Hint5: 

Kinesis is the best solution here because it allows multiple consumers to read the same records independently: the billing application and the audit application can each consume the same data several hours apart. The default 24-hour retention also fits the requirement that the data does not need to persist beyond the day, whereas an SQS-based design would make this more difficult.

Reference5: Amazon Kinesis

Get mobile friendly version of the quiz @ the App Store

AWS DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Question 6: A company is currently using Amazon DynamoDB as the database for a user support application. The company is developing a new version of the application that will store a PDF file for each support case ranging in size from 1–10 MB. The file should be retrievable whenever the case is accessed in the application.
How can the company store the file in the MOST cost-effective manner?

A) Store the file in Amazon DocumentDB and the document ID as an attribute in the DynamoDB table.

B) Store the file in Amazon S3 and the object key as an attribute in the DynamoDB table.

C) Split the file into smaller parts and store the parts as multiple items in a separate DynamoDB table.

D) Store the file as an attribute in the DynamoDB table using Base64 encoding.

ANSWER6:

B

Notes/Hint6: 

Use Amazon S3 to store large attribute values that cannot fit in an Amazon DynamoDB item. Store each file as an object in Amazon S3 and then store the object path in the DynamoDB item.

Reference6: S3 Storage Cost – DynamoDB Storage Cost
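A minimal boto3 sketch of this S3-plus-DynamoDB pattern; the bucket name, table name, and case ID are hypothetical.

```python
import boto3

# Hypothetical bucket, table, and case ID used for illustration only.
BUCKET = "support-case-files"
TABLE = "SupportCases"
case_id = "case-12345"

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Store the PDF as an S3 object...
object_key = f"cases/{case_id}/report.pdf"
s3.upload_file("report.pdf", BUCKET, object_key)

# ...and keep only the object key as an attribute on the DynamoDB item.
dynamodb.Table(TABLE).put_item(
    Item={"CaseId": case_id, "Status": "OPEN", "PdfObjectKey": object_key}
)
```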

 

Question 7: Your client has a web app that emits multiple events to Amazon Kinesis Streams for reporting purposes. Critical events need to be immediately captured before processing can continue, but informational events do not need to delay processing. What solution should your client use to record these types of events without unnecessarily slowing the application?

A) Log all events using the Kinesis Producer Library.

B) Log critical events using the Kinesis Producer Library, and log informational events using the PutRecords API method.

C) Log critical events using the PutRecords API method, and log informational events using the Kinesis Producer Library.

D) Log all events using the PutRecords API method.

ANSWER7:

C

Notes/Hint7: 

The PutRecords API call is synchronous: the application waits for the request to complete before continuing, which is what you want when critical events must be logged before processing continues. The Kinesis Producer Library is asynchronous and can batch and send many messages without slowing down your application, which makes the KPL ideal for sending the non-critical informational events.

Reference7: PutRecords API
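For illustration, a hedged boto3 sketch of the synchronous PutRecords path for critical events; the stream name and event payloads are made up.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name and critical events, for illustration only.
critical_events = [
    {"type": "payment_failed", "user": "u-1"},
    {"type": "fraud_alert", "user": "u-2"},
]

# put_records is a synchronous API call: the application blocks until
# Kinesis acknowledges the batch, which suits critical events.
response = kinesis.put_records(
    StreamName="reporting-events",
    Records=[
        {"Data": json.dumps(e).encode("utf-8"), "PartitionKey": e["user"]}
        for e in critical_events
    ],
)
print("Failed records:", response["FailedRecordCount"])
```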

AWS Data analytics DAS-C01 Exam Prep

 

Question 8: You work for a start-up that tracks commercial delivery trucks via GPS. You receive coordinates that are transmitted from each delivery truck once every 6 seconds. You need to process these coordinates in near real-time from multiple sources and load them into Elasticsearch without significant technical overhead to maintain. Which tool should you use to ingest the data?

A) Amazon SQS

B) Amazon EMR

C) AWS Data Pipeline

D) Amazon Kinesis Firehose

ANSWER8:

D

Notes/Hint8: 

Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards.

Reference8: Amazon Kinesis Firehose
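A minimal boto3 sketch of pushing one GPS reading into a Firehose delivery stream; the delivery stream name and record fields are hypothetical.

```python
import json
import boto3

firehose = boto3.client("firehose")

# Hypothetical delivery stream name and GPS reading, for illustration only.
reading = {"truck_id": "T-42", "lat": 47.61, "lon": -122.33, "ts": 1700000000}

# Firehose buffers incoming records and delivers them to the configured
# destination (for example an Amazon Elasticsearch Service domain).
firehose.put_record(
    DeliveryStreamName="truck-gps-stream",
    Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
)
```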

 

Question 9: A company needs to implement a near-real-time fraud prevention feature for its ecommerce site. User and order details need to be delivered to an Amazon SageMaker endpoint to flag suspected fraud. The amount of input data needed for the inference could be as much as 1.5 MB.
Which solution meets the requirements with the LOWEST overall latency?

A) Create an Amazon Managed Streaming for Kafka cluster and ingest the data for each order into a topic. Use a Kafka consumer running on Amazon EC2 instances to read these messages and invoke the Amazon SageMaker endpoint.

B) Create an Amazon Kinesis Data Streams stream and ingest the data for each order into the stream. Create an AWS Lambda function to read these messages and invoke the Amazon SageMaker endpoint.

C) Create an Amazon Kinesis Data Firehose delivery stream and ingest the data for each order into the stream. Configure Kinesis Data Firehose to deliver the data to an Amazon S3 bucket. Trigger an AWS Lambda function with an S3 event notification to read the data and invoke the Amazon SageMaker endpoint.

D) Create an Amazon SNS topic and publish the data for each order to the topic. Subscribe the Amazon SageMaker endpoint to the SNS topic.


ANSWER9:

A

Notes/Hint9: 

An Amazon Managed Streaming for Kafka cluster can be used to deliver the messages with very low latency. It has a configurable message size that can handle the 1.5 MB payload.

Reference9: Amazon Managed Streaming for Kafka cluster

 

Question 10: You need to filter and transform incoming messages coming from a smart sensor you have connected with AWS. Once messages are received, you need to store them as time series data in DynamoDB. Which AWS service can you use?

A) IoT Device Shadow Service

B) Redshift

C) Kinesis

D) IoT Rules Engine

ANSWER10:

D

Notes/Hint10: 

The IoT rules engine lets you filter and transform incoming device messages and route the data to AWS services such as DynamoDB.

Reference10: The IoT rules engine

Get mobile friendly version of the quiz @ the App Store

Question 11: A media company is migrating its on-premises legacy Hadoop cluster with its associated data processing scripts and workflow to an Amazon EMR environment running the latest Hadoop release. The developers want to reuse the Java code that was written for data processing jobs for the on-premises cluster.
Which approach meets these requirements?

A) Deploy the existing Oracle Java Archive as a custom bootstrap action and run the job on the EMR cluster.

B) Compile the Java program for the desired Hadoop version and run it using a CUSTOM_JAR step on the EMR cluster.

C) Submit the Java program as an Apache Hive or Apache Spark step for the EMR cluster.

D) Use SSH to connect the master node of the EMR cluster and submit the Java program using the AWS CLI.


ANSWER11:

B

Notes/Hint11: 

A CUSTOM_JAR step can be configured to download a JAR file from an Amazon S3 bucket and execute it. Because the Hadoop versions differ, the Java application has to be recompiled for the target version.

Reference11:  Automating analytics workflows on EMR
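A hedged boto3 sketch of submitting a recompiled JAR as a CUSTOM_JAR step; the cluster ID, JAR path, and arguments are placeholders.

```python
import boto3

emr = boto3.client("emr")

# Hypothetical cluster ID and JAR location used for illustration only.
response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLECLUSTER",
    Steps=[
        {
            "Name": "recompiled-data-processing-job",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "s3://my-bucket/jars/data-processing.jar",
                "Args": ["arg1", "arg2"],
            },
        }
    ],
)
print("Step IDs:", response["StepIds"])
```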

Question 12: You currently have databases running on-site and in another data center off-site. What service allows you to consolidate to one database in Amazon?

A) AWS Kinesis

B) AWS Database Migration Service

C) AWS Data Pipeline

D) AWS RDS Aurora

ANSWER12:

B

Notes/Hint12: 

AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database.

Reference12: DMS

 

 

Question 13:  An online retail company wants to perform analytics on data in large Amazon S3 objects using Amazon EMR. An Apache Spark job repeatedly queries the same data to populate an analytics dashboard. The analytics team wants to minimize the time to load the data and create the dashboard.
Which approaches could improve the performance? (Select TWO.)


A) Copy the source data into Amazon Redshift and rewrite the Apache Spark code to create analytical reports by querying Amazon Redshift.

B) Copy the source data from Amazon S3 into Hadoop Distributed File System (HDFS) using s3distcp.

C) Load the data into Spark DataFrames.

D) Stream the data into Amazon Kinesis and use the Kinesis Connector Library (KCL) in multiple Spark jobs to perform analytical jobs.

E) Use Amazon S3 Select to retrieve the data necessary for the dashboards from the S3 objects.

ANSWER13:

C and E

Notes/Hint13: 

One of the speed advantages of Apache Spark comes from loading data into immutable DataFrames, which can be accessed repeatedly in memory. Spark DataFrames organize distributed data into columns, which makes summaries and aggregates much quicker to calculate. In addition, instead of loading an entire large Amazon S3 object, load only what is needed using Amazon S3 Select. Keeping the data in Amazon S3 avoids loading the large dataset into HDFS.

Reference13: Spark DataFrames 
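A minimal boto3 sketch of Amazon S3 Select pulling only the needed columns and rows out of a large CSV object; the bucket, key, and column names are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, key, and columns used for illustration only.
response = s3.select_object_content(
    Bucket="analytics-raw",
    Key="orders/2023/orders.csv",
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM s3object s WHERE s.region = 'EU'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; only rows that match the filter are
# transferred, instead of the whole object.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```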

 

Question 14: You have been hired as a consultant to provide a solution to integrate a client’s on-premises data center to AWS. The customer requires a 300 Mbps dedicated, private connection to their VPC. Which AWS tool do you need?

A) VPC peering

B) Data Pipeline

C) Direct Connect

D) EMR

ANSWER14:

C

Notes/Hint14: 

Direct Connect will provide a dedicated and private connection to an AWS VPC.

Reference14: Direct Connect

AWS Data analytics DAS-C01 Exam Prep

 

Question 15: Your organization has a variety of different services deployed on EC2 and needs to efficiently send application logs over to a central system for processing and analysis. They’ve determined it is best to use a managed AWS service to transfer their data from the EC2 instances into Amazon S3 and they’ve decided to use a solution that will do what?

A) Installs the AWS Direct Connect client on all EC2 instances and uses it to stream the data directly to S3.

B) Leverages the Kinesis Agent to send data to Kinesis Data Streams and output that data in S3.

C) Ingests the data directly from S3 by configuring regular Amazon Snowball transactions.

D) Leverages the Kinesis Agent to send data to Kinesis Firehose and output that data in S3.

ANSWER15:

D

Notes/Hint15: 

Kinesis Firehose is a managed solution, and log files can be sent from EC2 to Firehose to S3 using the Kinesis agent.

Reference15: Kinesis Firehose

 

Question 16: A data engineer needs to create a dashboard to display social media trends during the last hour of a large company event. The dashboard needs to display the associated metrics with a latency of less than 1 minute.
Which solution meets these requirements?

A) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Use Kinesis Data Analytics for SQL Applications to perform a sliding window analysis to compute the metrics and output the results to a Kinesis Data Streams data stream. Configure an AWS Lambda function to save the stream data to an Amazon DynamoDB table. Deploy a real-time dashboard hosted in an Amazon S3 bucket to read and display the metrics data stored in the DynamoDB table.

B) Publish the raw social media data to an Amazon Kinesis Data Firehose delivery stream. Configure the stream to deliver the data to an Amazon Elasticsearch Service cluster with a buffer interval of 0 seconds. Use Kibana to perform the analysis and display the results.

C) Publish the raw social media data to an Amazon Kinesis Data Streams data stream. Configure an AWS Lambda function to compute the metrics on the stream data and save the results in an Amazon S3 bucket. Configure a dashboard in Amazon QuickSight to query the data using Amazon Athena and display the results.

D) Publish the raw social media data to an Amazon SNS topic. Subscribe an Amazon SQS queue to the topic. Configure Amazon EC2 instances as workers to poll the queue, compute the metrics, and save the results to an Amazon Aurora MySQL database. Configure a dashboard in Amazon QuickSight to query the data in Aurora and display the results.


ANSWER16:

A

Notes/Hint16: 

Amazon Kinesis Data Analytics can query data in a Kinesis Data Firehose delivery stream in near-real time using SQL. A sliding window analysis is appropriate for determining trends in the stream. Amazon S3 can host a static webpage that includes JavaScript that reads the data in Amazon DynamoDB and refreshes the dashboard.

Reference16: Amazon Kinesis Data Analytics can query data in a Kinesis Data Firehose delivery stream in near-real time using SQL

 

Question 17: A real estate company is receiving new property listing data from its agents through .csv files every day and storing these files in Amazon S3. The data analytics team created an Amazon QuickSight visualization report that uses a dataset imported from the S3 files. The data analytics team wants the visualization report to reflect the current data up to the previous day. How can a data analyst meet these requirements?

A) Schedule an AWS Lambda function to drop and re-create the dataset daily.

B) Configure the visualization to query the data in Amazon S3 directly without loading the data into SPICE.

C) Schedule the dataset to refresh daily.

D) Close and open the Amazon QuickSight visualization.

ANSWER17:

C

Notes/Hint17:

Datasets created using Amazon S3 as the data source are automatically imported into SPICE. The Amazon QuickSight console allows for the refresh of SPICE data on a schedule.

Reference17: Amazon QuickSight and SPICE

AWS Data analytics DAS-C01 Exam Prep

Question 18: You need to migrate data to AWS. It is estimated that the data transfer will take over a month via the current AWS Direct Connect connection your company has set up. Which AWS tool should you use?

A) Establish additional Direct Connect connections.

B) Use Data Pipeline to migrate the data in bulk to S3.

C) Use Kinesis Firehose to stream all new and existing data into S3.

D) Snowball

ANSWER18:

D

Notes/Hint18:

As a general rule, if it takes more than one week to upload your data to AWS using the spare capacity of your existing Internet connection, then you should consider using Snowball. For example, if you have a 100 Mb connection that you can solely dedicate to transferring your data and need to transfer 100 TB of data, it takes more than 100 days to complete a data transfer over that connection. You can make the same transfer by using multiple Snowballs in about a week.

Reference18: Snowball
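The rule of thumb above is just arithmetic; here is a small Python sketch of the estimate, assuming roughly 80% effective utilization of the link.

```python
# Back-of-the-envelope transfer-time estimate behind the rule of thumb above.
link_mbps = 100          # dedicated bandwidth in megabits per second
data_tb = 100            # data to migrate in terabytes
utilization = 0.8        # assume ~80% effective throughput on the link

data_megabits = data_tb * 8 * 1_000_000        # 1 TB ~= 8,000,000 megabits
seconds = data_megabits / (link_mbps * utilization)
print(f"~{seconds / 86_400:.0f} days")         # roughly 116 days -> use Snowball
```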

 

Question 19: You currently have an on-premises Oracle database and have decided to leverage AWS and use Aurora. You need to do this as quickly as possible. How do you achieve this?

A) It is not possible to migrate an on-premises database to AWS at this time.

B) Use AWS Data Pipeline to create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switchover of your production environment to the new database once the target database is caught up with the source database.

C) Use AWS Database Migration Services and create a target database, migrate the database schema, set up the data replication process, initiate the full load and a subsequent change data capture and apply, and conclude with a switch-over of your production environment to the new database once the target database is caught up with the source database.

D) Use AWS Glue to crawl the on-premises database schemas and then migrate them into AWS with Data Pipeline jobs.


ANSWER19:

C

Notes/Hint19: 

DMS can efficiently support this sort of migration using the steps outlined. While AWS Glue can help you crawl schemas and store metadata on them inside of Glue for later use, it isn’t the best tool for actually transitioning a database over to AWS itself. Similarly, while Data Pipeline is great for ETL and ELT jobs, it isn’t the best option to migrate a database over to AWS.

Reference19: DMS – https://aws.amazon.com/dms/faqs/

 

Question 20: A financial company uses Amazon EMR for its analytics workloads. During the company’s annual security audit, the security team determined that none of the EMR clusters’ root volumes are encrypted. The security team recommends the company encrypt its EMR clusters’ root volume as soon as possible.
Which solution would meet these requirements?

A) Enable at-rest encryption for EMR File System (EMRFS) data in Amazon S3 in a security configuration. Re-create the cluster using the newly created security configuration.

B) Specify local disk encryption in a security configuration. Re-create the cluster using the newly created security configuration.

C) Detach the Amazon EBS volumes from the master node. Encrypt the EBS volume and attach it back to the master node.

D) Re-create the EMR cluster with LZO encryption enabled on all volumes.

ANSWER20:

B

Notes/Hint20: 

Local disk encryption can be enabled as part of a security configuration to encrypt root and storage volumes.

Reference20: EMR Cluster Local disk encryption
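An illustrative boto3 sketch of creating such a security configuration and re-creating the cluster with it; the configuration JSON keys and KMS key ARN are assumptions to verify against the EMR security configuration documentation.

```python
import json
import boto3

emr = boto3.client("emr")

# Illustrative security configuration enabling local disk (root and storage
# volume) encryption with a hypothetical KMS key.
security_conf = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": False,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
            }
        },
    }
}

emr.create_security_configuration(
    Name="local-disk-encryption",
    SecurityConfiguration=json.dumps(security_conf),
)
# Then re-create the cluster with
# run_job_flow(..., SecurityConfiguration="local-disk-encryption").
```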

Question 21: A company has a clickstream analytics solution using Amazon Elasticsearch Service. The solution ingests 2 TB of data from Amazon Kinesis Data Firehose and stores the latest data collected within 24 hours in an Amazon ES cluster. The cluster is running on a single index that has 12 data nodes and 3 dedicated master nodes. The cluster is configured with 3,000 shards and each node has 3 TB of EBS storage attached. The Data Analyst noticed that the query performance of Elasticsearch is sluggish, and some intermittent errors are produced by the Kinesis Data Firehose when it tries to write to the index. Upon further investigation, there were occasional JVMMemoryPressure errors found in Amazon ES logs.

What should be done to improve the performance of the Amazon Elasticsearch Service cluster?

A) Improve the cluster performance by increasing the number of master nodes of Amazon Elasticsearch.
 
B) Improve the cluster performance by increasing the number of shards of the Amazon Elasticsearch index.
       
C) Improve the cluster performance by decreasing the number of data nodes of Amazon Elasticsearch.
 
D) Improve the cluster performance by decreasing the number of shards of the Amazon Elasticsearch index.
 
ANSWER21:
D
 
Notes/Hint21:
Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. With Amazon ES, you get direct access to the Elasticsearch APIs; existing code and applications work seamlessly with the service.
 
Each Elasticsearch index is split into some number of shards. You should decide the shard count before indexing your first document. The overarching goal when choosing a number of shards is to distribute an index evenly across all data nodes in the cluster. However, these shards shouldn’t be too large or too numerous.
 
A good rule of thumb is to keep the shard size between 10–50 GiB. Large shards can make it difficult for Elasticsearch to recover from failure, but because each shard uses some amount of CPU and memory, having too many small shards can cause performance issues and out-of-memory errors. In other words, shards should be small enough that the underlying Amazon ES instance can handle them, but not so small that they place needless strain on the hardware. Therefore the correct answer is: Improve the cluster performance by decreasing the number of shards of the Amazon Elasticsearch index.
 
Reference21: Elasticsearch
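A quick Python sketch of the shard-sizing arithmetic implied by the 10–50 GiB rule of thumb; the index size, target shard size, and replica count are assumptions.

```python
# Rough shard-count check using the 10-50 GiB per-shard rule of thumb above.
index_size_gib = 2 * 1024        # ~2 TB of daily clickstream data
target_shard_gib = 30            # aim for the middle of the 10-50 GiB range
replica_count = 1                # assume one replica per primary shard

primary_shards = max(1, round(index_size_gib / target_shard_gib))
total_shards = primary_shards * (1 + replica_count)
print(primary_shards, "primary shards,", total_shards, "total")  # ~68 primaries, ~136 total
```

This is far fewer than the 3,000 shards in the scenario, which is why decreasing the shard count relieves the memory pressure.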
 

Question 22: A data lake is a central repository that enables which operation?

 
A) Store unstructured data from a single data source
 
B) Store structured data from any data source
 
C)  Store structured and unstructured data from any source
 
D) Store structured and unstructured data from a single source
 
ANSWER22:
C
 
Notes/Hint22:
A data lake is a centralized repository for large amounts of structured and unstructured data, enabling direct analytics.
 
 
Reference: Data Lakes
 
 

Question 23: What is the most cost-effective storage option for your data lake?

 
A) Amazon EBS
 
B) Amazon S3
 
C) Amazon RDS
 
D) Amazon Redshift
 
ANSWER23:
B
 
 
Notes/Hint23:
Amazon S3
 

Question 24: Which services are used in the processing layer of a data lake architecture? (SELECT TWO)

 
A. AWS Snowball
 
B. AWS Glue
 
C. Amazon EMR
 
D. Amazon QuickSight
 
ANSWER24:
B and C
 
 
Notes/Hint24:
AWS Glue and Amazon EMR
 

Question 25: Which services can be used for data ingestion into your data lake? (SELECT TWO)

A) Amazon Kinesis Data Firehose

B) Amazon QuickSight

C) Amazon Athena

D) AWS Storage Gateway

ANSWER25:
A and D
 
 
Notes/Hint25:
Amazon Kinesis Data Firehose and AWS Storage Gateway
 
Reference: Data Lakes
 

Question 26: Which service uses continuous data replication with high availability to consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3?

A) AWS Storage Gateway

B) AWS Schema Conversion Tool

C) AWS Database Migration Service

D) Amazon Kinesis Data Firehose

ANSWER26:
C
 
 
Notes/Hint26:
AWS Database Migration Service
 
Reference: Data Lakes
 

Question 27: What is the AWS Glue Data Catalog?

A) A fully managed ETL (extract, transform, and load) pipeline service

B) A service to schedule jobs

C) A visual data preparation tool

D) An index to the location, schema, and runtime metrics of your data

ANSWER27:
D
 
 
Notes/Hint27:
An index to the location, schema, and runtime metrics of your data
 
Reference: Data Lakes
 

Question 28: What AWS Glue feature “catalogs” your data?

A) AWS Glue crawler

B) AWS Glue DataBrew

C) AWS Glue Studio

D) AWS Glue Elastic Views

ANSWER28:
A
 
 
Notes/Hint28:
AWS Glue crawler
 
Reference: Data Lakes
 

Question 29: During your data preparation stage, the raw data has been enriched to support additional insights. You need to improve query performance and reduce costs of the final analytics solution.

Which data formats meet these requirements? (SELECT TWO)

ANSWER29:
C and D
 
 
Notes/Hint29:
Apache Parquet and Apache ORC
Reference: Data Lakes
 

Question 30: Your small start-up company is developing a data analytics solution. You need to clean and normalize large datasets, but you do not have developers with the skill set to write custom scripts. Which tool will help you efficiently design and run the data preparation activities?

ANSWER30:
B
 
 
Notes/Hint30:
AWS Glue DataBrew
To be able to run analytics, build reports, or apply machine learning, you need to be sure the data you’re using is clean and in the right format. This data preparation step requires data analysts and data scientists to write custom code and perform many manual activities. When cleaning and normalizing data, it is helpful to first review the dataset to understand which possible values are present. Simple visualizations are helpful for determining whether correlations exist between the columns.
 
AWS Glue DataBrew is a visual data preparation tool that helps you clean and normalize data up to 80% faster so you can focus more on the business value you can get. DataBrew provides a visual interface that quickly connects to your data stored in Amazon S3, Amazon Redshift, Amazon Relational Database Service (RDS), any JDBC-accessible data store, or data indexed by the AWS Glue Data Catalog. You can then explore the data, look for patterns, and apply transformations. For example, you can apply joins and pivots, merge different datasets, or use functions to manipulate data.
Reference: Data Lakes
 

Question 30: In which scenario would you use AWS Glue jobs?

A) Analyze data in real-time as data comes into the data lake

B) Transform data in real-time as data comes into the data lake

C) Analyze data in batches on schedule or on demand

D) Transform data in batches on schedule or on demand.

ANSWER30:
D
 
 
Notes/Hint30:
An AWS Glue job encapsulates a script that connects to your source data, processes it, and then writes it out to your data target. Typically, a job runs extract, transform, and load (ETL) scripts. Jobs can also run general-purpose Python scripts (Python shell jobs). AWS Glue triggers can start jobs based on a schedule or event, or on demand. You can monitor job runs to understand runtime metrics such as completion status, duration, and start time.
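A minimal boto3 sketch of defining and starting such a batch job; the job name, IAM role, and script location are hypothetical.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical job name, IAM role, and script location, for illustration only.
glue.create_job(
    Name="daily-clickstream-etl",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-etl-scripts/clickstream_etl.py",
        "PythonVersion": "3",
    },
)

# Start the batch job on demand (a schedule or event trigger could do this instead).
run = glue.start_job_run(JobName="daily-clickstream-etl")
print("Run ID:", run["JobRunId"])
```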

Question 31: Your data resides in multiple data stores, including Amazon S3, Amazon RDS, and Amazon DynamoDB. You need to efficiently query the combined datasets.

Which tool can achieve this, using a single query, without moving data?

A) Amazon Athena Federated Query

B) Amazon Redshift Query Editor

C) SQL Workbench

D) AWS Glue DataBrew

ANSWER31:
A
 
 
Notes/Hint31:
With Amazon Athena Federated Query, you can run SQL queries across a variety of relational, non-relational, and custom data sources. You get a unified way to run SQL queries across various data stores. 
 
Athena uses data source connectors that run on AWS Lambda to run federated queries. A data source connector is a piece of code that can translate between your target data source and Athena. You can think of a connector as an extension of Athena’s query engine. Pre-built Athena data source connectors exist for data sources like Amazon CloudWatch Logs, Amazon DynamoDB, Amazon DocumentDB, Amazon RDS, and JDBC-compliant relational data sources such as MySQL and PostgreSQL under the Apache 2.0 license. You can also use the Athena Query Federation SDK to write custom connectors. To choose, configure, and deploy a data source connector to your account, you can use the Athena and Lambda consoles or the AWS Serverless Application Repository. After you deploy data source connectors, each connector is associated with a catalog that you can specify in SQL queries. You can combine SQL statements from multiple catalogs and span multiple data sources with a single query.
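For illustration, a hedged boto3 sketch of submitting one federated query that spans two catalogs; the catalog, database, and table names and the output location are assumptions, and the corresponding connectors would need to be deployed first.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical catalogs ("dynamo_catalog", "rds_catalog") backed by Lambda data
# source connectors, plus a hypothetical S3 output location for results.
query = """
SELECT o.order_id, o.total, c.email
FROM   "dynamo_catalog"."default"."orders" AS o
JOIN   "rds_catalog"."crm"."customers"     AS c
ON     o.customer_id = c.customer_id
"""

response = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```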
 

Question 32: Which benefit do you achieve by using AWS Lake Formation to build data lakes?

A) Build data lakes quickly

B) Simplify security management

C) Provide self-service access to data

D) All of the above

ANSWER32:
D
 
 
Notes/Hint32:
Build data lakes quickly
With Lake Formation, you can move, store, catalog, and clean your data faster. You simply point Lake Formation at your data sources, and Lake Formation crawls those sources and moves the data into your new Amazon S3 data lake. Lake Formation organizes data in S3 around frequently used query terms and into right-sized chunks to increase efficiency. Lake Formation also changes data into formats like Apache Parquet and ORC for faster analytics. In addition, Lake Formation has built-in machine learning to deduplicate and find matching records (two entries that refer to the same thing) to increase data quality.
 
Simplify security management
You can use Lake Formation to centrally define security, governance, and auditing policies in one place, versus doing these tasks per service. You can then enforce those policies for your users across their analytics applications. Your policies are consistently implemented, eliminating the need to manually configure them across security services like AWS Identity and Access Management (AWS IAM) and AWS Key Management Service (AWS KMS), storage services like Amazon S3, and analytics and machine learning services like Amazon Redshift, Amazon Athena, and (in beta) Amazon EMR for Apache Spark. This reduces the effort in configuring policies across services and provides consistent enforcement and compliance.
 
Provide self-service access to data
With Lake Formation, you build a data catalog that describes the different available datasets along with which groups of users have access to each. This makes your users more productive by helping them find the right dataset to analyze. By providing a catalog of your data with consistent security enforcement, Lake Formation makes it easier for your analysts and data scientists to use their preferred analytics service. They can use Amazon EMR for Apache Spark (in beta), Amazon Redshift, or Amazon Athena on diverse datasets that are now housed in a single data lake. Users can also combine these services without having to move data between silos.
 
 

Question 33: What are the three stages to set up a data lake using AWS Lake Formation? (SELECT THREE)

A) Register the storage location
B) Create a database
C) Populate the database
D) Grant permissions
 
ANSWER33:
A B and D
 
 
Notes/Hint33:
Register the storage location
Lake Formation manages access to designated storage locations within Amazon S3. Register the storage locations that you want to be part of the data lake.
 
Create a database
Lake Formation organizes data into a catalog of logical databases and tables. Create one or more databases and then automatically generate tables during data ingestion for common workflows.
 
Grant permissions
Lake Formation manages access for IAM users, roles, and Active Directory users and groups via flexible database, table, and column permissions. Grant permissions to one or more resources for your selected users.
 
 
 
Question 34: Which of the following AWS Lake Formation tasks are performed by the AWS Glue service? (SELECT THREE)
 
A) ETL code creation and job monitoring
B) Blueprints to create workflows
C) Data catalog and serverless architecture
D) Simplify security management
 
ANSWER34:
A B and C
 
 
Notes/Hint34:
Lake Formation leverages a shared infrastructure with AWS Glue, including console controls, ETL code creation and job monitoring, blueprints to create workflows for data ingest, the same Data Catalog, and a serverless architecture. While AWS Glue focuses on these types of functions, Lake Formation encompasses all AWS Glue features and provides additional capabilities designed to help build, secure, and manage a data lake. See the AWS Glue features page for more details.
 
 

Question 35:  A digital media customer needs to quickly build a data lake solution for the data housed in a PostgreSQL database. As a solutions architect, what service and feature would meet this requirement?

 
A) Copy PostgreSQL data to an Amazon S3 bucket and build a data lake using AWS Lake Formation
B) Use AWS Lake Formation blueprints
C) Build a data lake manually
D) Build an analytics solution by directly accessing the database.
 
ANSWER35:
B
 
 
Notes/Hint35:
A blueprint is a data management template that enables you to easily ingest data into a data lake. Lake Formation provides several blueprints, each for a predefined source type, such as a relational database or AWS CloudTrail logs. From a blueprint, you can create a workflow. Workflows consist of AWS Glue crawlers, jobs, and triggers that are generated to orchestrate the loading and update of data. Blueprints take the data source, data target, and schedule as input to configure the workflow.
 

Question 36: AWS Lake Formation has a set of suggested personas and IAM permissions. Which is a required persona?

 
A) Data lake administrator
B) Data engineer
C) Data analyst
D) Business analyst
 
ANSWER36:
A
 
 
Notes/Hint36:
Data lake administrator (Required)
A user who can register Amazon S3 locations, access the Data Catalog, create databases, create and run workflows, grant Lake Formation permissions to other users, and view AWS CloudTrail logs. The user has fewer IAM permissions than the IAM administrator but enough to administer the data lake. Cannot add other data lake administrators.
 
Data engineer (Optional) A user who can create and run crawlers and workflows and grant Lake Formation permissions on the Data Catalog tables that the crawlers and workflows create.
 
Data analyst (Optional) A user who can run queries against the data lake using, for example, Amazon Athena. The user has only enough permissions to run queries.
 
Business analyst (Optional) Generally, an end-user, application-specific persona that would query data and resources using a workflow role.
 
 

Question 37: Which three types of blueprints does AWS Lake Formation support? (SELECT THREE)

 
A) ETL code creation and job monitoring
B) Database snapshot
C) Incremental database
D) Log file sources (AWS CloudTrail, ELB/ALB logs)
 
ANSWER37:
B C and D
 
 
Notes/Hint37:
AWS Lake Formation blueprints simplify and automate creating workflows. Lake Formation provides the following types of blueprints:
• Database snapshot – Loads or reloads data from all tables into the data lake from a JDBC source. You can exclude some data from the source based on an exclude pattern.
 
• Incremental database – Loads only new data into the data lake from a JDBC source, based on previously set bookmarks. You specify the individual tables in the JDBC source database to include. For each table, you choose the bookmark columns and bookmark sort order to keep track of data that has previously been loaded. The first time that you run an incremental database blueprint against a set of tables, the workflow loads all data from the tables and sets bookmarks for the next incremental database blueprint run. You can therefore use an incremental database blueprint instead of the database snapshot blueprint to load all data, provided that you specify each table in the data source as a parameter.
 
• Log file – Bulk loads data from log file sources, including AWS CloudTrail, Elastic Load Balancing logs, and Application Load Balancer logs.
 

Question 38: Which one of the following is the best description of the capabilities of Amazon QuickSight?

 
A) Automated configuration service built on AWS Glue
B) Fast, serverless, business intelligence service
C) Fast, simple, cost-effective data warehousing
D) Simple, scalable, and serverless data integration
 
ANSWER38:
B
 
 
Notes/Hint38:
B. Scalable, serverless business intelligence service is the correct choice.
See the brief descriptions of several AWS Analytics services below:
AWS Lake Formation Build a secure data lake in days using Glue blueprints and workflows
 
Amazon QuickSight Scalable, serverless, embeddable, ML-powered BI Service built for the cloud
 
Amazon Redshift Analyze all of your data with the fastest and most widely used cloud data warehouse
 
AWS Glue Simple, scalable, and serverless data integration
 

Question 39: Which benefits are provided by Amazon Redshift? (Select TWO)

A) Analyze Data stored in your data lake

B) Maintain performance at scale

C) Focus effort on Data warehouse administration

D) Store all the data to meet analytics need

E) Amazon Redshift includes enterprise-level security and compliance features.

 
ANSWER39:
A and B
 
 
Notes/Hint39:
A is correct – With Amazon Redshift, you can analyze all your data, including exabytes of data stored in your Amazon S3 data lake.
B is correct – Amazon Redshift provides consistent performance at scale.
 
• C is incorrect – Amazon Redshift is a fully managed data warehouse solution. It includes automations to reduce the administrative overhead traditionally associated with data warehouses. When using Amazon Redshift, you can focus your development effort on strategic data analytics solutions.
 
• D is incorrect – With Amazon Redshift features—such as Amazon Redshift Spectrum, materialized views, and federated query—you can analyze data where it is stored in your data lake or AWS databases. This capability provides flexibility to meet new analytics requirements without the cost, time, or complexity of moving large volumes of data between solutions.
 
• E is incorrect in this context – although Amazon Redshift does include enterprise-level security and compliance features, that is not one of the two benefits being tested here.
 
 

Djamga Data Sciences Big Data – Data Analytics Youtube Playlist

2- Prepare for Your AWS Certification Exam

3- LinuxAcademy

Big Data – Data Analytics Jobs:

 

Big Data – Data Analytics – Data Sciences Latest News:

DATA ANALYTICS Q&A:

 
 


Clever Questions, Answers, Resources about:

  • Data Sciences
  • Big Data
  • Data Analytics
  • Data Sciences
  • Databases
  • Data Streams
  • Large DataSets

What Is a Data Scientist?

Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician. – Josh Wills

Data scientists apply sophisticated quantitative and computer science skills to both structure and analyze massive stores or continuous streams of unstructured data, with the intent to derive insights and prescribe action. – Burtch Works Data Science Salary Survey, May 2018

More than anything, what data scientists do is make discoveries while swimming in data… In a competitive landscape where challenges keep changing and data never stop flowing, data scientists help decision makers shift from ad hoc analysis to an ongoing conversation with data. – Data Scientist: The Sexiest Job of the 21st Century, Harvard Business Review

Do All Data Scientists Hold Graduate Degrees?

Data scientists are highly educated. With exceedingly rare exception, every data scientist holds at least an undergraduate degree. 91% of data scientists in 2018 held advanced degrees. The remaining 9% all held undergraduate degrees. Furthermore,

  • 25% of data scientists hold a degree in statistics or mathematics,
  • 20% have a computer science degree,
  • an additional 20% hold a degree in the natural sciences, and
  • 18% hold an engineering degree.

The remaining 17% of surveyed data scientists held degrees in business, social science, or economics.

How Are Data Scientists Different From Data Analysts?

Broadly speaking, the roles differ in scope: data analysts build reports with narrow, well-defined KPIs, while data scientists often work on broader business problems without clear solutions. Data scientists live on the edge of the known and unknown.

We’ll leave you with a concrete example: A data analyst cares about profit margins. A data scientist at the same company cares about market share.

How Is Data Science Used in Medicine?

Data science in healthcare best translates to biostatistics. It can be quite different from data science in other industries as it usually focuses on small samples with several confounding variables.

How Is Data Science Used in Manufacturing?

Data science in manufacturing is vast; it includes everything from supply chain optimization to the assembly line.

What are data scientists paid?

Most people are attracted to data science for the salary, and it’s true that data scientists garner high salaries compared to their peers. The May 2018 edition of the Burtch Works Data Science Salary Survey reports annual salary statistics that support this.

Note that these figures do not reflect total compensation, which often includes standard benefits and may include company ownership at high levels.

How will data science evolve in the next 5 years?

Will AI replace data scientists?

What is the workday like for a data scientist?

It’s common for data scientists across the US to work 40 hours weekly. While company culture does dictate different levels of work life balance, it’s rare to see data scientists who work more than they want. That’s the virtue of being an expensive resource in a competitive job market.

How do I become a Data Scientist?

The roadmap given to aspiring data scientists can be boiled down to three steps:

  1. Earning an undergraduate and/or advanced degree in computer science, statistics, or mathematics,
  2. Building their portfolio of SQL, Python, and R skills, and
  3. Getting related work experience through technical internships.

All three require a significant time and financial commitment.

There used to be a saying around data science: the road into data science starts with two years of university-level math.

What Should I Learn? What Order Do I Learn Them?

This answer assumes your academic background ends with a HS diploma in the US.

  1. Python
  2. Differential Calculus
  3. Integral Calculus
  4. Multivariable Calculus
  5. Linear Algebra
  6. Probability
  7. Statistics

Some follow up questions and answers:

Why Python first?

  • Python is a general purpose language. R is used primarily by statisticians. In the likely scenario that you decide data science requires too much time, effort, and money, Python will be more valuable than your R skills. It’s preparing you to fail, sure, but in the same way a savings account is preparing you to fail.

When do I start working with data?

  • You’ll start working with data when you’ve learned enough Python to do so. Whether you’ll have the tools to have any fun is a much more open-ended question.

How long will this take me?

  • Assuming self-study and average intelligence, 3-5 years from start to finish.

How Do I Learn Python?

If you don’t know the first thing about programming, start with MIT’s course in the curated list.

The curated resources below will get you started with the standard tools for data analysis in Python:

Curated Threads & Resources

  1. MIT’s Introduction to Computer Science and Programming in Python A free, archived course taught at MIT in the fall 2016 semester.
  2. Data Scientist with Python Career Track | DataCamp The first courses are free, but unlimited access costs $29/month. Users usually report a positive experience, and it’s one of the better hands-on ways to learn Python.
  3. Sentdex’s (Harrison Kinsley) Youtube Channel Related to Python Programming Tutorials
  4. /r/learnpython is an active sub and very useful for learning the basics.

How Do I Learn R?

If you don’t know the first thing about programming, start with R for Data Science in the curated list.

The curated resources below will get you started with the standard tools for data analysis in R:

Curated Threads & Resources

  1. R for Data Science by Hadley Wickham – A free ebook full of succinct code examples. Terrific for learning tidyverse syntax. Folks with some math background may prefer the free alternative, Introduction to Statistical Learning.
  2. Data Scientist with R Career Track | DataCamp The first courses are free, but unlimited access costs $29/month. Users usually report a positive experience, and it’s one of the few hands-on ways to learn R.
  3. R Inferno Learners with a CS background will appreciate this free handbook explaining how and why R behaves the way that it does.

How Do I Learn SQL?

Prioritize the basics of SQL. i.e. when to use functions like POW, SUM, RANK; the computational complexity of the different kinds of joins.

Concepts like relational algebra, when to use clustered/non-clustered indexes, etc. are useful, but (almost) never come up in interviews.

You absolutely do not need to understand administrative concepts like managing permissions.

Finally, there are numerous query engines and therefore numerous dialects of SQL. Use whichever dialect is supported in your chosen resource. There’s not much difference between them, so it’s easy to learn another dialect after you’ve learned one.

Curated Threads & Resources

  1. The SQL Tutorial for Data Analysis | Mode.com
  2. Introduction to Databases A Free MOOC supported by Stanford University.
  3. SQL Queries for Mere Mortals – A $30 book highly recommended by /u/karmanujan

How Do I Learn Calculus?

Fortunately (or unfortunately), calculus is the lament of many students, and so resources for it are plentiful. Khan Academy mimics lectures very well, and Paul’s Online Math Notes are a terrific reference full of practice problems and solutions.

Calculus, however, is not just calculus. For those unfamiliar with US terminology,

  • Calculus I is differential calculus.
  • Calculus II is integral calculus.
  • Calculus III is multivariable calculus.
  • Calculus IV is differential equations.

Differential and integral calculus are both necessary for probability and statistics, and should be completed first.

Multivariable calculus can be paired with linear algebra, but is also required.

Differential equations is where consensus falls apart. The short of it is, they’re all but necessary for mathematical modeling, but not everyone does mathematical modeling. It’s another tool in the toolbox.

Curated Threads & Resources about Data Science and Data Analytics

How Do I Learn Probability?

Probability is not friendly to beginners. Definitions are rooted in higher mathematics, notation varies from source to source, and solutions are frequently unintuitive. Probability may present the biggest barrier to entry in data science.

It’s best to pick a single primary source and a community for help. If you can spend the money, register for a university or community college course and attend in person.

The best free resource is MIT’s 18.05 Introduction to Probability and Statistics (Spring 2014). Leverage /r/learnmath, /r/learnmachinelearning, and /r/AskStatistics when you get inevitably stuck.

How Do I Learn Linear Algebra?

Curated Threads & Resources https://www.youtube.com/watch?v=fNk_zzaMoSs&index=1&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

What does the typical data science interview process look like?

For general advice, Mastering the DS Interview Loop is a terrific article. The community discussed the article here.

Briefly summarized, most companies follow a five stage process:

  1. Coding Challenge: Most common at software companies and roles contributing to a digital product.
  2. HR Screen
  3. Technical Screen: Often in the form of a project. Less frequently, it takes the form of a whiteboarding session at the onsite.
  4. Onsite: Usually the project from the technical screen is presented here, followed by a meeting with the director overseeing the team you’ll join.
  5. Negotiation & Offer

Preparation:

  1. Practice questions on Leetcode which has both SQL and traditional data structures/algorithm questions
  2. Review Brilliant for math and statistics questions.
  3. SQL Zoo and Mode Analytics both offer various SQL exercises you can solve in your browser.

Tips:

  1. Before you start coding, read through all the questions. This allows your unconscious mind to start working on problems in the background.
  2. Start with the hardest problem first, when you hit a snag, move to the simpler problem before returning to the harder one.
  3. Focus on passing all the test cases first, then worry about improving complexity and readability.
  4. If you’re done and have a few minutes left, go get a drink and try to clear your head. Read through your solutions one last time, then submit.
  5. It’s okay to not finish a coding challenge. Sometimes companies will create unreasonably tedious coding challenges with one-week time limits that require 5–10 hours to complete. Unless you’re desperate, you can always walk away and spend your time preparing for the next interview.

Remember, interviewing is a skill that can be learned, just like anything else. Hopefully, this article has given you some insight on what to expect in a data science interview loop.

The process also isn’t perfect and there will be times that you fail to impress an interviewer because you don’t possess some obscure piece of knowledge. However, with repeated persistence and adequate preparation, you’ll be able to land a data science job in no time!

What does the Airbnb data science interview process look like? [Coming soon]

What does the Facebook data science interview process look like? [Coming soon]

What does the Uber data science interview process look like? [Coming soon]

What does the Microsoft data science interview process look like? [Coming soon]

What does the Google data science interview process look like? [Coming soon]

What does the Netflix data science interview process look like? [Coming soon]

What does the Apple data science interview process look like? [Coming soon]

Question: How is SQL used in real data science jobs?

Real-life enterprise databases are orders of magnitude more complex than the “customers, products, orders” examples used as teaching tools. SQL as a language is actually, IMO, a relatively simple language (the DB administration component can get complex, but mostly data scientists aren’t doing that anyway). SQL is an incredibly important skill, though, for any DS role.

I think when people emphasize SQL, what they are really talking about is the ability to write queries that interrogate the data and discover the nuances behind how it is collected and/or manipulated by an application before it is written to the DB. For example, is the employee’s phone number their current phone number, or does the database store a history of all previous phone numbers? These are critically important questions for understanding the nature of your data, and they don’t necessarily deal with statistics. The level of syntax required to do this is not that sophisticated; you can get pretty damn far with knowledge of all the joins, group by/analytical functions, filtering, and nesting queries.

In many cases, the data is too large to just SELECT * and dump into a CSV to load into pandas, so you start with SQL against the source. In my mind it’s more important for “SQL skills” to mean knowing how to generate hypotheses (that will build up to answering your business question) that can be investigated via a query than to be a master of SQL’s syntax. Just my two cents though!

AWS DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Data Visualization example: 12000 Years of Human Population Dynamic

[OC] 12,000 years of human population dynamics (r/dataisbeautiful)

Human population density estimates based on the Hyde 3.2 model.

Capitol insurrection arrests per million people by state

[OC] Capitol insurrection arrests per million people by state (r/dataisbeautiful)

Data Source: Made in Google Sheets using data from this USA Today article (for the number of arrests by arrestee’s home state) and this spreadsheet of the results of the 2020 Census (for the population of each state and DC in 2020, which was used as the denominator in calculating arrests/million people).

AWS Data analytics DAS-C01 Exam Prep

For more information about analytics architecture, visit the AWS Big Data Blog: AWS serverless data analytics pipeline reference architecture here

Basic Data Lake Architecture

Data Analytics Architecture on AWS

Data Analytics Architecture on AWS

Data Analytics Process

Data Analytics Process

AWS Data Analytics Specialty Certification DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Data Lake Storage:

Data Lake Storage on AWS – S3

 

AWS DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Event Driven Data Analytics Workflow on AWS

Event Driven Data Analytics Workflow on AWS

What is a Data Lake?

AWS Data lake

What is a Data Warehouse?

Data Warehouse

What are benefits of a data warehouse?

• Informed decision making

• Consolidated data from many sources

• Historical data analysis

• Data quality, consistency, and accuracy

• Separation of analytics processing from transactional databases

AWS Data Analytics Specialty Certification DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Data Lake vs Data Warehouse – Comparison

Data Lake vs Data Warehouse comparison

A data warehouse is specially designed for data analytics, which identifies relationships and trends across large amounts of data. A database is used to capture and store data, such as the details of a transaction. Unlike a data warehouse, a data lake is a centralized repository for structured, semi-structured, and unstructured data. A data warehouse organizes data in a tabular format (or schema) that enables SQL queries on the data. But not all applications require data to be in tabular format. Some applications can access data in the data lake even if it is “semi-structured” or unstructured. These include big data analytics, full-text search, and machine learning.

An AWS data lake only has a storage charge for the data; no servers are necessary for the data to be stored and accessed, and in the case of Amazon Athena there are also no additional charges for processing. Data warehouses enable fast queries of structured data from transactional systems for batch reports, business intelligence, and visualization use cases. A data lake stores data without regard to its structure. Data scientists, data analysts, and business analysts all use the data lake, which supports use cases such as machine learning, predictive analytics, and data discovery and profiling.

Transactional Data Ingestion

Transactional Data Ingestion on AWS
Transactional Data Ingestion on AWS

AWS Data Analytics Specialty Certification DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Streaming Data Ingestion on AWS
Streaming Data Ingestion on AWS

Structured Query Language (SQL)

SQL Structured Query Language
SQL Structured Query Language

Data definition language (DDL) refers to the subset of SQL commands that define data structures and objects such as databases, tables, and views. DDL commands include the following:

• CREATE: used to create a new object.

• DROP: used to delete an object.

• ALTER: used to modify an object.

• RENAME: used to rename an object.

• TRUNCATE: used to remove all rows from a table without deleting the table itself.

AWS Data Analytics Specialty Certification DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Data manipulation language (DML) refers to the subset of SQL commands that are used to work with data. DML commands include the following:

• SELECT: used to request records from one or more tables.

• INSERT: used to insert one or more records into a table.

• UPDATE: used to modify the data of one or more records in a table.

• DELETE: used to delete one or more records from a table.

• EXPLAIN: used to analyze and display the expected execution plan of a SQL statement.

• LOCK: used to lock a table from write operations (INSERT, UPDATE, DELETE) and prevent concurrent operations from conflicting with one another.

Data control language (DCL) refers to the subset of SQL commands that are used to configure permissions to objects. DCL commands include:

• GRANT: used to grant access and permissions to a database or object in a database, such as a schema or table.

• REVOKE: used to remove access and permissions from a database or objects in a database.
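A small runnable sketch of DDL and DML in practice, using an in-memory SQLite database purely for illustration; SQLite does not implement DCL statements such as GRANT and REVOKE, which require a multi-user engine like Amazon Redshift or PostgreSQL.

```python
import sqlite3

# In-memory SQLite database used purely to illustrate DDL and DML commands.
conn = sqlite3.connect(":memory:")

# DDL: define the structure.
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, total REAL)")

# DML: work with the data.
conn.execute("INSERT INTO orders (order_id, total) VALUES (?, ?)", (1, 99.50))
conn.execute("UPDATE orders SET total = 109.50 WHERE order_id = 1")
for row in conn.execute("SELECT order_id, total FROM orders"):
    print(row)

# DDL again: remove the object.
conn.execute("DROP TABLE orders")
conn.close()
```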

AWS Data Analytics Specialty Certification DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

Comparison of OLTP and OLAP

OLTP vs OLAP

AWS Data Analytics Specialty Certification DAS-C01 Exam Prep on iOS

AWS DAS-C01 Exam Prep on android

AWS DAS-C01 Exam Prep on Windows

What is Amazon Macie?

Amazon Macie

Businesses are responsible for identifying and limiting disclosure of sensitive data such as personally identifiable information (PII) or proprietary information. Identifying and masking sensitive information is time consuming, and it becomes more complex in data lakes with varied data sources and formats and broad user access to published datasets.

Amazon Macie is a fully managed data security and privacy service that uses machine learning and pattern matching to discover sensitive data in AWS. Macie includes a set of managed data identifiers which automatically detect common types of sensitive data. Examples of managed data identifiers include keywords, credentials, financial information, health information, and PII. You can also configure custom data identifiers using keywords or regular expressions to highlight organizational proprietary data, intellectual property, and other specific scenarios. You can develop security controls that operate at scale to monitor and remediate risk automatically when Macie detects sensitive data. You can use AWS Lambda functions to automatically turn on encryption for an Amazon S3 bucket where Macie detects sensitive data. Or automatically tag datasets containing sensitive data, for inclusion in orchestrated data transformations or audit reports.

Amazon Macie can be integrated into the data ingestion and processing steps of your data pipeline. This approach avoids inadvertent disclosures in published data sets by detecting and addressing the sensitive data as it is ingested and processed. Building the automated detection and processing of sensitive data into your ETL pipelines simplifies and standardizes handling of sensitive data at scale.
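As a rough sketch of wiring Macie into an ingestion step, the snippet below uses boto3 to start a one-time classification job against a landing-zone bucket. The account ID, bucket name, and job name are placeholders, and the exact parameters should be checked against the current Macie2 API.

import uuid
import boto3

macie = boto3.client("macie2")

# Hypothetical values for illustration only.
account_id = "111111111111"
landing_bucket = "example-data-lake-landing"

# Start a one-time sensitive-data discovery job over the landing-zone bucket.
response = macie.create_classification_job(
    clientToken=str(uuid.uuid4()),          # idempotency token
    jobType="ONE_TIME",
    name="landing-zone-pii-scan",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": account_id, "buckets": [landing_bucket]}
        ]
    },
)
print("Started Macie classification job:", response["jobId"])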


What is AWS Glue DataBrew?

AWS Glue DataBrew

AWS Glue DataBrew is a visual data preparation tool that simplifies cleaning and normalizing datasets in preparation for use in analytics and machine learning. With DataBrew, you can:

• Profile data quality, identifying patterns and automatically detecting anomalies.

• Clean and normalize data using over 250 pre-built transformations, without writing code.

• Visually map the lineage of your data to understand data sources and transformation history.

• Save data cleaning and normalization workflows for automatic application to new data.

Data processed in AWS Glue DataBrew is immediately available for use in analytics and machine learning projects.
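As a small illustration, the sketch below uses boto3 to start an existing DataBrew job (one you have already saved from the visual console) and check its status. The job name is a placeholder, and the calls should be verified against the current DataBrew API.

import boto3

databrew = boto3.client("databrew")

# "clean-orders-recipe-job" is a hypothetical job created earlier in the DataBrew console.
run = databrew.start_job_run(Name="clean-orders-recipe-job")
run_id = run["RunId"]

# Check the state of this job run (it will move from RUNNING to SUCCEEDED or FAILED).
status = databrew.describe_job_run(Name="clean-orders-recipe-job", RunId=run_id)
print("DataBrew job run", run_id, "state:", status["State"])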

Learn more about the built-in transformations available in AWS Glue DataBrew in the Recipe actions reference: https://docs.aws.amazon.com/databrew/latest/dg/recipe-actions-reference.html


What is AWS Glue?

AWS Glue

AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores and data streams. AWS Glue consists of a central metadata repository known as the AWS Glue Data Catalog, an ETL engine that automatically generates Python or Scala code, and a flexible scheduler that handles dependency resolution, job monitoring, and retries. AWS Glue can run your ETL jobs as new data arrives; the AWS Glue ETL section below shows an example.

AWS Glue is serverless, so there’s no infrastructure to set up or manage.


AWS Glue Data Catalog

The AWS Glue Data Catalog provides a uniform repository where disparate systems can store and find metadata to keep track of data in data silos, and use that metadata to query and transform the data. Once the data is cataloged, it is immediately available for search and query using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.

You can use AWS Identity and Access Management (IAM) policies to control access to the data sources managed by the AWS Glue Data Catalog. The Data Catalog also provides comprehensive audit and governance capabilities, with schema-change tracking and data access controls.

AWS Glue crawler

AWS Glue crawlers can scan data in all kinds of repositories, classify it, extract schema information from it, and store the metadata automatically in the AWS Glue Data Catalog.
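A hedged sketch of the crawler flow with boto3: create a crawler over an S3 path, start it, and, once it has finished, list the tables it added to the Data Catalog. The role ARN, bucket path, and database name are placeholders.

import boto3

glue = boto3.client("glue")

# Hypothetical names for illustration.
crawler_name = "raw-zone-crawler"
glue_role_arn = "arn:aws:iam::111111111111:role/ExampleGlueServiceRole"
database_name = "raw_zone_db"

# Create a crawler that scans the raw zone and writes metadata to the Data Catalog.
glue.create_crawler(
    Name=crawler_name,
    Role=glue_role_arn,
    DatabaseName=database_name,
    Targets={"S3Targets": [{"Path": "s3://example-data-lake/raw/"}]},
)
glue.start_crawler(Name=crawler_name)

# After the crawler completes, the catalog can be queried for the tables it discovered.
tables = glue.get_tables(DatabaseName=database_name)
for table in tables["TableList"]:
    print(table["Name"], table["StorageDescriptor"]["Location"])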

AWS Glue ETL

AWS Glue can run your ETL jobs as new data arrives. For example, you can use an AWS Lambda function to trigger your ETL jobs to run as soon as new data becomes available in Amazon S3. You can also register this new dataset in the AWS Glue Data Catalog as part of your ETL jobs.
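A minimal sketch of that Lambda pattern, assuming a Glue job named "raw-to-curated" already exists and the function is subscribed to S3 object-created events. The job name and the custom argument key are placeholders.

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start the (hypothetical) 'raw-to-curated' Glue job for each new S3 object."""
    for record in event.get("Records", []):
        # Standard fields of an S3 event notification record.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        run = glue.start_job_run(
            JobName="raw-to-curated",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},  # custom job argument
        )
        print("Started Glue job run", run["JobRunId"], "for", key)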

AWS Glue Studio

AWS Glue Studio provides a graphical interface to create, run, and monitor extract, transform, and load (ETL) jobs in AWS Glue. You can visually compose data transformation workflows and seamlessly run them on AWS Glue’s Apache Spark-based serverless ETL engine. AWS Glue Studio also offers tools to monitor ETL workflows and validate that they are operating as intended.


What is Amazon Athena?

Amazon Athena: Serverless Query Engine

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don’t even need to load your data into Athena; it works directly with data stored in S3. To get started, just log into the Amazon Athena console, define your schema, and start querying. Athena uses Presto with full standard SQL support. It works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet, and Avro. While Athena is ideal for quick, ad hoc querying, it can also handle complex analysis, including large joins, window functions, and arrays.

Amazon Athena helps you analyze data stored in Amazon S3. You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. It can process unstructured, semi-structured, and structured datasets. Examples include CSV, JSON, Avro or columnar data formats such as Apache Parquet and Apache ORC. Athena integrates with Amazon QuickSight for easy visualization. You can also use Athena to generate reports or to explore data with business intelligence tools or SQL clients, connected via an ODBC or JDBC driver.
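A short sketch of an ad-hoc Athena query with boto3. The database, table, and results bucket are placeholders, and a production version would use a waiter or back-off rather than a fixed one-second poll.

import time
import boto3

athena = boto3.client("athena")

# Hypothetical database/table created by a Glue crawler, plus a results location.
query = "SELECT status, COUNT(*) AS orders FROM curated_db.orders GROUP BY status"
results_location = "s3://example-athena-results/"

execution = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": results_location},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])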

The tables and databases that you work with in Athena to run queries are based on metadata. Metadata is data about the underlying data in your dataset. How that metadata describes your dataset is called the schema. For example, a table name, the column names in the table, and the data type of each column are schema, saved as metadata, that describe an underlying dataset. In Athena, we call a system for organizing metadata a data catalog or a metastore. The combination of a dataset and the data catalog that describes it is called a data source.

The relationship of metadata to an underlying dataset depends on the type of data source that you work with. Relational data sources like MySQL, PostgreSQL, and SQL Server tightly integrate the metadata with the dataset. In these systems, the metadata is most often written when the data is written. Other data sources, like those built using Hive, allow you to define metadata on-the-fly when you read the dataset. The dataset can be in a variety of formats; for example, CSV, JSON, Parquet, or Avro.


What is AWS Lake Formation?

What is AWS Lake Formation?

Lake Formation is a fully managed service that enables data engineers, security officers, and data analysts to build, secure, manage, and use your data lake.

To build your data lake in AWS Lake Formation, you must register an Amazon S3 location as a data lake. The Lake Formation service must have permission to write to the AWS Glue Data Catalog and to Amazon S3 locations in the data lake.

Next, identify the data sources to be ingested. AWS Lake Formation can move data into your data lake from existing Amazon S3 data stores. Lake Formation can collect and organize datasets, such as logs from AWS CloudTrail, Amazon CloudFront, detailed billing reports, or Elastic Load Balancing. You can ingest bulk or incremental datasets from relational, NoSQL, or non-relational databases. Lake Formation can ingest data from databases running in Amazon RDS or hosted in Amazon EC2. You can also ingest data from on-premises databases using Java Database Connectivity (JDBC) connectors. You can use custom AWS Glue jobs to load data from other databases or to ingest streaming data using Amazon Kinesis or Amazon DynamoDB.

AWS Lake Formation manages AWS Glue crawlers, AWS Glue ETL jobs, the AWS Glue Data Catalog, security settings, and access control:

• Lake Formation is an automated build environment based on AWS Glue.

• Lake Formation coordinates AWS Glue crawlers to identify datasets within the specified data stores and collect metadata for each dataset.

• Lake Formation can perform transformations on your data, such as rewriting and organizing data into a consistent, analytics-friendly format. Lake Formation creates transformation templates and schedules AWS Glue jobs to prepare and optimize your data for analytics. Lake Formation also helps clean your data using FindMatches, an ML-based deduplication transform. AWS Glue jobs encapsulate scripts, such as ETL scripts, which connect to source data, process it, and write it out to a data target. AWS Glue triggers can start jobs based on a schedule or event, or on demand. AWS Glue workflows orchestrate AWS ETL jobs, crawlers, and triggers. You can define a workflow manually or use a blueprint based on commonly ingested data source types.

• The AWS Glue Data Catalog within the data lake persistently stores the metadata from raw and processed datasets. Metadata about data sources and targets is in the form of databases and tables. Tables store information about the underlying data, including schema information, partition information, and data location. Databases are collections of tables. Each AWS account has one data catalog per AWS Region.

• Lake Formation provides centralized access controls for your data lake, including security policy-based rules for users and applications by role. You can authenticate the users and roles using AWS IAM. Once the rules are defined, Lake Formation enforces them with table- and column-level granularity for users of Amazon Redshift Spectrum and Amazon Athena. Rules are enforced at the table level in AWS Glue, which is normally accessed by administrators. (A minimal grant example follows this list.)

• Lake Formation leverages the encryption capabilities of Amazon S3 for data in the data lake. This approach provides automatic server-side encryption with keys managed by the AWS Key Management Service (KMS). S3 encrypts data in transit when replicating across Regions. You can use separate accounts for source and destination Regions to further protect your data.
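As the grant example referenced above, the snippet below uses boto3 to give an assumed analyst role SELECT access to one catalog table. The role ARN, database, and table names are placeholders.

import boto3

lakeformation = boto3.client("lakeformation")

# Hypothetical analyst role and catalog table.
analyst_role_arn = "arn:aws:iam::111111111111:role/ExampleAnalystRole"

lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": analyst_role_arn},
    Resource={
        "Table": {
            "DatabaseName": "curated_db",
            "Name": "orders",
        }
    },
    Permissions=["SELECT"],
)
print("Granted SELECT on curated_db.orders to", analyst_role_arn)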


What is Amazon QuickSight?

Amazon QuickSight is a cloud-scale business intelligence (BI) service. In a single data dashboard, QuickSight gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. QuickSight delivers fast and responsive query performance by using a robust in-memory engine (SPICE).

Scale from tens to tens of thousands of users

Amazon QuickSight has a serverless architecture that automatically scales to tens of thousands of users without the need to set up, configure, or manage your own servers.

Embed BI dashboards in your applications

With QuickSight, you can quickly embed interactive dashboards into your applications, websites, and portals.

Access deeper insights with Machine Learning

QuickSight leverages the proven machine learning (ML) capabilities of AWS. BI teams can perform advanced analytics without prior data science experience.

Ask questions of your data, receive answers

With QuickSight, you can quickly get answers to business questions asked in natural language with QuickSight’s new ML-powered natural language query capability, Q.
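As a hedged sketch of the embedding feature described above, the snippet below requests an embed URL for a registered QuickSight user with boto3. The account ID, user ARN, and dashboard ID are placeholders, and the call should be verified against the current QuickSight API.

import boto3

quicksight = boto3.client("quicksight")

# Hypothetical identifiers for illustration.
account_id = "111111111111"
user_arn = "arn:aws:quicksight:us-east-1:111111111111:user/default/example-user"
dashboard_id = "11111111-2222-3333-4444-555555555555"

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId=account_id,
    UserArn=user_arn,
    SessionLifetimeInMinutes=60,
    ExperienceConfiguration={"Dashboard": {"InitialDashboardId": dashboard_id}},
)

# The returned URL can be placed in an iframe in your application or portal.
print(response["EmbedUrl"])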


What is SPICE?

SPICE: the Super-fast, Parallel, In-memory Calculation Engine in QuickSight

SPICE is engineered to rapidly perform advanced calculations and serve data. The storage and processing capacity available in SPICE speeds up the analytical queries that you run against your imported data. By using SPICE, you save time because you don’t need to retrieve the data every time you change an analysis or update a visual.

When you import data into a dataset rather than using a direct SQL query, it becomes SPICE data because of how it’s stored. In Enterprise edition, data stored in SPICE is encrypted at rest.

When you create or edit a dataset, you choose to use either SPICE or a direct query, unless the dataset contains uploaded files. Importing (also called ingesting) your data into SPICE can save time and money (a minimal refresh sketch follows this list):

• Your analytical queries process faster.

• You don’t need to wait for a direct query to process.

• Data stored in SPICE can be reused multiple times without incurring additional costs. If you use a data source that charges per query, you’re charged for querying the data when you first create the dataset and later when you refresh the dataset.
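As the refresh sketch referenced above, the snippet below triggers a SPICE refresh (an ingestion) for an existing dataset with boto3. The account ID and dataset ID are placeholders.

import uuid
import boto3

quicksight = boto3.client("quicksight")

# Hypothetical identifiers for illustration.
account_id = "111111111111"
dataset_id = "example-spice-dataset"

response = quicksight.create_ingestion(
    AwsAccountId=account_id,
    DataSetId=dataset_id,
    IngestionId=str(uuid.uuid4()),   # unique ID for this refresh
)
print("Started SPICE ingestion:", response["IngestionId"], response["IngestionStatus"])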


Serverless data lake reference architecture:

Serverless Data lake Reference Architecture

You can use AWS services as building blocks to build serverless data lakes and analytics pipelines, applying best practices for ingesting, storing, transforming, and analyzing structured and unstructured data at scale, without needing to manage any storage or compute infrastructure. A decoupled, component-driven architecture allows you to start small and scale out gradually. You can quickly add new purpose-built components to one of six architecture layers to address new requirements and data sources.

This data lake-centric architecture can support business intelligence (BI) dashboarding, interactive SQL queries, big data processing, predictive analytics, and machine learning use cases.

• The ingestion layer includes protocols to support ingestion of structured, unstructured, or streaming data from a variety of sources.

• The storage layer provides durable, scalable, secure, and cost-effective storage of datasets across ingestion and processing.

• The landing zone stores data as ingested.

• Data engineers run initial quality checks to validate and cleanse data in the landing zone, producing the raw dataset.

• The processing layer creates curated datasets by further cleansing, normalizing, standardizing, and enriching data from the raw zone. The curated dataset is typically stored in formats that support performant and cost-effective access by the consumption layer.

• The catalog layer stores business and technical metadata about the datasets hosted in the storage layer.

• The consumption layer contains functionality for Search, Analytics, and Visualization. It integrates with the data lake storage, cataloging, and security layers. This integration supports analysis methods such as SQL, batch analytics, BI dashboards, reporting, and ML.

• The security and monitoring layer protects data within the storage layer and other resources in the data lake. This layer includes access control, encryption, network protection, usage monitoring, and auditing.

You can learn more about this reference architecture at AWS Big Data Blog: AWS serverless data analytics pipeline reference architecture: https://aws.amazon.com/blogs/big-data/aws-serverless-data-analytics-pipeline-reference-architecture/

What are Data Lakes Best Practices?

What are Data Lakes Best Practices?

The main challenge with a data lake architecture is that raw data is stored with no oversight of the contents. To make the data usable, you must have defined mechanisms to catalog and secure the data. Without these mechanisms, data cannot be found or trusted, resulting in a “data swamp.” Meeting the needs of diverse stakeholders requires data lakes to have governance, semantic consistency, and access controls.

The Analytics Lens for the AWS Well-Architected Framework covers common analytics applications scenarios, including data lakes. It identifies key elements to help you architect your data lake according to best practices, including the following configuration notes:

• Decide on a location for data lake ingestion (that is, an S3 bucket). Select a frequency and isolation mechanism that meets your business needs.

• For Tier 2 data, partition the data with keys that align to common query filters. This enables pruning by common analytics tools that work on raw data files and increases performance.

• Choose optimal file sizes to reduce Amazon S3 round trips during compute environment ingestion. Recommended: 512 MB – 1 GB in a columnar format (ORC/Parquet) per partition.

• Perform frequent scheduled compactions that align to the optimal file sizes noted previously. For example, compact into daily partitions if hourly files are too small (see the compaction sketch after this list).

• For data with frequent updates or deletes (that is, mutable data), do one of the following:

o Temporarily store the replicated data in a database such as Amazon Redshift, Apache Hive, or Amazon RDS. Once the data becomes static, offload it to Amazon S3.

o Append the data to delta files per partition and compact it on a scheduled basis. You can use AWS Glue or Apache Spark on Amazon EMR for this processing.

• With Tier 2 and Tier 3 data stored in Amazon S3, partition the data using a high-cardinality key. This partitioning is honored by Presto, Apache Hive, and Apache Spark and improves query filter performance on that key.

• Sort data in each partition with a secondary key that aligns to common filter queries. This allows query engines to skip files and get to the requested data faster.

For more information on the Analytics Lens for the AWS Well-Architected Framework, visit https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/data-lake.html
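As the compaction sketch referenced above, a hypothetical Apache Spark job running on Amazon EMR or AWS Glue could rewrite a day's worth of small files into a few larger Parquet files. The S3 paths and partition value are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-compaction").getOrCreate()

# Hypothetical landing and compacted locations for one daily partition.
source_path = "s3://example-data-lake/raw/orders/dt=2023-01-15/"
target_path = "s3://example-data-lake/curated/orders/dt=2023-01-15/"

# Read the many small hourly files for the partition.
df = spark.read.parquet(source_path)

# Rewrite them as a small number of larger files in columnar format,
# aiming toward the 512 MB - 1 GB per-file guidance noted above.
df.coalesce(4).write.mode("overwrite").parquet(target_path)

spark.stop()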

References:

For additional information on AWS data lakes and data analytics architectures, visit:

• AWS Well-Architected: Learn, measure, and build using architectural best practices: https://aws.amazon.com/architecture/well-architected

• AWS Lake Formation: Build a secure data lake in days: https://aws.amazon.com/lake-formation

• Getting Started with Amazon S3: https://aws.amazon.com/s3/getting-started

• Security in AWS Lake Formation: https://docs.aws.amazon.com/lake-formation/latest/dg/security.html 

• AWS Lake Formation: How It Works: https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html

• AWS Lake Formation Dashboard: https://us-west-2.console.aws.amazon.com/lakeformation

• Data Lake Storage on AWS: https://aws.amazon.com/products/storage/data-lake-storage/

• Building Big Data Storage Solutions (Data Lakes) for Maximum Flexibility: https://docs.aws.amazon.com/whitepapers/latest/building-data-lakes/building-data-lake-aws.html

• Data Ingestion Methods: https://docs.aws.amazon.com/whitepapers/latest/building-data-lakes/data-ingestion-methods.html



